This is a major release. There may be changes to the API that break or alter how your code behaves. Please read the upgrade guide.
## New Features

- Added a new VR rig: FOVE Leap Motion. This rig requires a FOVE headset and a Leap Motion sensor.
## Command API

### New Commands

| Command | Description |
| --- | --- |
| `set_vsync_count` | Set the renderer's vsync count. |
| `send_fove` | Send FOVE headset data. |
| `add_ui_cutout` | Add a UI "cutout" image to the scene. This will draw a hole in a base UI element. |
| `set_ui_element_rotation` | Rotate a UI element to a new angle. |
| `allow_fove_headset_movement` | Set whether to send the FOVE headset's position. |
| `allow_fove_headset_rotation` | Set whether to send the FOVE headset's orientation. |
| `refresh_leap_motion_rig` | Refresh a Leap Motion rig in the scene. This must be sent whenever new objects are added to the scene after the rig was created. |
| `show_leap_motion_hands` | Show or hide the Leap Motion hands. |
| `start_fove_calibration` | Start the FOVE headset's internal calibration. |
| `tilt_fove_rig_by` | Tilt (pitch) the FOVE rig by an angle. |
| `scale_object_to` | Scale the object to the given value. This is only useful if you know the model's original scale, which is not always (1, 1, 1); use it together with `send_scales`, because `ObjectScales` output data returns the actual scale of each object. |
| `send_avatar_ids` | Send the IDs of each avatar in the scene. |
| `send_fast_avatars` | Send the position and rotation of each avatar in the scene. This is slightly faster than `SendAvatars`, and `FastAvatars` compresses much better than `Avatars`. However, `FastAvatars` doesn't contain avatar IDs, which makes it harder to use. See: `send_avatar_ids`, which serializes the avatar IDs in the same order as the data in `FastAvatars`. |
| `send_fast_image_sensors` | Send the position and rotation of each avatar's camera in the scene. This is slightly faster than `SendImageSensors`, and `FastImageSensors` compresses much better than `ImageSensors`. However, `FastImageSensors` is missing a lot of information contained in `ImageSensors`, including avatar IDs, making it harder to use. See: `send_avatar_ids`, which serializes the avatar IDs in the same order as the data in `FastImageSensors`. |
| `send_fast_transforms` | Send `FastTransforms` output data. This is slightly faster than `SendTransforms`, and `FastTransforms` compresses much better than `Transforms`. However, `FastTransforms` excludes some data (see the output data documentation) and is harder to use. See: `send_object_ids`, which serializes the object IDs in the same order as the data in `FastTransforms` (see the sketch after this table). |
| `send_object_ids` | Send the IDs of all Rigidbody objects (models and composite sub-objects) in the scene. The object IDs are sorted. |
| `send_post_process` | Send post-processing values. |
| `send_scene` | Send streamed scene metadata. |
| `send_models` | Send the name and URL of each model in the scene. |
| `send_scales` | Send the scales of objects in the scene. The scales are worldspace scales rather than scale factors; to set a scale, send `scale_object_to`, not `scale_object`. |
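
The fast data types are easiest to use when paired with the corresponding ID data. The following is a minimal sketch of that pattern for `send_object_ids` and `send_fast_transforms`. The controller boilerplate follows the standard TDW API; the `frequency` parameter, the four-character identifiers (`"obid"`, `"ftra"`), and the accessors on `ObjectIds` and `FastTransforms` are assumptions — check the output data documentation for the actual names.

```python
from tdw.controller import Controller
from tdw.tdw_utils import TDWUtils
from tdw.output_data import OutputData, ObjectIds, FastTransforms

c = Controller()
object_id = c.get_unique_id()
resp = c.communicate([TDWUtils.create_empty_room(12, 12),
                      c.get_add_object(model_name="iron_box", object_id=object_id),
                      {"$type": "send_object_ids"},
                      {"$type": "send_fast_transforms", "frequency": "once"}])
# First pass: get the sorted object IDs.
object_ids = []
for i in range(len(resp) - 1):
    if OutputData.get_data_type_id(resp[i]) == "obid":   # Assumed identifier.
        object_ids = ObjectIds(resp[i]).get_ids()        # Assumed accessor.
# Second pass: the i-th fast transform corresponds to the i-th sorted object ID.
for i in range(len(resp) - 1):
    if OutputData.get_data_type_id(resp[i]) == "ftra":   # Assumed identifier.
        fast_transforms = FastTransforms(resp[i])
        for j, o_id in enumerate(object_ids):
            print(o_id, fast_transforms.get_position(j)) # Assumed accessor.
c.communicate({"$type": "terminate"})
```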
### Modified Commands

| Command | Modification |
| --- | --- |
| `create_vr_rig` | Added new rig: `fove_leap_motion` |
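
If you create the rig via the Command API rather than via an add-on, the new rig type would be requested roughly as follows. This is a hypothetical sketch: the `rig_type` parameter name is an assumption based on how `create_vr_rig` is used for other rigs.

```python
# Hypothetical sketch; `c` is a Controller and the "rig_type" parameter name is an assumption.
c.communicate({"$type": "create_vr_rig",
               "rig_type": "fove_leap_motion"})
```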
## Output Data

### New Output Data

| Output Data | Description |
| --- | --- |
| `Fove` | FOVE headset and eye tracking data. |
| `Models` | Model names and URLs per object. |
| `ObjectScales` | The spatial scale of each object in the scene. |
| `PostProcess` | Post-processing values. |
| `Scene` | The scene name and URL of the asset bundle. |
| `SystemInfo` | System and hardware information. |
| `AvatarIds` | The IDs of each avatar in the scene. |
| `FastAvatars` | Fast, fixed-length avatar transform data. Use this in conjunction with `AvatarIds`: the order of the IDs matches the order of this data. |
| `FastImageSensors` | Fast, fixed-length avatar image sensor transform data. Use this in conjunction with `AvatarIds`: the order of the IDs matches the order of this data. |
| `FastTransforms` | Fast, fixed-length object transform data. Use this in conjunction with `ObjectIds`: the order of the IDs matches the order of this data. |
| `ObjectIds` | The IDs of all Rigidbody objects (models and composite sub-objects) in the scene. |
### Modified Output Data

| Output Data | Modification |
| --- | --- |
| `Mouse` | This is now fixed-length data, meaning that it compresses better and is slightly faster. |
| `Version` | This is now fixed-length data, meaning that it compresses better and is slightly faster. |
## tdw module

- Added: `FoveLeapMotion` add-on (see the first sketch after this list).
- Added data classes and enums used by `FoveLeapMotion`:
  - `CalibrationMethod`
  - `CalibrationSphere`
  - `CalibrationState`
  - `EyeByEyeCalibration`
  - `EyeTorsionCalibration`
  - `EyeState`
  - `Eye`
- Added: `Autohand`: Abstract base class for all VR rigs that use Autohand.
- Added: `LeapMotion`: Abstract base class for all VR rigs that use Leap Motion.
- Modified the `UI` add-on:
  - The `image` parameter in `add_image()` can now be a PIL image (see the second sketch after this list).
  - Added: `add_cutout()`: Add a UI image that cuts a transparent hole in another UI image.
- Fixed: `replicant.reach_for(target, offhand_follows=True, absolute=False)` doesn't move the offhand to the correct position (see the third sketch after this list). The code responsible for calculating the offhand target has been moved from tdw into the build.
  - Likewise for `wheelchair_replicant.reach_for(target, offhand_follows=True, absolute=False)`.
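
First sketch: attaching the new FOVE Leap Motion rig via the add-on. The module path and the default constructor are assumptions (the real rig may expose parameters for position, calibration, etc.), and running it requires FOVE and Leap Motion hardware.

```python
from tdw.controller import Controller
from tdw.tdw_utils import TDWUtils
from tdw.add_ons.fove_leap_motion import FoveLeapMotion  # Assumed module path.

c = Controller()
# Assumed: the default constructor is sufficient.
fove = FoveLeapMotion()
c.add_ons.append(fove)
# Initialize the scene and the rig. Requires a FOVE headset and a Leap Motion sensor.
c.communicate(TDWUtils.create_empty_room(12, 12))
```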
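Second sketch: passing a PIL image directly to `add_image()`. The `position` and `size` parameters follow the existing `UI` API; treat the exact parameter set, and the presence of a default screen-space canvas, as assumptions.

```python
from PIL import Image
from tdw.controller import Controller
from tdw.add_ons.ui import UI

c = Controller()
ui = UI()
c.add_ons.append(ui)
# As of this release, `image` can be a PIL image instead of a file path or raw bytes.
pil_image = Image.new("RGBA", (128, 128), (255, 0, 0, 255))
image_id = ui.add_image(image=pil_image,
                        position={"x": 0, "y": 0},
                        size={"x": 128, "y": 128})
c.communicate([])
```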
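Third sketch: the kind of call affected by the `reach_for` fix, assuming a Replicant in a simple empty room. The `Arm` enum, `ActionStatus`, and the `target` format follow the existing Replicant API.

```python
from tdw.controller import Controller
from tdw.tdw_utils import TDWUtils
from tdw.add_ons.replicant import Replicant
from tdw.replicant.action_status import ActionStatus
from tdw.replicant.arm import Arm

c = Controller()
replicant = Replicant()
c.add_ons.append(replicant)
c.communicate(TDWUtils.create_empty_room(12, 12))
# The offhand now follows the primary hand to the correct position.
replicant.reach_for(target={"x": 0.3, "y": 0.9, "z": 0.5},
                    arm=Arm.right,
                    offhand_follows=True,
                    absolute=False)
while replicant.action.status == ActionStatus.ongoing:
    c.communicate([])
c.communicate({"$type": "terminate"})
```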
This is an incremental update to v1.12. If you are already using a v1.12 release, you can safely upgrade without having to change any of your code.
## Command API

### New Commands

| Command | Description |
| --- | --- |
| `set_ui_color` | Set the color of a UI image or text. |
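
A sketch of the raw command; the parameter names (`id`, `color`, `canvas_id`) are assumptions based on the other UI commands — check the Command API documentation for the actual names.

```python
# Hypothetical parameter names; `image_id` is the ID returned when the UI element was added.
c.communicate({"$type": "set_ui_color",
               "id": image_id,
               "color": {"r": 0, "g": 1, "b": 0, "a": 1},
               "canvas_id": 0})
```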
## Output Data

### Modified Output Data

| Output Data | Modification |
| --- | --- |
| `Occlusion` | Removed: `get_sensor_name()`. `get_occluded()` now returns an integer between 0 and 255 instead of a value between 0 and 1; the number describes the fraction of the image's pixels that are occupied by objects, as opposed to background meshes. Added: `get_unoccluded()`: an integer between 0 and 255 describing the fraction of the image's pixels that would be occupied by objects if background meshes weren't rendered. To get the value that `get_occluded()` would have returned in prior versions of TDW: `1 - (occ.get_unoccluded() - occ.get_occluded()) / 255` |
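
A sketch of reading the new values and recovering something like the legacy occlusion value with the expression above. `get_occluded()` and `get_unoccluded()` are from this release; the `send_occlusion` command, its `frequency` parameter, and the `"occl"` identifier are assumed from the existing Occlusion API.

```python
from tdw.controller import Controller
from tdw.tdw_utils import TDWUtils
from tdw.output_data import OutputData, Occlusion

c = Controller()
resp = c.communicate([TDWUtils.create_empty_room(12, 12),
                      {"$type": "create_avatar", "type": "A_Img_Caps_Kinematic", "id": "a"},
                      {"$type": "send_occlusion", "frequency": "once"}])
for i in range(len(resp) - 1):
    if OutputData.get_data_type_id(resp[i]) == "occl":  # Assumed identifier.
        occ = Occlusion(resp[i])
        occluded = occ.get_occluded()      # Integer, 0-255.
        unoccluded = occ.get_unoccluded()  # Integer, 0-255.
        # Approximate the value that get_occluded() returned in prior versions:
        legacy_occlusion = 1 - (unoccluded - occluded) / 255
        print(occluded, unoccluded, legacy_occlusion)
c.communicate({"$type": "terminate"})
```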
## Build

- Fixed: Occlusion is highly inaccurate, especially in scenes other than the ProcGen Room.
- Fixed: The command `apply_force` doesn't work as expected in most scenes.
## Documentation

### Modified Documentation

| Document | Description |
| --- | --- |
| `lessons/visual_perception/occlusion.md` | Updated the document to describe the new Occlusion data. |
| `api/output_data.md` | Improved the documentation for all positions and rotations. Improved the documentation of `LocalTransforms.get_forward()`. |