
Command: device, vr

Usage:
device  device-type  [ status ]  device-options

The device command sets modes for certain external devices, where the device-type can be snav (3D mouse), vr (virtual reality headset), or realsense (depth-sensing camera), as described below.

The status of a mode can be on (synonyms true, True, 1) or off (synonyms false, False, 0). Device-specific options are described below.
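
The accepted status synonyms could be mapped to a boolean as in the following sketch (a hypothetical illustration, not ChimeraX's actual argument parser):

```python
# Hypothetical helper (not ChimeraX code): map a status token with the
# documented synonyms to a boolean.
_ON = {"on", "true", "True", "1"}
_OFF = {"off", "false", "False", "0"}

def parse_status(token):
    """Return True for on-synonyms, False for off-synonyms, else raise."""
    if token in _ON:
        return True
    if token in _OFF:
        return False
    raise ValueError(f"unrecognized status value: {token!r}")
```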

device snav  [ status ]  [ fly  true | false ]  [ speed  factor ]

The command device snav enables manipulation with a SpaceNavigator® 3D mouse from 3Dconnexion. The fly option indicates whether the force applied to the device should be interpreted as acting on the camera (true), where pushing forward zooms in because it moves the camera viewpoint toward the scene, or as acting on the models in the scene (false, default), where pushing forward zooms out because it pushes the models away. In either case, however, it is the camera that actually moves. The speed option sets a sensitivity factor for motion relative to the device (default 1.0). Decreasing the value (for example, to 0.1) reduces sensitivity to give slower motion, whereas increasing the value has the opposite effect. See also: mousemode
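
The fly and speed semantics described above can be sketched as a single function (an illustrative sketch with an assumed sign convention, not ChimeraX's internal code):

```python
# Hypothetical sketch of the fly/speed behavior: the device always moves
# the camera. With fly=True a forward push moves the camera toward the
# scene (zoom in); with fly=False it pushes the models away (zoom out),
# which is equivalent to the camera moving in the opposite direction.
# Positive return value = camera motion toward the scene (assumed sign).
def camera_z_step(forward_force, fly=False, speed=1.0):
    """Camera displacement along its view axis for one device event."""
    direction = 1.0 if fly else -1.0   # sign flips between the two modes
    return direction * forward_force * speed
```

Decreasing speed (e.g., to 0.1) scales every displacement down, giving the slower motion the option describes.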

vr  [ status ]  [ mirror  true | false ] [ center  true | false ] [ clickRange  r ] [ gui  tool1[,tool2 ... ]] [ simplifyGraphics  true | false ] [ multishadowAllowed  true | false ] [ roomPosition  matrix | report ]

The vr command (same as device vr) enables a virtual reality mode for systems supported by SteamVR, including HTC Vive, Oculus Rift, and Samsung Odyssey. SteamVR must be installed separately by the user and started before the mode is enabled. In addition, Oculus Rift users should start the Oculus runtime before starting SteamVR. For details and related issues, see ChimeraX virtual reality. See also: vr button, vr roomCamera, conference, buttonpanel, camera, view, making movies

VR status, model positions in the room, and hand-controller button assignments are saved in ChimeraX session files. The VR model positions and button assignments are also retained through uses of vr on and vr off, but not after exiting ChimeraX.

The mirror option indicates whether to show the ChimeraX scene in the desktop graphics window. If true (default), the VR headset right-eye view is shown, with graphics waitForVsync automatically set to false so that VR rendering does not slow to the desktop rendering rate; updating the graphics window could otherwise cause flicker in the VR headset, because syncing to the computer display's refresh (nominally 60 times per second) slows rendering to the headset. If false, no graphics are shown in the desktop display, allowing all graphics computing resources to be dedicated to VR headset rendering. Another way to mirror is to use SteamVR's menu entry to display the mirror window.

The center option (default true) centers and scales models in the room.

The clickRange option sets the depth range for picking objects with the hand-controller cones, where r (default 5.0) is the maximum distance from the tip of the cone to the object in scene distance units, typically Å. Limiting the range prevents accidentally picking far-away objects.
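
The range-limited picking described above amounts to choosing the nearest candidate within clickRange of the cone tip; a minimal sketch (names and data layout are hypothetical, not ChimeraX's API):

```python
import math

# Illustrative sketch of range-limited picking: among candidate objects,
# return the name of the nearest one within clickRange r of the cone tip.
def pick_object(cone_tip, objects, r=5.0):
    """objects: iterable of (name, (x, y, z)) in scene units, typically Å."""
    best, best_d = None, r
    for name, pos in objects:
        d = math.dist(cone_tip, pos)
        if d <= best_d:
            best, best_d = name, d
    return best  # None if nothing lies within range
```

An object exactly at distance r is still pickable; anything farther is ignored, which is what prevents accidental far-away picks.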

The gui option specifies which ChimeraX tool panels to show in VR when the hand-controller button assigned to that function (by default, Vive menu or Oculus B/Y) is pressed. Any combination of tools can be specified as a comma-separated list of one or more tool names, as listed in the Tools menu and shown for most tools in their title bars. Tools may also be custom panels created with the buttonpanel command. If the gui option is not given, the same tools as currently shown in the desktop display (including the Toolbar, and on Windows only, the ChimeraX main menu) will be shown.

The simplifyGraphics option (default true) reduces the maximum level of detail in VR by limiting the total atom and bond triangles to one million each. This helps to maintain full rendering speed. The previous total-triangle limits are restored when the VR mode is turned off. Normally, the maximum atom and bond triangles are set to five million each. See also: graphics

Ambient shadowing or “ambient occlusion” requires calculating shadows from multiple directions, which may make rendering too slow for VR and cause stuttering in the headset. By default (multishadowAllowed  false), if the multiShadow lighting parameter is > 0, enabling VR switches to the simple lighting mode. With multishadowAllowed  true, the lighting mode is left unchanged.
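
The documented lighting switch can be sketched as a tiny helper (hypothetical function, not the real ChimeraX code):

```python
# Sketch of the documented behavior: when VR starts, ambient
# multishadowing is dropped (lighting switched to "simple") unless
# multishadowAllowed is true.
def lighting_mode_on_vr_start(current_mode, multishadow,
                              multishadow_allowed=False):
    """Return the lighting mode to use after enabling VR."""
    if multishadow > 0 and not multishadow_allowed:
        return "simple"
    return current_mode
```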

When the mode is enabled, the roomPosition option can be used to specify the transformation between room coordinates (in meters, with origin at room center) and scene coordinates (typically in Å) or to simply report the current transformation in the Log. The transformation matrix is given as 12 numbers separated by commas only, corresponding to a 3x3 matrix for rotation and scaling, with a translation vector in the fourth column. Ordering is row-by-row, such that the translation vector is given as the fourth, eighth, and twelfth numbers. Example:

vr  roomPosition  20,0,0,0,0,20,0,0,0,0,20,0
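
The 12-number argument can be assembled programmatically; the following sketch (helper name is ours) builds the row-major 3x4 matrix with rotation/scaling in the first three columns and the translation vector fourth, reproducing the uniform-scale-20 example:

```python
# Build the roomPosition matrix argument: 12 numbers, row by row, with
# the translation vector as the 4th, 8th, and 12th numbers. This sketch
# handles only uniform scaling plus translation (no rotation).
def room_position_arg(scale=1.0, translation=(0.0, 0.0, 0.0)):
    rows = []
    for i in range(3):
        row = [0.0, 0.0, 0.0, translation[i]]
        row[i] = scale            # diagonal scale part of the 3x3 block
        rows.append(row)
    numbers = [n for row in rows for n in row]
    return ",".join(f"{n:g}" for n in numbers)  # commas only, no spaces
```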

The frequency of label reorientation is automatically decreased when VR is enabled and restored when VR is turned off.

vr button  button-name  function  [ hand  left | right ]
Vive hand controllers
Oculus “Touch” hand controllers

The vr button command assigns modes (functions) to the hand-controller buttons in virtual reality, as a scriptable alternative to clicking icons to assign modes interactively. The available function names are as listed above for mousemode, plus the following VR-specific modes:

The available function names can be listed in the Log with command usage vr button, and should be enclosed in quotation marks if they contain spaces. The function can also be given as the word default to reset to default functions.

The hand option can be given to assign a function to the specified button of only one hand controller; otherwise, those on both controllers will be assigned.

The button-name can be:

Initial defaults are to translate and rotate with triggers, zoom with Vive touchpad or Oculus A button, recenter with Vive grip or Oculus X button, and show ui with the Vive menu or Oculus B/Y button (more...).

The only function settings that work by tilting the Oculus thumbstick are rotate, zoom, contour level, play map series, and play coordinates.

vr roomCamera  [ status ]  [ fieldOfView  angle ] [ width  w ] [ backgroundColor  color-spec ] [ savePosition  true | false ] [ tracker  true | false ] [ saveTrackerMount  true | false ]
The vr roomCamera command sets up a separate camera view fixed in the VR room coordinates, useful for making video tutorials. The camera view is shown in the desktop graphics window and as “picture in picture” in the VR headset. See also: camera, making movies

The vr roomCamera command can only be used in virtual reality (after using vr on). In VR, the room camera is shown as a rectangle at the camera position (initially with its center 1.5 meters above the floor and offset 2 meters horizontally from the room center), facing the user and showing what the camera sees. The width of the rectangle is given in meters (default 1.0), and the height is chosen to match the aspect ratio of the desktop graphics window. The default fieldOfView for the room camera is 90°. The background color of the room camera can be set separately from the VR background; default is a dark gray (10,10,10) so that the rectangle is visible against a black scene background.
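
The rectangle's height follows directly from its width and the desktop window's aspect ratio, as described above; a minimal sketch (function name is ours, not ChimeraX's):

```python
# The room-camera rectangle keeps the desktop graphics window's aspect
# ratio: height = width * (window height / window width).
def camera_rect_height(width_m, window_width_px, window_height_px):
    """Rectangle height in meters matching the window's aspect ratio."""
    return width_m * window_height_px / window_width_px
```

For example, the default 1.0 m width with a 16:9 window gives a rectangle 0.5625 m tall.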

The room camera rectangle is a model that can be selected and then moved in VR with the hand-controller mode assigned by clicking the corresponding icon (or with the vr button command).

Specifying savePosition true saves the current position and orientation of the room camera in the preferences.

Two options allow using a Vive Tracker device to control the room camera position: tracker true | false, indicating whether to position the camera with the tracker, and saveTrackerMount true | false, indicating whether to save the camera's position and orientation relative to the tracker in the preferences.

Thus, the room camera can be moved by hand as described above, and its positional and orientational offsets from the tracker (used when tracker is true) saved with saveTrackerMount true.

realsense  [ status ]  [ size  x,y ] [ dsize  dx,dy ] [ framesPerSecond  fps ] [ align  true | false ] [ denoise  true | false ] [ denoiseWeight  weight ] [ denoiseColorTolerance  tolerance ] [ projector  true | false ] [ angstromsPerMeter  apm ] [ skipFrames  N ] [ setWindowSize  true | false ]
The command realsense (same as device realsense) enables blending video from an Intel RealSense depth-sensing camera with ChimeraX graphics to make augmented reality videos. If VR is enabled, this command automatically starts a virtual-reality room camera which renders the models to be blended with the RealSense camera image. It also sets the graphics window size to match the RealSense camera image size. This command is only available after installation of the RealSense bundle from the ChimeraX Toolshed (menu: Tools... More Tools...). See also: Mixed Reality Video Recording in ChimeraX

The size and dsize options set camera image resolution for color and depth, respectively. Each takes a pair of comma-separated values indicating the pixel dimensions in X and Y (defaults: size 960,540 and dsize 1280,720). The framesPerSecond option gives the video capture rate (default 30). The align option indicates whether to compensate the offset between the color and depth cameras in the device (default true, which slows rendering). The denoise option (default true) specifies whether to depth-denoise by averaging depth over time at pixels when their colors remain fairly constant using parameters denoiseWeight (default 0.1) and denoiseColorTolerance (default 10). Denoising details are given below. The projector option enables spraying the room with IR dots from a projector on the camera device for better depth detection (default false, as the IR beams interfere with other devices that use IR such as Vive VR tracking). Normally the camera device is used at the same time as VR, which in turn sets the scale factor of the scene relative to the room. However, if VR is not in use, the angstromsPerMeter option can be used to specify the relative scale (default 50). The skipFrames option indicates how many ChimeraX graphics update frames to skip before getting a new camera frame (default 2, meaning to get a new camera frame at every 3rd ChimeraX graphics frame). The setWindowSize option causes the graphics window size to be changed to match the camera color image resolution (default true).
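
The skipFrames schedule can be sketched as follows (illustrative helper, not the bundle's code): with skipFrames N, a new camera frame is fetched on every (N+1)th graphics update, so the default 2 means every 3rd frame.

```python
# Sketch of the skipFrames schedule: return the graphics-frame indices
# at which a new camera frame is read.
def capture_frames(skip_frames, total_graphics_frames):
    step = skip_frames + 1   # skipFrames 2 -> capture every 3rd frame
    return [i for i in range(total_graphics_frames) if i % step == 0]
```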

Denoising details: The depth values from RealSense cameras fluctuate rapidly over time by small amounts, and the depth value for some pixels is unknown. This depth noise causes flickering in the blended video where depths are fluctuating or unknown, especially at boundaries between video objects and computer-generated objects. The denoise true option reduces the depth noise by averaging depth over time at pixels whose colors remain fairly constant. When the color of a pixel changes by more than 10 (on a 0–255 scale), its depth is updated immediately to reduce motion blur. The averaging blends the current depth value with the previously used depth with weight 0.1, so it roughly averages the depth from the previous 10 frames.

Pixels with unknown depth occur for two reasons. First, in some areas of the video frame, such as blank white walls, the camera cannot judge depth: it relies on matching features between its two stereo IR cameras, and if there are no discernible features to match, the depth is reported as 0. This problem can be reduced with projector true, which projects a dense array of invisible infrared laser dots into the room to add texture (but can interfere with tracking by VR headsets that also use infrared). Second, a part of the room may be visible to only one stereo camera because the other's view is blocked by an object; this happens at the boundaries of foreground objects. To minimize both effects (but especially the second), the denoise option keeps track of the maximum depth seen at each pixel and its color, representing the room background. If no depth value is available for a pixel in a frame, the maximum depth value is used, provided the pixel's color is close to the background color. This background depth fill does not help for pixels for which the camera has never reported a depth value.

The denoising algorithm assumes that the camera is not moving.
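
The per-pixel rules above can be sketched as follows (an illustration under the stated assumptions, not the RealSense bundle's actual code; depth 0 means "unknown", and colors are single-channel 0–255 values for simplicity):

```python
# Hypothetical per-pixel denoiser: exponential depth averaging gated by
# color change, plus background (maximum-depth) fill for unknown depths.
def denoise_pixel(new_depth, new_color, state,
                  weight=0.1, color_tolerance=10):
    """state: dict with 'depth', 'color', 'max_depth', 'max_depth_color'.
    Returns the depth to use and updates the running state in place."""
    if new_depth == 0:
        # Unknown depth: fall back to the remembered background depth,
        # but only if this pixel still looks like the background.
        if abs(new_color - state["max_depth_color"]) <= color_tolerance:
            return state["max_depth"]
        return 0  # no matching background depth has ever been seen here
    if abs(new_color - state["color"]) > color_tolerance:
        depth = new_depth          # color changed: update immediately
    else:
        # Blend with previous depth; weight 0.1 roughly averages
        # the last 10 frames.
        depth = (1 - weight) * state["depth"] + weight * new_depth
    state["depth"], state["color"] = depth, new_color
    if new_depth > state["max_depth"]:
        state["max_depth"], state["max_depth_color"] = new_depth, new_color
    return depth
```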


UCSF Resource for Biocomputing, Visualization, and Informatics / February 2020