[chimerax-users] VR movie
Tom Goddard
goddard at sonic.net
Fri Mar 13 10:36:13 PDT 2020
Hi Matthias,
The D435 depth sensing camera is noisy -- it can't figure out depth values for all pixels. It judges depth from the stereo overlap of two IR cameras, so plain backgrounds or repeating patterns confuse it. It has an IR projector that puts thousands of dots in the room to help, but the depth can still be erratic. That leads to flickering patches in the video. So be aware, if you are going to try this, that the technology isn't easy to use yet.
There is an Intel RealSense L515 lidar camera expected to ship around June for $349, more expensive than the D435 at $180. It uses a "time of flight" method to get depth at each pixel -- basically it times how many picoseconds it takes for a light pulse it sends out to return. I have not tried this type of camera; it may be less noisy.
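To put rough numbers on that timing claim, here is a back-of-envelope sketch (my own illustration, not from Intel's specifications):

```python
# Why time-of-flight depth needs picosecond timing: compute the
# round-trip time of a light pulse to a surface and back.
C = 299_792_458.0  # speed of light in m/s

def round_trip_time_ns(distance_m):
    """Time for a light pulse to reach a surface and return, in nanoseconds."""
    return 2 * distance_m / C * 1e9

# A surface 1 m away returns the pulse in about 6.7 ns, so resolving
# depth differences of ~1 mm requires timing precision of a few picoseconds.
print(round(round_trip_time_ns(1.0), 1))
```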
Tom
> On Mar 12, 2020, at 11:19 PM, Matthias Wolf <matthias.wolf at oist.jp> wrote:
>
> Thanks, Elaine and Tom!
>
> This is very informative and inspirational. I will get a RealSense camera and try it out.
>
> Matthias
>
> From: Tom Goddard
> Sent: Friday, March 13, 2020 3:45 AM
> To: ChimeraX Users Help <chimerax-users at cgl.ucsf.edu>
> Cc: Matthias Wolf <matthias.wolf at oist.jp>
> Subject: Re: [chimerax-users] VR movie
>
> Hi Matthias,
>
> The link Elaine mentioned describes how I made the augmented reality video. Briefly, I use an Intel RealSense D435 depth sensing camera that captures video of me and the room, and ChimeraX blends it in real time with the molecular models using the ChimeraX RealSense tool (obtained from the ChimeraX menu Tools / More Tools...). So the ChimeraX graphics on the desktop display shows the room video and molecules in real time, and I just screen capture that using Flashback Pro 5. There is a little more optional hardware -- I locate the RealSense camera in the room using a Vive Tracker mounted on the camera. While in VR, I see a rectangular screen where the camera is, showing live what the camera sees blended with the models. So as I look at the RealSense camera I see exactly what is being recorded and can frame the molecules and myself in the video. I do not use the headset cameras for pass-through video.
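> In command form, the setup described above is roughly the following sketch (the realsense command name is my assumption based on the RealSense bundle; check the tool's help after installing it from Tools / More Tools...):
>
> ```
> open 6acg        # any structure of interest, e.g. a spike conformation
> vr on            # start rendering to the VR headset
> realsense on     # blend the depth-camera video with the models
> ```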
>
> The moving of the spike binding domain in the coronavirus video is a morph of PDB 6acg, 6acj, 6ack, three conformations seen by cryoEM.
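> As a sketch of how such a morph can be set up and played (model numbers are assumptions here; the morph result opens as a new model):
>
> ```
> open 6acg 6acj 6ack
> morph #1,2,3 frames 40    # interpolate the three conformations into a trajectory
> coordset #4 1,40 loop 5   # play the morph frames, looping through them
> ```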
>
> Here is another augmented reality video I made on opioids.
>
> https://youtu.be/FCotNi6213w
>
> I think this augmented reality capture can be very useful for presenting results about 3D structures in science publications as supplementary material or for the public. A few people have said they are getting the depth sensing camera to try it, but I don't know of anyone who has done it.
>
> Tom
>
>
>
> On Mar 12, 2020, at 10:35 AM, Elaine Meng wrote:
>
> Hi Matthias,
> Tom wrote a nice summary of his process for making mixed-reality videos here:
>
> https://www.cgl.ucsf.edu/chimerax/data/mixed-reality-nov2019/mrhowto.html
>
> That may address the first and third questions.
>
> As for the middle question, there is a mouse mode (or VR hand-controller button mode) for bond rotation. However, my guess is that he instead made a morph trajectory between the two conformations beforehand and was using the mouse mode "play coordinates" (flipping through different sets of coordinates in a trajectory model)... Tom would have to confirm whether my guess is correct. One would generally use the bond rotation mode when zoomed in on atoms/bonds shown as sticks, so that it is easy to start the drag on a specific bond.
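> For example, either mode can be assigned to the right mouse button (mode names as they appear in the mouse-modes documentation linked below; check your installed version):
>
> ```
> mousemode rightMode "play coordinates"   # drag to flip through trajectory frames
> mousemode rightMode "bond rotation"      # or drag on a bond shown as sticks
> ```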
>
> Mouse modes and their toolbar icons:
> http://rbvi.ucsf.edu/chimerax/docs/user/tools/mousemodes.html
>
> I hope this helps,
> Elaine
> -----
> Elaine C. Meng, Ph.D.
> UCSF Chimera(X) team
> Department of Pharmaceutical Chemistry
> University of California, San Francisco
>
>
> On Mar 12, 2020, at 4:16 AM, Matthias Wolf wrote:
>
> Hi Tom,
>
> I really liked your CoV movie https://www.youtube.com/watch?v=dKNbRRRFhqY&feature=youtu.be
> It’s a new way of storytelling. Although we have used ChimeraX in the lab with a Vive and Vive Pro for about 2 years, it’s usually one person at a time, with a dark background. But your way opens up interactive VR to a larger audience (even if they don’t get to enjoy the stereoscopic 3D). And it’s cool.
>
> I have some questions:
> • How did you overlay the ChimeraX viewport synced with the live camera feed showing yourself? Did you use a frame grabber on a different PC to capture the full-screen ChimeraX VR viewport while simultaneously recording the camera video stream, e.g. using Adobe Premiere?
> • How did you control flipping out the outer spike domain with your hand controller? I guess you assigned control of a torsional angle in the atomic model to a mouse mode?
> • Did you enable the headset cameras to orient yourself in the room?
>
> Thanks for continuing to improve ChimeraX and VR!
>
> Matthias
>
>
> _______________________________________________
> ChimeraX-users mailing list
> ChimeraX-users at cgl.ucsf.edu
> Manage subscription:
> http://www.rbvi.ucsf.edu/mailman/listinfo/chimerax-users
>