[chimera-dev] Export Chimera models to Blender, Maya, Cinema4d

Dougherty, Matthew T matthewd at bcm.edu
Wed Apr 9 18:23:14 PDT 2014


I have already started on the small bug example for the vertex colors.

As for building a collection of test cases, I would need help crafting that (NURBS, textures, colors, etc.).

I will do some preliminary investigation.  Will see how it goes.



Matthew Dougherty
National Center for Macromolecular Imaging
Baylor College of Medicine
________________________________________
From: Tom Goddard [goddard at sonic.net]
Sent: Wednesday, April 09, 2014 8:13 PM
To: Dougherty, Matthew T
Cc: chimera-dev at cgl.ucsf.edu
Subject: Re: Export Chimera models to Blender, Maya, Cinema4d

Hi Matt,

  The sensible approach is to make a small example of the per-vertex Collada coloring not working in Blender, submit it as a bug to the Blender developers, and see if anything happens.  If they just ignore it, then it isn’t worth trying to find all the other bugs in their Collada importer.  If they do fix it, and you have another bug, then it will be worth sending a simple case of the next bug.

  By a simple example, I mean, for instance, using a huge step size for a volume map so all you see is one tetrahedron.  A small example will allow the developer to easily open the text file and see how it encodes the colors, without having to deal with 500 pages of mesh coordinates.  Also provide an image of the correct appearance as shown by Mac Preview.  It is best to use Preview rather than Chimera, since that shows that other software can read the colors.  You can say the file comes from UCSF Chimera, but just give them the file, since they aren’t likely to get Chimera; that would be too much work for them.  A good bug report might take an hour to create and submit.
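
  For concreteness, an untested sketch of generating such a tiny test case from Chimera's Python shell; the map file name and the step and color settings are placeholders, and the command spellings are approximate (Collada export is only in the daily builds):

    # Sketch only: make a tiny per-vertex-colored surface and export it.
    from chimera import runCommand

    runCommand('open mymap.mrc')             # placeholder: any small density map
    runCommand('volume #0 step 16')          # huge step, only a few triangles remain
    runCommand('scolor #0 geometry radial cmap rainbow')  # per-vertex coloring
    runCommand('export format collada tiny.dae')          # Collada export (daily builds)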

  If you want test cases of all the Chimera export features, then you should make them.  It sounds like a lot of work.  Import bugs sometimes involve combinations of features so you will not be able to catch all the problems this way.  I think instead the approach of submitting bugs as you find them to the responsible developers is the way to go.

        Tom


On Apr 9, 2014, at 4:38 PM, Dougherty, Matthew T  wrote:

> It would be good to develop some test cases using the Collada & X3D formats; something that exercises all the graphical methods used by Chimera.  Clearly CGL can't become bogged down chasing down and correcting other software packages.
>
> Sticking with my workflow strategy (using Chimera as a modeller and a pro animation package for more sophisticated production), it looks like I have two options:
>
> 1) Fix the Blender Collada/X3D problem, the onus being on me.  Having some small test cases I can take to the Blender community might bring resolution.  If someone at CGL can generate them, I will make the effort to track down the Blender developers and resolve the problem.
>
> 2) Switch to another animation package that can process the Chimera test cases.  The downside is learning new, complex software.
> Having a CGL webpage listing animation software that can read these test cases would be helpful in making purchasing decisions.
>
> This is not to say I am giving up on Chimera animation.  For most conference and supplemental animations I plan to stick with Chimera, because through my software extensions I plan to get the lab researchers to vet coloring and camera positions; then there are the animations involving volume rendering, which I expect cannot be moved to another package.
>
> But for the complex animations that involve ray tracing, tricky camera trajectories, compositing of models, lighting variation, stereo 3D, etc., Chimera can't perform those tasks, and it would be uneconomical to duplicate those features within Chimera.
>
>
>
>
>
> Matthew Dougherty
> National Center for Macromolecular Imaging
> Baylor College of Medicine
> ________________________________________
> From: Tom Goddard [goddard at sonic.net]
> Sent: Wednesday, April 09, 2014 4:24 PM
> To: Dougherty, Matthew T
> Cc: chimera-dev at cgl.ucsf.edu
> Subject: Re: Export Chimera models to Blender, Maya, Cinema4d
>
> Hi Matt,
>
>  Yes, Chimera exports RGBA vertex colors of surfaces in X3D.  Chimera does not use polygon colors, only vertex colors.  I don’t know if Chimera exports vertex colors for Collada.  Conrad Huang wrote that code and he would know the answer.
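>
>  (For reference, per-vertex RGBA coloring in an X3D file looks roughly like the hand-written fragment below; this is just an illustration, not actual Chimera output:)
>
>     <Shape>
>       <IndexedFaceSet coordIndex="0 1 2 -1" colorPerVertex="true">
>         <Coordinate point="0 0 0, 1 0 0, 0 1 0"/>
>         <ColorRGBA color="1 0 0 1, 0 1 0 1, 0 0 1 1"/>
>       </IndexedFaceSet>
>     </Shape>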
>
>  “Passing the OpenGL scene graph” is not a feasible solution.  All of the graphics programs represent the data differently.  If you just handed another program the OpenGL calls used to render the scene, none of the programs could do anything with that low-level information.  The common ground between programs is standard scene description file formats like Collada, X3D, FBX, ….
>
>        Tom
>
>
> On Apr 9, 2014, at 2:14 PM, Dougherty, Matthew T  wrote:
>
>> Hi Tom,
>>
>> I agree with your assessment.  I have looked at the Blender import code; it is basically a bunch of if statements chopping text.  That is, it does not use standard conversion libraries, so it inherently favors certain parts of the format, and other things are broken.
>>
>> Regarding Collada, it seems to import into Blender best; the normals are preserved.  The problem I am running into with it is the colors.
>> When I color a volume surface and export Collada, then import into PolyTrans or Blender, the polygon colors are missing.  Does Chimera output colors at the polygon or the vertex level?  RGB or RGBA?
>>
>> I spoke with the president of the X3D consortium; he runs the visualization facility at Virginia Tech.  He said partnering with X3D might be an option; doing a grant together where they do the coding is a possibility.
>>
>> FBX would be another route, using Autodesk's format library.  That would provide reasonable compatibility with Autodesk software, and possibly Blender et al.  I have researched it; Blender does the same thing for FBX as for X3D, chopping text with no standard libraries.  FBX offers a lot, but you get a lot of boilerplate bloat.
>>
>> What I am digging into is the best animation strategy.  Chimera is the best for getting into the data and visualizing it.  But as you noted in prior emails, it will never be a pro animation package with ray tracing and UI tools to manage animation timelines.  Generic animation packages are generic, with a few targeted plugins.  Passing the OpenGL scene graph over a socket would be nice; a sketch of what I mean follows.
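>>
>> (A purely hypothetical Python sketch of such a bridge, streaming scene-level mesh data, vertices, triangles, and per-vertex colors, over a socket instead of raw GL calls; every name here is made up:)
>>
>>     import socket, struct
>>
>>     def send_mesh(host, port, vertices, triangles, rgba):
>>         # vertices: list of (x, y, z); triangles: list of (i, j, k) vertex
>>         # indices; rgba: one (r, g, b, a) color per vertex
>>         conn = socket.create_connection((host, port))
>>         try:
>>             # header: vertex and triangle counts, then the packed arrays
>>             conn.sendall(struct.pack('<II', len(vertices), len(triangles)))
>>             for v in vertices:
>>                 conn.sendall(struct.pack('<3f', *v))
>>             for t in triangles:
>>                 conn.sendall(struct.pack('<3I', *t))
>>             for c in rgba:
>>                 conn.sendall(struct.pack('<4f', *c))
>>         finally:
>>             conn.close()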
>>
>>
>> Matthew Dougherty
>> National Center for Macromolecular Imaging
>> Baylor College of Medicine
>> ________________________________________
>> From: Tom Goddard [goddard at sonic.net]
>> Sent: Wednesday, April 09, 2014 3:37 PM
>> To: Dougherty, Matthew T
>> Cc: chimera-dev at cgl.ucsf.edu
>> Subject: Export Chimera models to Blender, Maya, Cinema4d
>>
>> Hi Matt,
>>
>> It has been a problem getting molecular models exported from Chimera to be read by Blender and Maya.  For example, Greg in our lab recently looked at why Blender does not correctly show the end caps on ribbons exported from Chimera as X3D.  This is a bug in the Blender X3D importer.  We’ve worked to make sure Chimera exports valid 3D scene files, but we don’t fix bugs in Blender or Maya.  As you know, there seems to be no consensus in the 3D model field about what file format to use to exchange data.  Conrad in our lab added Collada format export (in daily builds), but his experience was that only some software would properly render it.  I have also been reading Collada format in our next-generation Chimera work, which has not been released.
>>
>> Graham Johnson, who made ePMV, is here at UCSF.  I’d have to ask how he handles data exchange between the PMV molecular viewer and the animation packages (Cinema4d, Blender, Maya).  My impression was that it uses plugins written for each animation package; each plugin presents a user interface and can call PMV to compute molecular surfaces and do other molecular analysis.  The plugin can just transfer the data in some internal format rather than using a standard file format.  That solution can work if you have enough money to implement it.
>>
>> I think it will take interest from Blender, Maya, Cinema4d, … developers to make progress, and money to pay them to work on more reliable file exchange to allow molecular vis programs like Chimera to operate with professional animation programs.
>>
>>       Tom
>>
>>
>> On Apr 8, 2014, at 2:54 PM, Dougherty, Matthew T wrote:
>>
>>> UNM had an NSF grant to extend OpenGL to render to a sphere, simplifying stereo 3D.
>>>
>>> Blender now has a full-dome camera as part of the Cycles camera rig, so this three-fisheye approach should apply in terms of workflow.
>>>
>>> Getting back to your comment regarding the inherent animation limitations of Chimera, and the various scene-graph export file glitches: more thought might be given to bridging that gap.  Both packages use Python and OpenGL.
>>>
>>> The ePMV approach of extending Maya, Blender, etc. seems like a strategy for avoiding a lot of graphics file problems.
>>>
>>> Any ideas how to export natively or pipeline the packages?
>>>
>>> Matthew Dougherty
>>> National Center for Macromolecular Imaging
>>> Baylor College of Medicine
>>> ________________________________________
>>> From: Tom Goddard [goddard at sonic.net]
>>> Sent: Tuesday, April 08, 2014 4:03 PM
>>> To: Dougherty, Matthew T
>>> Cc: chimera-dev at cgl.ucsf.edu
>>> Subject: Re: [chimera-dev] dome camera/S3D
>>>
>>> Hi Matt,
>>>
>>> That looks interesting.  It looks like the same technique I tried a few years ago at the Imiloa stereo dome in Hawaii, where I used 24000 virtual cameras so that each vertical strip had the stereo cameras facing that strip.  This paper describes camera hardware for videoing the real world, so it uses just 3 fish-eye cameras, and there are artifacts because of that.  The author says the artifacts are much reduced with 5 cameras.  It would be a good technique to try even for virtual rendering, where you can get away with thousands of cameras, because it would be a lot faster to render.  You’d especially want that for interactive rendering in a stereo dome.  My stereo dome rendering took 15 minutes per frame with the 24000-camera approach!
>>>
>>>      http://www.cgl.ucsf.edu/Outreach/technotes/dome3d/dome3d.html
>>>
>>> Tom
>>>
>>>
>>> On Apr 8, 2014, at 12:53 PM, Dougherty, Matthew T <matthewd at bcm.edu> wrote:
>>>
>>>> Hi Tom,
>>>>
>>>> came across this
>>>>
>>>> http://vision3d.iro.umontreal.ca/en/blog/2013/07/29/the-omnipolar-camera-a-new-approach-to-stereo-immersive-capture/
>>>>
>>>> Matthew Dougherty
>>>> National Center for Macromolecular Imaging
>>>> Baylor College of Medicine
>>>> ________________________________________
>>>> From: Tom Goddard [goddard at sonic.net]
>>>> Sent: Monday, April 07, 2014 8:18 PM
>>>> To: Dougherty, Matthew T
>>>> Cc: chimera-dev at cgl.ucsf.edu
>>>> Subject: Re: [chimera-dev] dome camera/S3D
>>>>
>>>> Hi Matt,
>>>>
>>>> Yes, the parallax angle used for Chimera dome stereo with the stereo command is half the “convergence angle”, if by convergence angle you mean the angle between the lines of sight from your two eyes to an object on the dome surface, level with your eyes and midway between them.  So if I were setting it for a viewer in the center of a 4-meter-radius dome with an eye separation of 0.06 meters, I would use an angle of atan(0.03 / 4) = 0.43 degrees.
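>>>>
>>>> (The same arithmetic in Python, using the example numbers above:)
>>>>
>>>>     import math
>>>>     dome_radius = 4.0       # meters, viewer at the dome center
>>>>     eye_separation = 0.06   # meters
>>>>     # each eye sits half the separation away from the midpoint
>>>>     parallax = math.degrees(math.atan((eye_separation / 2) / dome_radius))
>>>>     print('%.2f degrees' % parallax)   # prints 0.43 degrees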
>>>>
>>>> The positive angle is for the right eye and the negative angle for the left eye, because the y-axis points up and a positive rotation by the standard mathematical convention (right-hand rule) would rotate your straight-ahead view line to instead point slightly leftward.  The view line is supposed to intercept an object at dome distance, midway between the eyes.  A simple test in Chimera dome mode using “stereo dome parallax 10” verifies that this gives a view from the right, as should be seen by the right eye.
>>>>
>>>> It makes sense to use symmetrical values, +d degrees for the right eye and -d degrees for the left eye.  If instead you shifted the two parallax values so their sum is not zero, the effect would be as if your head were facing some direction other than straight ahead in the dome.
>>>>
>>>> The Chimera focal plane setting seen in the Top View mode of the Side View dialog (menu Tools / Viewing Controls / Side View) does correspond to the dome screen position in the straight-ahead direction.  I say “in the straight-ahead direction” because the focal plane is a flat plane, while the dome is of course a spherical surface.  The focal plane position is important.  Setting the parallax value does two things: it moves the camera right or left along the x-axis, and it rotates the camera view direction.  The shift along the x-axis is by an amount such that the view line hits the mid-point of the dome screen, i.e. (x camera shift) = (focal distance) * tan(parallax).
>>>>
>>>> The Chimera “eye separation” parameter is not used in dome parallax mode.  The reason is that dome parallax mode isn’t a stereo mode; it renders only one image using one camera.  You have to record your whole movie with a right-eye setting, then re-record it with a left-eye setting, and then combine the two movies with some external software.  Since it is a mono mode being used twice, it doesn’t use the “eye separation” parameter in the Camera panel (menu Tools / Viewing Controls / Camera).  Instead the eye separation is implied by the parallax value and the focal plane distance.
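>>>>
>>>> (Those relations in Python, with the example numbers from above:)
>>>>
>>>>     import math
>>>>     focal_distance = 4.0   # meters, focal plane at the dome surface
>>>>     parallax_deg = 0.43    # per-eye parallax angle, in degrees
>>>>     x_shift = focal_distance * math.tan(math.radians(parallax_deg))
>>>>     implied_eye_sep = 2 * x_shift   # right-eye plus left-eye offsets
>>>>     # x_shift is about 0.030 m, so the implied eye separation is 0.060 m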
>>>>
>>>> I realize this is all somewhat quirky.  If we provided a real interactive dome stereo mode that for instance produced side-by-side left and right eye dome images, then we could do away with parallax and use the eye separation and focal distance parameters that are used for other Chimera stereo modes.  Since the fish-eye projection generally needs to be warped before display on the dome, this interactive stereo mode would probably also need to accept a warping texture described in a file in order to be useful.
>>>>
>>>>     Tom
>>>>
>>>>
>>>> On Apr 7, 2014, at 2:10 PM, Dougherty, Matthew T wrote:
>>>>
>>>>> Hi Greg,
>>>>>
>>>>> I was experimenting with the dome parallax and needed some help understanding your sw and terminology.
>>>>>
>>>>> Reading the documentation:
>>>>> Left- and right-eye stereo views for dome display can be generated by specifying parallax angles of opposite signs.
>>>>> The views are rotated by p-angle degrees about the Y (vertical) axis through the center of the focal plane.
>>>>> For example, views recorded with dome parallax ±5° can be combined to give a stereo effect when viewed in the forward (−Z) direction.
>>>>>
>>>>>
>>>>>
>>>>> When you talk about p-angle it reminds me of the angle of convergence.  Is that correct?
>>>>>
>>>>> In Hollywood terminology they calibrate the word "parallax" to percentages.  Zero parallax is on the screen, positive is behind the screen, and negative is in front of the screen.
>>>>>
>>>>> +5° would be the left eye, and -5° the right?
>>>>> Proper usage would be +n and -n; that is symmetrical around 0?
>>>>> as opposed to +5° and 0°, or +4 and -2?
>>>>>
>>>>> Is the focal plane actually the dome screen?
>>>>> Setting the focal plane in the side view, does it actually do anything?
>>>>>
>>>>>
>>>>>
>>>>> Matthew Dougherty
>>>>> National Center for Macromolecular Imaging
>>>>> Baylor College of Medicine
>>>>>
>>>>> _______________________________________________
>>>>> Chimera-dev mailing list
>>>>> Chimera-dev at cgl.ucsf.edu
>>>>> http://www.rbvi.ucsf.edu/mailman/listinfo/chimera-dev
>>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>
>




