• 0 Votes
    4 Posts
    102 Views
    ferdinand
    Hey @vaishhg,

    As I tried to explain, there are no nested dependencies. *.rs is a full-blown scene file format which can express geometry, curves, simulation data, materials, and more. When you save a Cinema 4D scene as *.rs, all data in it is exported to that format, including the "nested" case where a *.c4d scene is referencing a *.c4d scene. So when you start out with this *.c4d scene:

    Scene.c4d
    +-- Objects
        +-- Cloner (creates 5 instances)
            +-- Cube Generator
                +-- Cache
                    +-- Polygon Object
                +-- Tags
                    +-- Material Tag (references 'Red Material')
    +-- Materials
        +-- Red Material

    And then export it to Scene.rs, you get this (this is not an actual depiction of the file format, just a visualization of what happens; RS is not an open format):

    Scene.rs
    +-- Objects
        +-- Cube.0 [Red Material]
        +-- Cube.1 [Red Material]
        +-- Cube.2 [Red Material]
        +-- Cube.3 [Red Material]
        +-- Cube.4 [Red Material]
    +-- Materials
        +-- Red Material (contains Red Material definition)

    If you load that file back into Cinema 4D, you get this. All data - that these are 5 separate cubes with a red material each - resides in the Redshift core only; we only see a proxy in Cinema 4D, hence the name "RS Proxy Object". It is the Redshift Core which will resolve the data in the RS file at render time.

    ReferencingScene.c4d
    +-- Objects
        +-- RS Proxy Object.0 (loads Scene.rs)
            +-- Cache (will be empty by default, there is literally no data in the c4d core; only when we set 'Preview' to 'Mesh' will there be a cache so that the viewport can display something)
                +-- Polygon Object (one blob representing all 5 cubes and no material information)
        +-- RS Proxy Object.1 (loads Scene.rs)
            +-- Cache
                +-- Polygon Object

    When we now export ReferencingScene.c4d to ReferencingScene.rs, we get this. When the exporter runs, it will encounter the two RS Proxy Objects while flattening the c4d scene and do what you cannot do: grab the RS scene data from the referenced Scene.rs files and inline that into the new ReferencingScene.rs file. So we end up with 10 cubes in total, each with the red material assigned (a purely conceptual sketch of this flattening step follows at the end of this post).

    ReferencingScene.rs
    +-- Objects
        +-- Cube.0 [Red Material] (from RS Proxy Object.0)
        +-- Cube.1 [Red Material] ...
        +-- Cube.2 [Red Material] ...
        +-- Cube.3 [Red Material] ...
        +-- Cube.4 [Red Material] ...
        +-- Cube.0 [Red Material] (from RS Proxy Object.1)
        +-- Cube.1 [Red Material] ...
        +-- Cube.2 [Red Material] ...
        +-- Cube.3 [Red Material] ...
        +-- Cube.4 [Red Material] ...
    +-- Materials
        +-- Red Material (contains Red Material definition)

    And when we load that back into Cinema 4D, we get this:

    SecondGeneration.c4d
    +-- Objects
        +-- RS Proxy Object.0 (loads ReferencingScene.rs)
            +-- Cache
                +-- Polygon Object (one blob representing all 10 cubes and no material information)

    The TLDR is that the Redshift Core can read *.rs files and the Cinema API cannot; it can only write them or load them via an RS Proxy Object. And there is no 'resolving [...] the full proxy chain' as you put it. An *.rs scene file is just a discrete scene representation that does not know concepts such as generators or assets known to the Cinema API/Core. When you export a *.c4d scene that references *.rs files, all data is just flattened into a single *.rs file (again, what I showed under the *.rs formats above was just a visualization, not the actual file format). There is currently no way to do what you want to do, even if you were to request access to the Redshift Core C++ SDK, because the RS file format is a GPU scene file format and very deeply integrated into the core.
    Even the RS Core SDK does not expose functionality to read RS files into CPU memory structures.

    Cheers,
    Ferdinand
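    To make the flattening step above a bit more tangible, here is a purely conceptual sketch in Python - plain dictionaries with invented field names, not the Redshift SDK and not the actual *.rs file format - of what the exporter effectively does when it encounters RS Proxy Objects:

    def flatten_scene(scene: dict, load_rs) -> dict:
        """Purely conceptual: `load_rs(path)` stands in for the Redshift Core
        resolving a referenced *.rs file; all field names here are invented."""
        flat = {"objects": [], "materials": {}}
        for obj in scene["objects"]:
            if obj["type"] == "rs_proxy":
                nested = load_rs(obj["path"])              # e.g. the content of Scene.rs
                flat["objects"].extend(nested["objects"])  # two proxies x 5 cubes -> 10 cubes
                flat["materials"].update(nested["materials"])
            else:
                flat["objects"].append(obj)
        flat["materials"].update(scene.get("materials", {}))
        return flat

    The important bit is that nothing of the nested structure survives: the output is one flat pile of objects and materials which only the Redshift Core can read back.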
  • 0 Votes
    3 Posts
    773 Views
    B
    Thank you @ferdinand for the help! Writing raw memory does help a lot, much appreciated!
  • 0 Votes
    6 Posts
    859 Views
    ferdinand
    Hello @uogygiuol,

    Thank you for the added details. Yes, reducing the complexity of questions is the right thing to do, thank you for doing it. Essays are counterproductive, as we then tend to overlook things (q.e.d., I overlooked the fact that you wanted to mangle the scene file in this thread).

    In general, trying to mangle a file beforehand is not a good route, as you always risk invalidating the file. For your very specific scenario - a very simple scene graph, just geometry, no materials, animations, or other dependencies - it could make sense. I briefly talked with the owner of our GLTF importer, and we do not do any sanity checking, e.g., comparing nodes with meshes. So, you could just 'clean up' the scene graph ("nodes") of the file, and Cinema's GLTF importer will then just ignore extra data in fields such as "meshes" (a minimal sketch of this idea follows at the end of this post). How fruitful this will be, you will have to find out yourself. I already had the hunch that you are surfing here on the edge of what is sensible, and GLTF JSON files which translate to gigabytes of memory are certainly an edge case, as text-based file formats are usually a bad choice for such heavy data.

    Using Python to Read JSON

    My guesstimate would be that when you throw a GLTF JSON file - one which takes five minutes to load in Cinema 4D - at Python's JSON parser to mangle it, you end up with a net loss or a tie, because you lose most of, or more than, the time won in that Python JSON stage. Python's json module is mostly written in C to make it performant, but that is still a lot of JSON to deserialize, modify, and then serialize.

    One idea could be to use re, i.e., regular expressions, to find the "nodes" section in that file, deserialize just that from JSON, modify it, serialize it back to a JSON string, and write it back in place, and by that sidestep having to deserialize the whole file. The problem with all that is that json.load allows you to pass a file object, letting you bypass the Python VM entirely and keep the data in C until the parsing is done, while re does not allow you to regex a file object directly (AFAIK); you always must read the file object into lines or chunks to then pass these strings to the re module. I.e., you would have to load that whole file into a Python string first. What would come out on top here, I have no clue, but my hunch is that re might lose, as Python's string handling is not the fastest. Alternatives might be third-party libs such as ijson (a lazy JSON parser), but I do not know how performant it is. For this approach it would make a huge difference if you could predict the position of "nodes" in the file, either exactly as a chunk offset, or in the form of 'I know that it is always very close to the end, so let's regex-parse the file in reverse'.

    Using a Binary File Format

    But the fact remains that text-format file types, e.g., JSON GLTF, become extremely inefficient once you pass the ~100 MB barrier. Using something like binary GLTF or another binary format such as FBX will likely speed up your Cinema 4D loading times quite a bit, no extra steps required. And to be clear, text-based file formats are always wildly inefficient. It is just that below the ~100 MB barrier (adjust for the beefiness of your machine), you can drown that inefficiency with pure computing power and have the nice advantage of a human-readable file format.

    Cheers,
    Ferdinand
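    As referenced above, here is a minimal sketch of the 'clean up the "nodes" section' idea, using only Python's json module. It takes the json.load route discussed above; the file names and the keep() rule are placeholders, and note that glTF references nodes by index (e.g., in "scenes" and in "children"), so a real clean-up would also have to remap those indices.

    import json

    def keep(node: dict) -> bool:
        # Placeholder rule: keep only nodes that reference a mesh.
        return "mesh" in node

    # Deserialize the whole file (this is the potentially slow part discussed above).
    with open("input.gltf", "r", encoding="utf-8") as f:
        gltf = json.load(f)

    # Thin out the scene graph; orphaned entries in "meshes" can stay, as the
    # importer does no sanity checking and simply ignores them.
    gltf["nodes"] = [n for n in gltf.get("nodes", []) if keep(n)]

    with open("cleaned.gltf", "w", encoding="utf-8") as f:
        json.dump(gltf, f)

    Whether this ends up faster overall than just importing the original file is exactly the open question discussed in the post above.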