Question about memory usage
-
THE POST BELOW IS MORE THAN 5 YEARS OLD. RELATED SUPPORT INFORMATION MIGHT BE OUTDATED OR DEPRECATED
On 28/02/2003 at 06:57, xxxxxxxx wrote:
User Information:
Cinema 4D Version: 8.012
Platform: Windows;
Language(s): C++;
Hi,
I have a somewhat complex memory question. I hope it hasn't been discussed too often in the past, but searching for "memory" doesn't really narrow things down...
A short description of what my plugin does:
It reads character movement samples (e.g. walking) from animation tracks into memory and copies them, independent of time, to characters in the scene. The current version stores these samples (position and rotation data of about 25 bone objects) in BaseContainers. This works fine as long as the movement samples don't exceed a length of about 100 frames; above that limit, performance becomes very slow!
So I think accessing the data would be much faster if it were stored in a memory block allocated by GeAlloc(). But I want to keep the advantage that the data is saved along with the scene.
My plan is: after the scene is loaded, the sample data is copied from the BaseContainers into a memory block allocated by GeAlloc(), and when the scene is about to be saved, the data is copied back into BaseContainers and the internal memory is freed (see the sketch at the end of this post).
So I need to know:
1. Is there any kind of message that tells my plugin the scene is about to be saved?
2. To avoid storing the data twice (in the BaseContainer AND in internal memory), the BaseContainer data should be freed after copying. Is that possible?
Or are there any suggestions on how to realize this in another way?
The amount of data for one movement sample is approx.:
25 Bones * 2 Vectors * 100 Frames = 5000 Vectors = 60000 Bytes
There are about 8 samples so far (with frame lengths ranging from 2 to 200), but I don't want to impose a limit.
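For illustration, here is a rough sketch of the planned copy step. The container ID offset, the helper names and the bone/vector layout are made up for this example, and it assumes the classic GeAlloc()/GeFree() allocators:

// One movement sample: boneCount bones * 2 vectors (position, rotation) per frame.
#define SAMPLE_DATA_BASE 10000  // hypothetical container ID offset

// Copy the vectors out of the BaseContainer into one flat memory block.
Vector* ContainerToBlock(const BaseContainer& bc, LONG boneCount, LONG frameCount)
{
    LONG    count = boneCount * 2 * frameCount;             // total number of vectors
    Vector* block = (Vector*)GeAlloc(count * sizeof(Vector));
    if (!block) return NULL;
    for (LONG i = 0; i < count; i++)
        block[i] = bc.GetVector(SAMPLE_DATA_BASE + i);      // linear search per access!
    return block;                                           // release with GeFree() later
}

// Copy the block back into a BaseContainer before the scene is written.
void BlockToContainer(BaseContainer& bc, const Vector* block, LONG count)
{
    for (LONG i = 0; i < count; i++)
        bc.SetVector(SAMPLE_DATA_BASE + i, block[i]);
}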
Klaus Heyne -
THE POST BELOW IS MORE THAN 5 YEARS OLD. RELATED SUPPORT INFORMATION MIGHT BE OUTDATED OR DEPRECATED
On 02/03/2003 at 15:50, xxxxxxxx wrote:
Best is to use
NodeData::Read(...)
NodeData::Write(...)
Misusing containers as arrays is a bad idea.
Michael -
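In case it helps, a skeleton of what that might look like. The class and member names are made up, and the exact virtual signatures differ between SDK versions:

// Sketch of a NodeData-based plugin that keeps the samples in plain memory
// instead of a BaseContainer (class and member names are hypothetical).
class MotionSampleData : public NodeData   // in practice e.g. via TagData/ObjectData
{
public:
    Vector* samples;      // boneCount * 2 * frameCount vectors, allocated with GeAlloc()
    LONG    sampleCount;  // number of vectors in 'samples'

    virtual Bool Init(GeListNode* node)
    {
        samples = NULL; sampleCount = 0;
        return TRUE;
    }
    virtual void Free(GeListNode* node)
    {
        GeFree(samples);  // data lives only here, never duplicated in a container
                          // (exact GeFree() signature varies between SDK versions)
    }

    // The samples become persistent by streaming them in Read()/Write().
    virtual Bool Read(GeListNode* node, HyperFile* hf, LONG level);
    virtual Bool Write(GeListNode* node, HyperFile* hf);
};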
THE POST BELOW IS MORE THAN 5 YEARS OLD. RELATED SUPPORT INFORMATION MIGHT BE OUTDATED OR DEPRECATED
On 05/03/2003 at 00:02, xxxxxxxx wrote:
Yes. BaseContainer uses a linear search to find its elements, so you'll incur an O(n^2) penalty when using it as an array. In NodeData::Write() you can either use HyperFile::WriteMemory() if you have an array of PODs, or loop over the array elements and make multiple typed HyperFile writes (WriteVector(), WriteReal(), etc.) if you have a more complex datatype.
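To illustrate both variants, continuing the hypothetical MotionSampleData skeleton sketched above (HyperFile method names and signatures may differ between SDK versions):

// Variant 1: write the whole sample block in one chunk (fine while the data is plain vectors).
Bool MotionSampleData::Write(GeListNode* node, HyperFile* hf)
{
    if (!hf->WriteLong(sampleCount)) return FALSE;
    // WriteMemory() stores a raw byte block; only suitable for plain data without pointers.
    return hf->WriteMemory(samples, sampleCount * sizeof(Vector));
}

Bool MotionSampleData::Read(GeListNode* node, HyperFile* hf, LONG level)
{
    if (!hf->ReadLong(&sampleCount)) return FALSE;
    void* mem  = NULL;
    VLONG size = 0;
    if (!hf->ReadMemory(&mem, &size)) return FALSE;   // ReadMemory() allocates the block
    samples = (Vector*)mem;
    return size == (VLONG)(sampleCount * sizeof(Vector));
}

// Variant 2: element by element, e.g. once the per-frame data is more than plain vectors:
//   for (LONG i = 0; i < sampleCount; i++)
//       if (!hf->WriteVector(samples[i])) return FALSE;
// and ReadVector(&samples[i]) correspondingly in Read().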