GeneratePrimitive
-
THE POST BELOW IS MORE THAN 5 YEARS OLD. RELATED SUPPORT INFORMATION MIGHT BE OUTDATED OR DEPRECATED
On 01/02/2003 at 08:23, xxxxxxxx wrote:
Creating objects in that way is not very efficient. Your main object (plugin object) can be cloned for many reasons, including just rendering. What I was meaning was that you will end up using resources that you don't need, and that takes vital memory away from other areas that may or may not need it. It isn't dangerous as in it'll explode the computer, but you should not get into bad habits, and that is one!
When creating a generator object, it is similar to creating a shader: it needs to be as efficient and fast as possible. Resize is not that fast, it has to duplicate the memory allocation and then refill it. It is useful, but in situations where you need speed and efficiency it isn't good to rely on it, and just creating one monster object is also not wise.
If you really can't work out how many points/polys your object is going to need, then you need to look at how your system works, or build the object up from smaller ones where you do know the sizes, but not too many (like one object per blade of grass!) otherwise you again run into efficiency issues. If you really have no way to know the overall size, then maybe it is a difficult choice between memory and speed; I guess it depends how much waste is left over.
... and now to your answers:
1> why join them? you don't have to!
2> NO NEVER! you should never repeat using resize over and over, it will be very very slow! resize itself is not a killer (Windows supports memory resizing, don't know about Mac, but any memory resize is not that efficient, nor allocation for that matter!) - see the sketch just after these answers.
3> is like a memory pool: if the waste is minor it is no problem, but if you really have no way to know how big your object will be, then the size is a guess? and if it is not a complete guess, you must know the final size, or at least approximately, so allocate on the big side? hope that is ok :o) or do you have shares in RAM
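A minimal illustration of answer 2 in plain C++ (no CINEMA 4D API; the function names are made up for the sketch): growing a buffer one element at a time can copy everything written so far on every step, roughly O(n^2) work overall, while one up-front allocation writes each element exactly once.

    #include <cstdlib>

    // Grows by one element per insert: each realloc may move and copy the
    // whole buffer, so n inserts cost O(n^2) copies in the worst case.
    float* fill_with_repeated_resize(int n) {
        float* buf = nullptr;
        for (int i = 0; i < n; ++i) {
            float* grown = (float*)std::realloc(buf, (i + 1) * sizeof(float));
            if (!grown) { std::free(buf); return nullptr; }
            buf = grown;
            buf[i] = (float)i;  // fill the new slot
        }
        return buf;
    }

    // Allocates once at the known final size: each element is written once.
    float* fill_with_single_allocation(int n) {
        float* buf = (float*)std::malloc(n * sizeof(float));
        if (!buf) return nullptr;
        for (int i = 0; i < n; ++i)
            buf[i] = (float)i;
        return buf;
    }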
Yes it is "safe", you won't crash, you can do anything you like to your polygon _<_o:_<_o:p_>_provided it is returned as a valid object.
There are many ways to create objects; the overall best way is to know just how big it will need to be! In the case of the project Samir is working on, last time I saw anything it was creating single objects in a hierarchy. That is fine, just like the Atom example: no join, no resize, no waste, nice and fast.
If it works, use it, but watch out that it doesn't bite back later. A few things should be kept in mind when programming anything, especially in something as CPU hungry as a 3D app: speed, efficiency, and can you do it faster! Always ask the last one: time test every part, know how long it takes, why it takes that long, and whether it can be faster and more efficient (a minimal timing sketch follows below). It takes time, but the end product is worth it! Maybe I'm just a bit old school, I still remember spending hours tweaking a few lines of 68000 assembler back in the good old days just to get that extra bit of speed
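A minimal sketch of the kind of per-part timing recommended above, using standard C++ <chrono> rather than any CINEMA 4D profiling API; the function name is made up for illustration:

    #include <chrono>
    #include <cstdio>

    // Times one build step; wrap each phase of your generator
    // (allocation, fill, normals, ...) to learn where the time goes.
    template <typename Fn>
    double time_step(const char* label, Fn step) {
        auto t0 = std::chrono::steady_clock::now();
        step();
        auto t1 = std::chrono::steady_clock::now();
        double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
        std::printf("%s: %.3f ms\n", label, ms);
        return ms;
    }

Timing each phase separately shows which one dominates before any optimisation effort is spent.
-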
On 01/02/2003 at 08:35, xxxxxxxx wrote:
1> why join them? you don't have to!
but if you can, do it
why?
prepare time before render.
100,000 objects with 1 poly = go have a cup of tea and eat something
1 object with 100,000 polys = not enough time to fart
cheers
Paul -
On 01/02/2003 at 09:11, xxxxxxxx wrote:
If you're going to preallocate memory, don't forget the trusty old amortized constant double-after-full allocation scheme. I.e. each time you run out of points or polygons you add as many as you have, doubling the number. (It wasn't clear if you already do it this way. Disregard this if you do.)
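A minimal sketch of double-after-full in plain C++ (std::vector's push_back uses essentially this kind of amortized-growth policy; the explicit version below just makes the mechanism visible):

    #include <cstdlib>

    // Growable index buffer using double-after-full: when capacity runs out,
    // the new capacity is twice the old one, so n appends cost O(n) copies
    // in total, i.e. amortized constant per append.
    struct GrowBuffer {
        int*   data     = nullptr;
        size_t count    = 0;
        size_t capacity = 0;

        bool append(int value) {
            if (count == capacity) {
                size_t newCap = capacity ? capacity * 2 : 16;  // double after full
                int* grown = (int*)std::realloc(data, newCap * sizeof(int));
                if (!grown) return false;
                data = grown;
                capacity = newCap;
            }
            data[count++] = value;
            return true;
        }
    };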
See this random link for more information: http://c2.com/cgi/wiki?DoubleAfterFull -
On 01/02/2003 at 09:28, xxxxxxxx wrote:
I'll print that and hang it on the wall,
for it will help me in those moments of doubt.
Much like Mary Poppins in her trusty old film, I'll sing the words
"trusty old amortized constant exponential allocation scheme", "trusty old amortized constant exponential allocation scheme"
until I feel better.
I hadn't called it that, but it's not far off what I've been doing.
thanks for that one -
On 01/02/2003 at 10:30, xxxxxxxx wrote:
That is the largest pile of dog stuff I ever read! That is seriously bad practice unless you really, really need dynamic array allocations! There are better ways! Really, there are!! Use your brain and memory, not the computer's! In the case of object allocations I stick by my point: allocate what you need, no more. You should -know- how big that is! Really, you should!
-
On 01/02/2003 at 11:30, xxxxxxxx wrote:
Oh, just one final (I think) point, the JOIN issue. I don't know what the official word is, but I can show you scenes where JOIN'd objects are actually slower in prepare/render than their unjoined versions! AFAIK, during prepare the polygons are converted to RayPolygons per object; during this conversion the validity of the polygon is checked, and if it is invalid it is split into tris or ignored, depending on the "problem" with the polygon. This is what takes the time: cloning the scene and converting the polygons to RayPolygons. I can't say I see why the number of objects should matter, other than that small memory allocations can (sometimes) be slower than one large one. But unless you are repeating this 100,000-odd times the difference is minor, with it going one way or the other depending on the system and allocation size; there is no perfect solution. I'll try and find out the official word on rendering, whether JOIN'd is really any better; my guess is, like I said, it comes down to the polygon count of the individual objects and the number of objects.
-
On 01/02/2003 at 14:37, xxxxxxxx wrote:
(Of course you're right that your method is better, but my comment was strictly on schemes for allocating memory on-the-fly without knowing the amount in advance. And object resizing *does* involve dynamic array allocations. If pre-calculation were out of the question, I'd be surprised if double-after-full weren't one of the better methods for allocation. I cannot think of many algorithms for which pre-calculation wouldn't be possible with two passes, though...)
-
On 02/02/2003 at 01:16, xxxxxxxx wrote:
I've had other discussions with people on dynamic allocations; I'm not in favour of the double method, since I don't like the idea of allocating 256MB when I only need 129MB. When talking just plain old boring memory allocations where you really have no way to know the final size, or even an approximate upper estimate of it (say from a hardware source that simply feeds in data), then sure, it is a valid method, just like adding a fixed size is; each is a trade-off of waste against a bit of extra speed, and the odd extra block allocation is not that bad. Personally I prefer the fixed-size method, where the size of each block is a good estimate or a tested trade-off. Allocating one block of memory isn't much overhead, allocating 10,000 or so starts to build up, and once you get into a million allocations you can go make a cup of tea.
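A minimal sketch of the fixed-size block method in plain C++ (the 4096-element block is an arbitrary stand-in for a measured estimate):

    #include <cstdlib>

    // Grows by a fixed block rather than doubling: worst-case waste is
    // bounded by one block, at the cost of more realloc calls than the
    // doubling scheme when the final count is very large.
    struct FixedGrowBuffer {
        static const size_t kBlock = 4096;  // pick from measurement, not guesswork
        float* data     = nullptr;
        size_t count    = 0;
        size_t capacity = 0;

        bool append(float value) {
            if (count == capacity) {
                size_t newCap = capacity + kBlock;  // fixed-size step
                float* grown = (float*)std::realloc(data, newCap * sizeof(float));
                if (!grown) return false;
                data = grown;
                capacity = newCap;
            }
            data[count++] = value;
            return true;
        }
    };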
I think we may have gone a little OT here; the initial points were hints/tips for better generator objects. I was just saying I didn't agree (depending on wastage) that allocating a single large object and then filling only what you wanted was a good thing for generators. As I said, I guess it depends on the waste: if you are talking about wasting a few KB, probably no problem; if we're talking MBs, then IMHO it is a serious problem!
Imagine a hair geometry object, say one from S&H. A fairly low hair count is 20,000 hairs; converted into polys you get an object of around 24MB. So let's say I'd increased the count just enough to get it to 33MB: the double-after-full system would have allocated me 64MB (say, depending on where you set your initial starting point, and since you don't know that, it too is a guess)...
So now the scene is cloned for render to the picture viewer (or NET for that matter), and that 64MB is turned into 128MB (64 in the editor, 64 in the picture viewer). I now have nearly 64MB of memory allocated that I'm not actually using, and worse, when the clone happened, CINEMA had to copy all the unused data too!
You also have to keep in mind what effect all that extra data (unset, unused) might have on other areas of CINEMA. A deformer, for example, deforms EVERY point, so in this case you could have many, many unused points it will now have to deform, and worse, it also has to clone them each time. See where I'm going with this? IMHO, allocate what you need, no more; it might look faster for you at the time, but when you come to use it, you're actually making it worse.
Just one thing comes to mind having re-read this thread (Paul): you said you were filling a mesh, and if it ran out you added extra (resize, double or fixed?). Was this points and polygons? And do you resize it back down at the end (to fit the exact number)? You weren't clear on that bit -
On 02/02/2003 at 01:38, xxxxxxxx wrote:
> So now the scene is cloned for render to the picture viewer (or NET for that matter), and that 64MB is turned into 128MB (64 in the editor, 64 in the picture viewer). I now have nearly 64MB of memory allocated that I'm not actually using, and worse, when the clone happened, CINEMA had to copy all the unused data too!
no, I think you missed the point.
if you have 32MB, then you run out, you alloc another 32MB so you've got 64.
you run the rest of your calculation and only needed another 5MB.
once you're finished, you resize to 37MB and return the object.
there is no wastage, it's a temporary buffer, nothing more.
it just saves you having to realloc on the fly / per hair, etc.
> You also have to keep in mind what effect all that extra data (unset, unused)...
nothing goes unused, cos you clear this "cache" when the object is finished.
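A minimal sketch of that temporary-buffer pattern in plain C++ (the function and parameter names are made up; the fill step is elided, since it is whatever the generator actually computes):

    #include <cstdlib>
    #include <cstddef>

    // Allocate a generous upper estimate, fill only what is needed, then
    // shrink the block once to the exact final size before returning it.
    float* build_with_temp_buffer(size_t maxPoints, size_t* outCount) {
        float* buf = (float*)std::malloc(maxPoints * 3 * sizeof(float));  // xyz per point
        if (!buf) return nullptr;

        size_t used = 0;
        // ... fill buf here, incrementing used per point written ...

        // one resize at the end instead of many along the way
        float* fitted = (float*)std::realloc(buf, (used ? used : 1) * 3 * sizeof(float));
        if (fitted) buf = fitted;  // on failure the larger block is still valid

        *outCount = used;
        return buf;
    }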
nice thread -
On 02/02/2003 at 01:45, xxxxxxxx wrote:
> Just one thing comes to mind having re-read this thread (Paul): you said you were filling a mesh, and if it ran out you added extra (resize, double or fixed?)
I guess the max size. I can almost always say it will be less than xxxx MB,
cos I don't want to be doing a full resize halfway through.
> was this points and polygons?
yes
> then do you resize it back down at the end (to fit the exact number)? you weren't clear on that bit
yes, exactly -
On 02/02/2003 at 01:50, xxxxxxxx wrote:
ROTFL! I think we've been talking across each other; that is why I suddenly asked if you were doing that (resize at the end). It didn't sound like it (to me), then I got thinking: well, hang on, what about all those unset polygons! In one of my previous rants I put:
"2> NO NEVER! you should never repeat using resize over and over"
So yes, you are completely correct not to keep resizing.
As a temporary allocation system, it doesn't make much difference what scheme you use, provided it doesn't grow too stupid and you hit virtual memory. That is one of my reasons for preferring the fixed-size method (rather than double): you have more control to keep the jumps low.
Just a slight OT again (I have not tested the speed issues, but I will): you can also consider not allocating an object until you do know the size; instead, buffer the data, probably in a MemoryPool, then when you have all the information, create your object and fill it. I don't know if this is faster or slower than, or about the same as, resize getting called for each block and then once to clean up. I use the buffer method in S&H since I have no choice: the data returned to me is per hair and in its own memory block, so I just cache these until I have them all, ready to build the object. Might be useful to know some rules for generators and methods for creating them; I don't know how common it is to not know your final size, especially if your own functions are creating the geometry.
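A minimal sketch of that buffering approach in plain C++, with std::vector standing in for the MemoryPool mentioned above (per-hair chunks are cached as they arrive, then copied once into an exactly-sized buffer when the total is known; all names are made up for illustration):

    #include <vector>
    #include <cstddef>

    // Cache per-hair point chunks, then build one exactly-sized buffer
    // once the final total is known.
    struct ChunkedBuilder {
        std::vector<std::vector<float>> chunks;  // one entry per hair

        void addChunk(const float* pts, size_t n) {
            chunks.emplace_back(pts, pts + n);   // copy the incoming block
        }

        std::vector<float> build() const {
            size_t total = 0;
            for (const auto& c : chunks) total += c.size();

            std::vector<float> out;
            out.reserve(total);                  // allocate once, no resizing
            for (const auto& c : chunks)
                out.insert(out.end(), c.begin(), c.end());
            return out;
        }
    };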
Glad we cleared it up; you got me wondering what you were doing there for a minute. Next time I'll ask and not leap