R12 - plugin conversion
-
THE POST BELOW IS MORE THAN 5 YEARS OLD. RELATED SUPPORT INFORMATION MIGHT BE OUTDATED OR DEPRECATED
On 18/11/2010 at 07:23, xxxxxxxx wrote:
Hi Lorenzo,
yes, you are absolutely right. That doesn´t make much sense indeed. I would also expect the ReadSVector to be the equivalent operation for reading pre-R12 vectors written with WriteVector. Actually when I converted my project to R12 I simply assumed that is the case (nothing else would make sense really).
Could someone officially comment on this? And would this behavior also apply to ReadReal and ReadSReal accordingly?
I hope not, otherwise this would mean rewriting, contacting all my customers, and also doing unnecessary conversions to SVectors after reading into Vectors. Oh, come on...
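To make the question concrete, here is a minimal sketch of the kind of pre-R12 write path under discussion (the class and member names are hypothetical; in R11.5 Vector is single precision, so WriteVector() should store three 32-bit floats on disk):

Bool MyObjectData::Write(GeListNode* node, HyperFile* hf)
{
    // defpos is GeDynamicArray<Vector>; single precision in the 11.5 build
    LONG cnt = defpos.GetCount();
    if (!hf->WriteLong(cnt)) return FALSE;
    for (LONG i = 0; i < cnt; ++i)
    {
        if (!hf->WriteVector(defpos[i])) return FALSE; // 3 x 32-bit floats in 11.5
    }
    return TRUE;
}

The open question is which HyperFile read call matches that on-disk layout once the same plugin is rebuilt against R12, where Vector has become double precision.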
-
On 18/11/2010 at 07:28, xxxxxxxx wrote:
Please post the code line of your array construction.
cheers,
Matthias -
On 18/11/2010 at 07:42, xxxxxxxx wrote:
My array declaration is this:
GeDynamicArray<Vector> defpos;
in both versions, but in R12 it is clearly double precision.
Thanks
-
On 18/11/2010 at 07:52, xxxxxxxx wrote:
Originally posted by xxxxxxxx
My array declaration is this:
GeDynamicArray<Vector> defpos;
in both versions, but in R12 it is clearly double precision.
Thanks
Ah, I thought as much. Vector is defined as LVector in R12. If you don't want to convert to double precision, you have to construct your array with SVector and use ReadSVector().
cheers,
Matthias -
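A minimal sketch of the R12 read path this implies, assuming the file was written with WriteVector() in 11.5 (the member names, and the GeDynamicArray calls FreeArray()/Push(), are taken from memory, so treat this as illustrative rather than authoritative):

Bool MyObjectData::Read(GeListNode* node, HyperFile* hf, LONG level)
{
    LONG cnt = 0;
    if (!hf->ReadLong(&cnt)) return FALSE;
    defpos.FreeArray();                          // defpos: GeDynamicArray<Vector> (LVector in R12)
    for (LONG i = 0; i < cnt; ++i)
    {
        SVector sv;
        if (!hf->ReadSVector(&sv)) return FALSE; // matches the single-precision 11.5 data
        defpos.Push(Vector(sv.x, sv.y, sv.z));   // widen to double precision
    }
    return TRUE;
}

Alternatively, as Matthias says, declaring the array as GeDynamicArray<SVector> keeps everything single precision and lets you read straight into the elements with ReadSVector().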
On 18/11/2010 at 08:12, xxxxxxxx wrote:
thank god.
-
On 18/11/2010 at 08:16, xxxxxxxx wrote:
Sorry, but I don't understand!
The problem is that in R12, to read a Vector struct from an 11.5 scene (which corresponds to SVector in R12), I used an SVector which I then converted to LVector with ve.ToLV() (see the example posted above, the WRONG WAY code).
But this doesn't work!
I hope I am being clear. -
On 18/11/2010 at 08:20, xxxxxxxx wrote:
You need to use GeDynamicArray<SVector> to get the same as in R11.5. Oh wait, now I see what you mean. Right, it actually should still be correct to use ReadSVector() to read data written with WriteVector() in 11.5 (which you say doesn't work, right?)
-
On 18/11/2010 at 08:25, xxxxxxxx wrote:
Ok, I will do some tests.
cheers,
Matthias -
On 18/11/2010 at 08:29, xxxxxxxx wrote:
Originally posted by xxxxxxxx
You need to use GeDynamicArray<SVector> to get the same as in R11.5.
Oh wait, now I see what you mean. Right, it actually should still be correct to use ReadSVector() to read data written with WriteVector() in 11.5 (which you say doesn't work, right?)
Yes, the problem is that on reading I don't use the array Vector directly; I use a temporary variable typed as an SVector to match the R11 struct!
-
On 18/11/2010 at 08:33, xxxxxxxx wrote:
Exactly as Matthias explains. If you want R12 <-> R11 compatibility with binary files, then you will need to remember type sizes. I had fun when dealing with Real, since R11's Real is equivalent to SReal in R12, but Real in R12 is LReal (as can be seen if you hover over a variable in VC++). Same situation as with Vector.
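The same pattern as for vectors should apply to scalars. A hedged sketch, assuming R12's HyperFile offers ReadSReal() alongside ReadSVector() (the question earlier in the thread suggests it does):

// A Real written in 11.5 was single precision on disk,
// so read it back as an SReal and widen afterwards.
SReal s = 0.0f;
if (!hf->ReadSReal(&s)) return FALSE;
Real value = s;   // Real is LReal (double precision) in R12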
This is why I am an advocate for types by size (but we can keep our general types too!), unlike how it is with 'long' changing its size depending on the bit-width. I want these standard types instead:
BYTE/UBYTE: 8 bits
WORD/UWORD: 16 bits
LONG/ULONG: 32 bits
LLONG/ULLONG: 64 bits
DLONG/UDLONG: 128 bits
etc.
The problem is that no one codes to a standard like this. They have WriteLong(LONG lv), and LONG can be 32, 64, 128, or however many bits depending on the configuration. Bad in my book. Great when you never read or write files (maybe). Painful and a minefield otherwise, whenever you need backward compatibility or have to support both 32-bit and 64-bit systems, for instance.
In other words, instead of changing the definition of a type name by system (like how int can be any one of many, many sizes depending on OS and bit-width - now that is a minefield, and I never use int any longer), make type names that encode the bit size once and for all. We are never going to have 3-bit or 54-bit types. Bit widths are always a power of 2, starting at 8 (8, 16, 32, 64, 128, 256, 512, 1024, 2048, etc.). My mantra: make it explicit. int isn't explicit - it is overly fluid.
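As an illustration of the 'types by size' idea in plain C++ (not the C4D API), fixed-width aliases pin the on-disk layout down regardless of platform or bit-width:

#include <cstdint>
#include <cstdio>

// On-disk record with explicit sizes; the layout does not change
// when the code is built for 32-bit or 64-bit targets.
struct RecordHeader
{
    uint32_t polygonCount;   // always 32 bits
    uint32_t vertexCount;    // always 32 bits
    uint64_t fileOffset;     // always 64 bits
};

static bool WriteHeader(std::FILE* fp, const RecordHeader& h)
{
    // Write field by field to avoid struct-padding surprises
    return std::fwrite(&h.polygonCount, sizeof h.polygonCount, 1, fp) == 1
        && std::fwrite(&h.vertexCount,  sizeof h.vertexCount,  1, fp) == 1
        && std::fwrite(&h.fileOffset,   sizeof h.fileOffset,   1, fp) == 1;
}

Endianness still has to be agreed on separately, but the field sizes themselves can never drift.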
I will disembark from my soap-box now.
-
On 02/12/2010 at 22:21, xxxxxxxx wrote:
Amen, Brother Robert :).
I have actually run across SDK-style example code for reading/writing a proprietary 3D file format (examples provided by the designer/author of the format) that had 'int' slathered all through it (in the structures being written/read). Needless to say, I sent him an e-mail on the subject :). Of course, this format also used 16-bit word values to store "number of polygons" and "number of vertices", so it wasn't exactly a modern, forward-thinking format.