
    LONG ULONG demoted to int in 64-bit?

      THE POST BELOW IS MORE THAN 5 YEARS OLD. RELATED SUPPORT INFORMATION MIGHT BE OUTDATED OR DEPRECATED

      On 13/09/2008 at 15:27, xxxxxxxx wrote:

      User Information:
      Cinema 4D Version:   R11 
      Platform:      Mac OSX  ; 
      Language(s) :     C++  ;

      ---------
      This is causing me no end of grief. I thought it went like this:

      byte/char = 8-bits
      word/short = 16-bits
      long = 32-bits
      long long = 64-bits
      int = varies (but obviously not seen as long int here)

      So, now that you've redefined LONG as int and ULONG as unsigned int on 64-bit Mac, Xcode warns and errors on any LONG/ULONG used where it expects a 32-bit value (GeData, %ld in <x>printf() calls, etc.). I'm also getting notes about the GeData static type setting ("Line Location: c4d_gedata.h: <blah>").
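      For illustration, a minimal sketch (not code from the original post) of the kind of call that now triggers the warning, assuming the R11 definition of LONG as int on 64-bit Mac described above:

          #include <cstdio>

          typedef int LONG;   // assumption: stands in for the 64-bit Mac definition described above

          int main()
          {
              LONG count = 42;
              printf("%ld\n", count);   // warning on LP64: "%ld" expects a 64-bit long, argument is a 32-bit int
              printf("%d\n", count);    // matches the actual 32-bit type
              return 0;
          }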

      Why did you do this? Any way to suppress the notes or should they just be ignored?

      Thanks,

        On 15/09/2008 at 08:35, xxxxxxxx wrote:

        Quote: "This is causing me no end of grief. I thought it went like this:
        >
        > * * *
        >
        >
        > byte/char = 8-bits
        > word/short = 16-bits
        > long = 32-bits
        > long long = 64-bits
        > int = varies (but obviously not seen as long int here)
        >
        > * * *

        This assumption is false. On Windows you use the LLP64 model; on OS X and Linux you use the LP64 model.

        Win64:
        - pointers are 64 bit
        - <int> and <long> are 32 bit
        - <long long> is 64 bit

        OS X / Linux 64 bit:
        - pointers are 64 bit
        - <int> is 32 bit
        - <long> is 64 bit

        And that leads us to the source of your problem: the standard C functions (like
        printf) expect a 64-bit value on OS X when using the "%ld" format string, and a 32-bit value when compiling on Windows.
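        A quick sanity check (a sketch for illustration, not code from this reply): compile and run it once on Win64 and once on 64-bit OS X/Linux and compare the output.

            #include <cstdio>

            int main()
            {
                // cast to int so the format string is valid regardless of the data model
                printf("int:       %d bytes\n", (int)sizeof(int));        // 4 on LLP64 and LP64
                printf("long:      %d bytes\n", (int)sizeof(long));       // 4 on Win64, 8 on 64-bit OS X/Linux
                printf("long long: %d bytes\n", (int)sizeof(long long));  // 8 on both
                printf("pointer:   %d bytes\n", (int)sizeof(void*));      // 8 on any 64-bit target
                return 0;
            }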

        This can introduce extremely hard-to-find bugs when using something like scanf(), because the input functions might expect 64-bit data (and that can get even worse on PPC, where the byte order differs as well).

        How to get around this mess:
        1. Use proper format strings for printf/scanf (it has to be "%d" not "%ld")
        2. Use Cinema's data types (like LONG)
        3. Where appropriate, use Cinema's data types VLONG/VULONG
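        A hedged sketch of points 1 and 2 (the LONG typedef here only stands in for the real SDK header, which defines it per platform):

            #include <cstdio>

            typedef int LONG;   // assumption: mirrors the 64-bit Mac definition discussed above

            void PrintCount(LONG value)
            {
                printf("%d\n", (int)value);           // point 1: "%d", because LONG is 32 bit here
                printf("%lld\n", (long long)value);   // or widen explicitly so one format string works everywhere
            }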

        Best regards

        Wilfried

          On 15/09/2008 at 11:56, xxxxxxxx wrote:

          At least somebody had the balls to respond. 😉

          1. That sucks. I use <x>printf because C4D API equivalents are notorious for bad float conversions (1.234 instead of 1.23456789, no exponential notation, etc.). I could break out the integer values using the C4D equivalents though and only use <x>printf for float values.

          2. Always and everywhere. GeData(LONG) can't possibly be an issue with using other data types. The variables used are LONG to begin with.
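          For illustration of the float-formatting point in 1. above (hypothetical values, not taken from the post), the C runtime gives direct control over precision and exponential notation:

              #include <cstdio>

              int main()
              {
                  double v = 1.23456789;
                  printf("%.8f\n", v);   // "1.23456789" - fixed precision, no truncation to 1.234
                  printf("%e\n", v);     // "1.234568e+00" - exponential notation
                  return 0;
              }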

          Thanks,

            On 15/09/2008 at 12:33, xxxxxxxx wrote:

            Quote: Originally posted by kuroyume0161 on 15 September 2008
            >
            > At least somebody had the balls to respond. 😉
            >
            > 1. That sucks. I use <x>printf because C4D API equivalents are notorious for bad float conversions (1.234 instead of 1.23456789, no exponential notation, etc.). I could break out the integer values using the C4D equivalents though and only use <x>printf for float values.

            Well, it is fine to use it. I just want to make sure you know what you are doing 🙂 (we spent an awful lot of time hunting a (very old) bug inside some interpreter code on PPC64 that - falsely - used %ld when it expected a 32-bit value and got 64 bits instead...).

            Therefore I suggest: Read K&R carefully and check your printf/scanf format strings!

            Best regards,

            Wilfried

              On 15/09/2008 at 14:51, xxxxxxxx wrote:

              I used K&R to learn C (starting many years ago back in my Amiga days around 1989). 🙂 And I've used Stroustrup for C++. K&R and Stroustrup mention the 'l' specifier as 'long int' for '%ld'. The C99 specification adds 'll' for 'long long int', for instance. I do realize that the sizes behind these monikers (word, int, long) are not writ in stone and may change with the system. It's just bad that '%ld' now means 64-bit in this one instance, whereas '%lld' IS the actually appropriate choice here - as far as I'm concerned. I can never understand how Apple makes these decisions. 😉
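              Not something from this thread, but one way to sidestep the %ld/%lld ambiguity entirely is to pair fixed-width types with the PRI* macros from <cinttypes> (C99/C++11), so the format string always follows the type:

                  #include <cinttypes>
                  #include <cstdio>

                  int main()
                  {
                      int64_t big = 123456789012345LL;
                      // PRId64 expands to "lld" or "ld", whichever this platform's int64_t needs
                      printf("%" PRId64 "\n", big);
                      return 0;
                  }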

                On 16/09/2008 at 02:07, xxxxxxxx wrote:

                Quote: Originally posted by kuroyume0161 on 15 September 2008
                >
                > I used K&R to learn C (starting many years ago back in my Amiga days around 1989). 🙂 And I've used Stroustrup for C++. K&R and Stroustrup mention the 'l' specifier as 'long int' for '%ld'. The C99 specification adds 'll' for 'long long int', for instance. I do realize that the sizes behind these monikers (word, int, long) are not writ in stone and may change with the system. It's just bad that '%ld' now means 64-bit in this one instance, whereas '%lld' IS the actually appropriate choice here - as far as I'm concerned. I can never understand how Apple makes these decisions. 😉

                It is more or less MS's decision to use a different model. All Unix vendors chose the LP64 model (<long> is 64 bit) - and that was a long time before MS created their first 64-bit system.

                MS chose LLP64 (<long long> is 64 bit) - I guess - because otherwise they would have killed COM and other 32-bit APIs in 64 bit...

                Best regards,

                Wilfried

                P.S. (as we are talking about stumbling blocks): If you are using C-runtime library functions with wchar_t (like wmemset(), etc.), be aware of the different definitions on Windows and Unix systems. wchar_t is 16 bit on Windows, but 32 bit on Unix systems.
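                A small hedged illustration of that caveat (not from the original post): the same call touches a different number of bytes per character depending on the platform.

                    #include <cstdio>
                    #include <cwchar>

                    int main()
                    {
                        wchar_t buffer[8];
                        std::wmemset(buffer, L'x', 8);   // fills 8 wchar_t elements, not 8 bytes
                        printf("wchar_t: %d bytes -> buffer: %d bytes\n",
                               (int)sizeof(wchar_t), (int)sizeof(buffer));   // 2/16 on Windows, 4/32 on Unix
                        return 0;
                    }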
