Typedefs

| Keyword | Declaration |
| --- | --- |
| using | Int16 = int16_t |
| using | UInt16 = uint16_t |
| using | Int32 = int32_t |
| using | UInt32 = uint32_t |
| using | Int64 = int64_t |
| using | UInt64 = uint64_t |
| typedef bool | Bool |
| typedef float | Float32 |
| typedef double | Float64 |
| typedef char | Char |
| typedef unsigned char | UChar |
| typedef Int64 | Int |
| typedef UInt64 | UInt |
| typedef Float64 | Float |
| typedef char16_t | UniChar |
| using Int16 = int16_t |
16-bit signed integer datatype.
| using UInt16 = uint16_t |
16-bit unsigned integer datatype.
| using Int32 = int32_t |
32-bit signed integer datatype.
| using UInt32 = uint32_t |
32-bit unsigned integer datatype.
| using Int64 = int64_t |
64-bit signed integer datatype.
| using UInt64 = uint64_t |
64-bit unsigned integer datatype.
| typedef bool Bool |
Boolean type; the only possible values are false and true; 8 bits wide.
| typedef float Float32 |
32-bit floating-point value (float).
| typedef double Float64 |
64-bit floating-point value (double).
| typedef char Char |
Signed 8-bit character.
| typedef unsigned char UChar |
Unsigned 8-bit character.
| typedef Float64 Float |
Current floating-point model. It is currently defined as Float64 (64-bit), but it may be redefined to Float32 at any time.
| typedef char16_t UniChar |
16-bit Unicode character. UniChar is the datatype for a single 16-bit Unicode character.