Post by Qu0ll:
> What are the main issues when developing software with C++ which needs
> to support both 32-bit and 64-bit operating systems?
> Is C++ code portable across these different architectures?
> Any special "gotchas" in writing portable C++ code of this nature?
> --
> And loving it,
> -Qu0ll (Rare, not extinct)
A bit of a tautology, but portable code will be portable, and unportable
code might not be.
The biggest catch is assuming things about types that are not promised
by the standard. On modern machines you can generally make a few
assumptions the standard doesn't guarantee (especially if your concern
is only 32 vs. 64 bits): things like numbers being two's complement
and char being 8 bits. Sometimes these can simplify your code, but you
do need to watch out that you don't get too used to assumptions that
aren't as universal.
Back in the days of 16-bit processors, people got used to int being
EXACTLY 16 bits, and long being twice its size. When we moved to 32
bits, int was most often 32 bits and the same size as long, which could
cause a gotcha if you had converted an int to long before multiplying
(or before summing a long list of numbers) to avoid overflow: on the
new platform that widening no longer bought you any extra range.
Moving to 64 bits, even more of those assumptions about type sizes get
shaken up.
char is almost certainly 8 bits on your mainline modern processor (but
there are still a few oddballs, mostly DSPs, where this isn't true).
short will then almost certainly be 16 bits.
We used to be able to assume long would be 32 bits, and int would be
the same size as either short or long. On a 64-bit processor, the
"natural" size of an int would be 64 bits, and long would then need to
be at least that size. On the other hand, it is often desirable for
there to be a "native" type for each of 8/16/32/64 bits, which pushes
int to be no bigger than 32 bits (if int were 64 bits, then the 32-bit
type would have to be short, and there would be no 16-bit type). This
is why the common 64-bit data models (LP64 on Unix-like systems, LLP64
on Windows) both keep int at 32 bits.
For "portable" code, this shouldn't matter: you would use an int only
when its exact size really didn't matter, and you wouldn't assume
relationships between types that aren't promised. In practice, many
programmers use whichever native type "works" on their usual platform,
and that is what gives portability problems.
One key to avoiding this is to minimize the use of the "native" types,
whose sizes are less predictable, and to instead use typedefs that can
be adjusted in one place to meet the needs of the program on each
platform, often automatically with some conditional compilation. The
types in stdint.h (<cstdint> in C++) help a LOT with this kind of
coding.
Need a 32-bit type? That's int32_t. OK for it to be bigger if that
helps? Then int_least32_t, or int_fast32_t if you really want speed and
care less about space. Often it is better to also use a typedef (or a
proxy class) for values in specific domains, so you can easily adjust
it. For example, if you are storing speeds in 16-bit numbers
(appropriately scaled), then you can use a typedef int16_t speed;
statement to define it and use the type speed to declare these
variables.
If you later realize you need 32 bits, you can change just that one
statement and they all change (you do still need to double check other
parts of your code that might have made assumptions, like converting a
speed and a time to a distance). Even more C++-flavored would be to
define classes for these types, which would keep you from accidentally
storing a position into a speed variable.