Post by mark
Post by Alf P. Steinbach
As mentioned, the /language/ doesn't support this. `ptrdiff_t` is the
type of a pointer difference expression, any pointer difference
expression.
The language is broken.
Wrong about this.
Post by mark
ptrdiff_t doesn't need to support arbitrary pointer differences,
Right.
Post by mark
<<<
When two pointers to elements of the same array object are subtracted,
the result is the difference of the subscripts of the two array
elements. The type of the result is an implementation-defined signed
integral type; this type shall be the same type that is defined as
std::ptrdiff_t in the <cstddef> header (18.2). As with any other
arithmetic overflow, if the result does not fit in the space provided,
the behavior is undefined.
Good.
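To make the quoted rule concrete, here's a minimal sketch, assuming a
32-bit target where `PTRDIFF_MAX` is 2^31 − 1, and assuming an
allocation this large even succeeds there:

    #include <cstddef>
    #include <cstdlib>

    int main()
    {
        // 3 GB, which is more than PTRDIFF_MAX (2^31 - 1) on a 32-bit target.
        std::size_t const n = std::size_t( 3 ) * 1024 * 1024 * 1024;
        char* const p = static_cast<char*>( std::malloc( n ) );
        if( p != nullptr )
        {
            // The subscripts 0 and n differ by more than PTRDIFF_MAX, so by
            // the quoted wording the result doesn't fit: undefined behavior.
            std::ptrdiff_t const length = (p + n) - p;
            (void) length;
            std::free( p );
        }
    }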
Post by mark
Your n_items_of() has undefined behavior.
Wrong.
Post by mark
From what I can tell, the only restriction on ptrdiff_t is that it is
signed. But apparently, per C and C++ standards, it could theoretically
be int8_t and it could be a smaller type than size_t.
No, it's required, by the C standard, to be at least 17 bits. And that's
not a typo; I didn't mean to write 16.
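That is, the minimum magnitudes for ptrdiff_t's limits are 65535 both
ways, which takes 16 value bits plus a sign bit. A quick compile-time
check, just a sketch:

    #include <cstdint>

    // PTRDIFF_MIN must be -65535 or lower and PTRDIFF_MAX must be +65535 or
    // higher, i.e. a conforming ptrdiff_t has at least 17 bits.
    static_assert( PTRDIFF_MIN <= -65535, "non-conforming ptrdiff_t range" );
    static_assert( PTRDIFF_MAX >= +65535, "non-conforming ptrdiff_t range" );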
Post by mark
Post by Alf P. Steinbach
And it overflows for this case, which means that you need to
treat that exceptionally large array very specially: you need to avoid
handing it to code with any possible pointer differences.
How do you know in advance how large the array is?
You don't, in general.
Just like you don't know exactly the required stack size.
And there are other problems with such (relative to 32-bit coding) large
arrays, including that an OS built on the idea of
we-at-Microsoft-make-software-that-knows-better-than-you-what-you-need,
such as Windows, can just start thrashing, swapping to and from disk,
with no way to configure that behavior off.
Post by mark
Post by Alf P. Steinbach
In other words, you're out of bounds of the language.
???
The core language doesn't support that array size in general, because
the type of a pointer difference is `ptrdiff_t`.
The standard library does not support it, since it also defaults its
difference types, e.g. for `std::distance`, to `ptrdiff_t`.
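For example, as a quick compile-time check (a sketch, stating nothing
that isn't already guaranteed for plain pointers):

    #include <cstddef>
    #include <iterator>
    #include <type_traits>

    // For pointers the iterator machinery reports distances as ptrdiff_t,
    // and std::distance( first, last ) returns that same difference_type.
    static_assert( std::is_same< std::iterator_traits<int*>::difference_type,
                                 std::ptrdiff_t >::value,
                   "pointer distances are ptrdiff_t" );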
Post by mark
Post by Alf P. Steinbach
It is the single example where overflow occurs, and it's not relevant
since
Post by Alf P. Steinbach
* It's not supported by the language.
The language supports an array size that's larger than ptrdiff_t can
represent. Why don't you point out where the standard disallows it,
You just did, above.
It's not disallowed. It's just /partly/ supported. Which means it's not
generally or fully supported.
You have to be very careful what you do with it, lest things go haywire.
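Careful meaning, for example, that with such an object the element count
has to travel alongside the pointer as a `size_t`, rather than being
recovered from a pointer difference. A sketch:

    #include <cstddef>

    // Sketch: pass the item count along as a size_t instead of recovering
    // it from `last - first`, so that no pointer difference that might not
    // fit in ptrdiff_t is ever formed.
    void zero_fill( char* const first, std::size_t const n )
    {
        for( std::size_t i = 0; i < n; ++i ) { first[i] = 0; }
    }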
Post by mark
instead of throwing around insults?
Oh, sorry about that.
We'll see where this goes.
Post by mark
Post by Alf P. Steinbach
* It does not occur in practice.
It does occur in practice.
Fire those devs.
Post by mark
Post by Alf P. Steinbach
* It's not even supported by Windows (although Windows can be configured
to support it).
Re the last point, a 32-bit Windows application, without special
configuration of Windows, only has 2GB memory available to it.
32-bit programs running on x64 Windows get 4GB of address space without
any special configuration. They just need to be compiled with the
"largeaddressaware" flag.
Well, there's more to it than that, including bugs in the Windows API
(that is, careful with that axe, Eugene), but the main sub-point is that
this example, the single one of its kind, does not occur naturally.
That's probably why the language design, with its limits on pointer
arithmetic, simply assumes that it doesn't occur.
[snip]
Post by mark
Post by Alf P. Steinbach
Post by mark
I have seen exactly this kind of bug in production code (e.g. with
people using 64-bit ints for large file access).
No doubt you have. It's so common, allocating arrays of more than 2 GB
in 32-bit code.
Did I say it's common? It was a security bug. A large vector was
allocated shortly after program start, the size being calculated based
on untrusted input. Eventually, there was a comparison between the size
and an int variable leading to memory corruption.
A bug in someone's program means that the function I presented was
unsafe, for they could in theory have used that function in their buggy
code, yes?
No. It doesn't work that way.
A function isn't unsafe because it's possible for someone's buggy code
to give it arguments outside its preconditions.
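For the record, the kind of mismatch mark describes, a size_t size
checked against an int, typically looks something like this (a sketch
with invented names, not his actual code):

    #include <cstring>
    #include <vector>

    // Sketch only: the vector's size was computed from untrusted input.
    void handle_message( std::vector<char> const& data )
    {
        char buffer[4096];
        int const buffer_size = sizeof( buffer );

        // When data.size() exceeds INT_MAX the narrowing conversion below
        // can wrap to a small or negative value, the "does it fit?" check
        // passes, and the memcpy then writes far past the end of buffer.
        int const n = static_cast<int>( data.size() );
        if( n <= buffer_size )
        {
            std::memcpy( buffer, data.data(), data.size() );
        }
    }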
Post by mark
The same kind of thing can happen on platforms where int == int16_t,
size_t == uint16_t, ptrdiff_t == int.
No, `ptrdiff_t` is not allowed to be 16 bits or less, in a conforming
implementation, because the C standard requires its limits to be equal
or greater in magnitude than 65535: −65535 for the lower limit, +65535
for the upper limit.
Cheers & hth.,
- Alf