Discussion:
"White House to Developers: Using C or C++ Invites Cybersecurity Risks"
Lynn McGuire
2024-03-02 23:13:56 UTC
"White House to Developers: Using C or C++ Invites Cybersecurity Risks"

https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks

"The Biden administration backs a switch to more memory-safe programming
languages. The tech industry sees their point, but it won't be easy."

No. The feddies want to regulate software development very much. They
have been talking about it for at least 20 years now. This is a very
bad thing.

Lynn
Lawrence D'Oliveiro
2024-03-03 00:05:28 UTC
Post by Lynn McGuire
The feddies want to regulate software development very much.
Given the high occurrence of embarrassing mistakes companies have been
making with their code, and continue to make, it’s quite clear they’re not
capable of regulating this issue themselves.

I wouldn’t worry about companies tripping over and hurting themselves, but
when the consequences are security leaks, not of information belonging to
those companies, but to their innocent customers/users who are often
unaware that those companies even had that information, then it’s quite
clear that Government has to step in.

Because if they don’t, then who will?
Chris M. Thomasson
2024-03-03 21:42:07 UTC
Post by Lawrence D'Oliveiro
Post by Lynn McGuire
The feddies want to regulate software development very much.
Given the high occurrence of embarrassing mistakes companies have been
making with their code, and continue to make, it’s quite clear they’re not
capable of regulating this issue themselves.
Oh my. C/C++ compilers are banned worldwide. They even have reeducation
camps that they will confine you to. You know, to learn the one true
way... If you make a bug using the one true way, you risk a firing
squad? lol. ;^)
Post by Lawrence D'Oliveiro
I wouldn’t worry about companies tripping over and hurting themselves, but
when the consequences are security leaks, not of information belonging to
those companies, but to their innocent customers/users who are often
unaware that those companies even had that information, then it’s quite
clear that Government has to step in.
Because if they don’t, then who will?
lol.
John McCue
2024-03-03 02:10:03 UTC
trimmed followups to comp.lang.c

In comp.lang.c Lynn McGuire <***@gmail.com> wrote:
<snip>
Post by Lynn McGuire
"The Biden administration backs a switch to more memory-safe programming
languages. The tech industry sees their point, but it won't be easy."
No. The feddies want to regulate software development very much. They
have been talking about it for at least 20 years now. This is a very
bad thing.
Well to be fair, the feds' regulations in the 60s made COBOL and
FORTRAN very popular, plus POSIX later on. All they did was
say "we will not buy anything unless ... rules".

From "The C Programming Language Quotes by Brian W. Kernighan".
Nevertheless, C retains the basic philosophy that
programmers know what they are doing; it only requires
that they state their intentions explicitly.
If programmers were given time to test and develop, many
issues would not exist. Anyone who has ever worked for a
large company knows the pressure that exists to get things
done quickly instead of right. So all these issues I blame
on management.

How many times have we heard "ship it now, you can fix later"
and "later" never comes. :)

Rust will never fix policy issues; just different and maybe worse
issues will happen.
Post by Lynn McGuire
Lynn
--
[t]csh(1) - "An elegant shell, for a more... civilized age."
- Paraphrasing Star Wars
Lawrence D'Oliveiro
2024-03-03 03:30:17 UTC
Well to be fair, the feds regulations in the 60s made COBOL and FORTRAN
very popular, plus POSIX later on.
The US Government purchasing rules on POSIX were sufficiently sketchy that
Microsoft was able to satisfy them easily with Windows NT, while supplying
a “POSIX” subsystem that was essentially unusable.

And then Microsoft went on to render POSIX largely irrelevant by eating
all the proprietary “Unix” vendors alive.

Nowadays, POSIX (and *nix generally) is undergoing a resurgence because of
Linux and Open Source. Developers are discovering that the Linux ecosystem
offers a much more productive development environment for a code-sharing,
code-reusing, Web-centric world than anything Microsoft can offer.
Blue-Maned_Hawk
2024-03-03 08:54:36 UTC
Post by Lawrence D'Oliveiro
Nowadays, POSIX (and *nix generally) is undergoing a resurgence because
of Linux and Open Source. Developers are discovering that the Linux
ecosystem offers a much more productive development environment for a
code-sharing, code-reusing, Web-centric world than anything Microsoft
can offer.
I do not want to live in a web-centric world. I would much rather see
other, better uses of the internet become widespread.
--
Blue-Maned_Hawk│shortens to
Hawk│/
blu.mɛin.dʰak/
│he/him/his/himself/Mr.
blue-maned_hawk.srht.site
Special thanks to misinformed hipsters!
Lawrence D'Oliveiro
2024-03-03 20:11:14 UTC
Post by Blue-Maned_Hawk
I do not want to live in a web-centric world.
You already do.
Chris M. Thomasson
2024-03-03 21:49:15 UTC
Post by Lawrence D'Oliveiro
Post by Blue-Maned_Hawk
I do not want to live in a web-centric world.
You already do.
You are not an AI, right? ;^)
Blue-Maned_Hawk
2024-03-03 22:11:14 UTC
Post by Lawrence D'Oliveiro
Post by Blue-Maned_Hawk
I do not want to live in a web-centric world.
You already do.
That does not change the veracity of my statement.
--
Blue-Maned_Hawk│shortens to
Hawk│/
blu.mɛin.dʰak/
│he/him/his/himself/Mr.
blue-maned_hawk.srht.site
Every time!
Lawrence D'Oliveiro
2024-03-03 23:27:54 UTC
Post by Blue-Maned_Hawk
Post by Lawrence D'Oliveiro
Post by Blue-Maned_Hawk
I do not want to live in a web-centric world.
You already do.
That does not change the veracity of my statement.
That doesn’t change the veracity of mine.
Blue-Maned_Hawk
2024-03-03 08:52:03 UTC
Any attempt to displace C will require total replacement of the modern
computing ecosystem. Frankly, I'd be fine with that if pulled off well,
but I wouldn't be fine with a half-baked solution or with trying to
force out C without thinking about the whole rest of everything.
--
Blue-Maned_Hawk│shortens to
Hawk│/
blu.mɛin.dʰak/
│he/him/his/himself/Mr.
blue-maned_hawk.srht.site
Mac and Cheese, Horrifying Quality, Prepared by Barack Obama
Michael S
2024-03-03 09:10:22 UTC
On Sat, 2 Mar 2024 17:13:56 -0600, Lynn McGuire wrote:
They have been talking about it for at least 20 years now.
More like 48-49 years.
https://en.wikipedia.org/wiki/High_Order_Language_Working_Group
David Brown
2024-03-03 11:01:57 UTC
Post by Lynn McGuire
"White House to Developers: Using C or C++ Invites Cybersecurity Risks"
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe programming
languages. The tech industry sees their point, but it won't be easy."
No. The feddies want to regulate software development very much. They
have been talking about it for at least 20 years now. This is a very
bad thing.
Lynn
It's the wrong solution to the wrong problem.

It is not languages like C and C++ that are "unsafe". It is the
programmers that write the code for them. As long as the people
programming in Rust or other modern languages are the more capable and
qualified developers - the ones who think about memory safety, correct
code, testing, and quality software development - then code written in
Rust will be better quality and safer than the average C, C++, Java and
C# code.

But if it gets popular enough for schools and colleges to teach Rust
programming courses to the masses, and it gets used by developers who are
paid per KLoC, given responsibilities well beyond their abilities and
experience, led by incompetent managers, untrained in good development
practices and pushed to impossible deadlines, then the average quality
of programs in Rust will drop to that of average C and C++ code.

Good languages and good tools help, but they are not the root cause of
poor quality software in the world.
Janis Papanagnou
2024-03-03 15:03:10 UTC
Post by David Brown
Post by Lynn McGuire
"White House to Developers: Using C or C++ Invites Cybersecurity Risks"
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe
programming languages. [...]"
[...]
It's the wrong solution to the wrong problem.
It is not languages like C and C++ that are "unsafe". It is the
programmers that write the code for them. [...]
[...]
Good languages and good tools help, but they are not the root cause of
poor quality software in the world.
I agree about the necessity of having good programmers. But a lot more
factors are important, and there are factors that influence programmers.
Languages may have a design that makes it possible to produce safer
software, or to be error prone and require a lot more attention from
the programmers (and also from management). Tools may help a bit to
work around the problems that languages inherently add. Good project
management may also help to increase software quality. But it's much
more costly in case of using inferior (or unsuited) languages.

Janis
Kaz Kylheku
2024-03-03 18:18:26 UTC
Post by David Brown
It's the wrong solution to the wrong problem.
It is not languages like C and C++ that are "unsafe". It is the
programmers that write the code for them. As long as the people
programming in Rust or other modern languages are the more capable and
qualified developers - the ones who think about memory safety, correct
code, testing, and quality software development - then code written in
Rust will be better quality and safer than the average C, C++, Java and
C# code.
Programmers who think about safety, correctness and quality and all that
have way fewer diagnostics and more footguns if they are coding in C
compared to Rust.

I think you can't just wave away the characteristics of Rust as making
no difference in this regard.
Post by David Brown
But if it gets popular enough for schools and colleges to teach Rust
programming courses to the masses, and it gets used by developers who are
paid per KLoC, given responsibilities well beyond their abilities and
experience, led by incompetent managers, untrained in good development
practices and pushed to impossible deadlines, then the average quality
of programs in Rust will drop to that of average C and C++ code.
The rhetoric you hear from Rust people about this is that coders taking
a safety shortcut to make something work have to explicitly ask for that
in Rust. It leaves a visible trace. If something goes wrong because of
an unsafe block, you can trace that to the commit which added it.

The rhetoric all sounds good.

However, like you, I also believe it boils down to people, in a
somewhat different way. To use Rust productively, you have to be one of
the rare idiot savants who are smart enough to use it *and* numb to all
the inconveniences.

The reason the average programmer won't make any safety
boo-boos using Rust is that the average programmer either isn't smart
enough to use it at all, or else doesn't want to put up with the fuss:
they will opt for some safe language which is easy to use.

Rust's problem is that we have safe languages in which you can almost
crank out working code with your eyes closed. (Or if not working,
then at least code in which the only uncaught bugs are your logic bugs,
not some undefined behavior from integer overflow or array out of
bounds.)

This is why Rust people are desperately pitching Rust as an alternative
for C and whatnot, and showcasing it being used in the kernel and
whatnot.

Trying to be both safe and efficient to be able to serve as a "C
replacement" is a clumsy hedge that makes Rust an awkward language.

You know the parable about the fox that tries to chase two rabbits.

The alternative to Rust in application development is pretty much any
convenient, "easy" high level language, plus a little bit of C.
You can get a small quantity of C right far more easily than a large
quantity of C. It's almost immaterial.

An important aspect of Rust is the ownership-based memory management.

The problem is, the "garbage collection is bad" era is /long/ behind us.

Scoped ownership is a half-baked solution to the object lifetime
problem, that gets in the way of the programmer and isn't appropriate
for the vast majority of software tasks.

Embedded systems often need custom memory management, not something that
the language imposes. C has malloc, yet even that gets disused in favor
of something else.
--
TXR Programming Language: http://nongnu.org/txr
Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
Mastodon: @***@mstdn.ca
David Brown
2024-03-03 20:23:56 UTC
Post by Kaz Kylheku
Post by David Brown
It's the wrong solution to the wrong problem.
It is not languages like C and C++ that are "unsafe". It is the
programmers that write the code for them. As long as the people
programming in Rust or other modern languages are the more capable and
qualified developers - the ones who think about memory safety, correct
code, testing, and quality software development - then code written in
Rust will be better quality and safer than the average C, C++, Java and
C# code.
Programmers who think about safety, correctness and quality and all that
have way fewer diagnostics and more footguns if they are coding in C
compared to Rust.
I think, you can't just wave away the characteristics of Rust as making
no difference in this regard.
I did not.

I said that the /root/ problem is not the language, but the programmers
and the way they work.

Of course some languages make some things harder and other things
easier. And even the most careful programmers will occasionally make
mistakes. So having a language that helps reduce the risk of some kinds
of errors is a helpful thing.

But consider this. When programming in modern C++, you can be risk-free
from buffer overruns and most kinds of memory leak - use container
classes, string classes, and the like, rather than C-style arrays and
malloc/free or new/delete. You can use the C++ Core Guidelines support
library to mark ownership of pointers. You can use compiler
sanitizers to catch many kinds of undefined behaviour. You can use all
sorts of static analysis tools, from free to very costly, to help find
problems. And yet there are armies of programmers writing bad C++ code.
PHP and Javascript have automatic memory management and garbage
collection eliminating many of the possible problems seen in C and C++
code, yet armies of programmers write PHP and Javascript code full of
bugs and security faults.
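
For example, this toy C snippet compiles silently yet writes one
element past its buffer. A build with gcc or clang's
-fsanitize=address aborts at run time with a heap-buffer-overflow
report, while a plain build may appear to "work":

#include <stdlib.h>

int main(void) {
    int * a = malloc(8 * sizeof *a);   /* room for a[0] .. a[7] */
    if (!a) return 1;
    a[8] = 42;     /* out of bounds - a sanitizer flags this line */
    free(a);
    return 0;
}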

Better languages, better libraries, and better tools certainly help.
There are not many tasks for which C is the best choice of language.
But none of that will deal with the root of the problem. Good
programmers, with good training, in good development departments with
good managers and good resources, will write correct code more
efficiently in a better language, but they can write correct code in
pretty much /any/ language. Similarly, the bulk of programmers will
write bad code in any language.
Post by Kaz Kylheku
Post by David Brown
But if it gets popular enough for schools and colleges to teach Rust
programming courses to the masses, and it gets used by developers who are
paid per KLoC, given responsibilities well beyond their abilities and
experience, led by incompetent managers, untrained in good development
practices and pushed to impossible deadlines, then the average quality
of programs in Rust will drop to that of average C and C++ code.
The rhetoric you hear from Rust people about this is that coders taking
a safety shortcut to make something work have to explicitly ask for that
in Rust. It leaves a visible trace. If something goes wrong because of
an unsafe block, you can trace that to the commit which added it.
The rhetoric all sounds good.
You can't trace the commit for programmers who don't use version control
software - and that is a /lot/ of them. Leaving visible traces does not
help when no one else looks at the code. Shortcuts are taken because
the sales people need the code by tomorrow morning, and there are only
so many hours in the night to get it working.

Rust makes it possible to have some safety checks for a few things that
are much harder to do in C++. It does not stop people writing bad code
using bad development practices.
Post by Kaz Kylheku
However, like you, I also believe it boils down to people, in a
somewhat different way. To use Rust productively, you have to be one of
the rare idiot savants who are smart enough to use it *and* numb to all
the inconveniences.
And you have to have managers who are smart enough to believe it when
their programmers say they need to train in a new language, re-write
lots of existing code, and accept longer development times as a tradeoff
for fewer bugs in shipped code.

(I personally have a very good manager, but I know a great many
programmers do not.)
Post by Kaz Kylheku
The reason the average programmer won't make any safety
boo-boos using Rust is that the average programmer either isn't smart
enough to use it at all, or else doesn't want to put up with the fuss:
they will opt for some safe language which is easy to use.
Rust's problem is that we have safe languages in which you can almost
crank out working code with your eyes closed. (Or if not working,
then at least code in which the only uncaught bugs are your logic bugs,
not some undefined behavior from integer overflow or array out of
bounds.)
This is why Rust people are desperately pitching Rust as an alternative
for C and whatnot, and showcasing it being used in the kernel and
whatnot.
I personally think it is madness to have Rust in a project like the
Linux kernel. I used to see C++ as a rapidly changing language with its
3 year cycle - Rust seems to have a 3 week cycle for updates, with no
formal standardisation and "work in progress" attitude. That's fine for
a new language under development, but /not/ something you want for a
project that spans decades.
Post by Kaz Kylheku
Trying to be both safe and efficient to be able to serve as a "C
replacement" is a clumsy hedge that makes Rust an awkward language.
You know the parable about the fox that tries to chase two rabbits.
The alternative to Rust in application development is pretty much any
convenient, "easy" high level language, plus a little bit of C.
You can get a small quantity of C right far more easily than a large
quantity of C. It's almost immaterial.
There are lots of alternatives to Rust for application development. But
in general, higher level languages mean you do less manual work, and
write fewer lines of code for the same amount of functionality. And
that means a lower risk of errors.
Post by Kaz Kylheku
An important aspect of Rust is the ownership-based memory management.
The problem is, the "garbage collection is bad" era is /long/ behind us.
Scoped ownership is a half-baked solution to the object lifetime
problem, that gets in the way of the programmer and isn't appropriate
for the vast majority of software tasks.
Embedded systems often need custom memory management, not something that
the language imposes. C has malloc, yet even that gets disused in favor
of something else.
For safe embedded systems, you don't want memory management at all.
Avoiding dynamic memory is an important aspect of safety-critical
embedded development.
Chris M. Thomasson
2024-03-03 22:01:54 UTC
Post by David Brown
Post by Kaz Kylheku
Embedded systems often need custom memory management, not something that
the language imposes. C has malloc, yet even that gets disused in favor
of something else.
For safe embedded systems, you don't want memory management at all.
Avoiding dynamic memory is an important aspect of safety-critical
embedded development.
You still have to think about memory management even if you avoid any
dynamic memory? How are you going to manage this memory wrt your various
data structures' needs....
David Brown
2024-03-04 08:44:04 UTC
Post by Chris M. Thomasson
Post by David Brown
Post by Kaz Kylheku
Embedded systems often need custom memory management, not something that
the language imposes. C has malloc, yet even that gets disused in favor
of something else.
For safe embedded systems, you don't want memory management at all.
Avoiding dynamic memory is an important aspect of safety-critical
embedded development.
You still have to think about memory management even if you avoid any
dynamic memory? How are you going to manage this memory wrt your various
data structures' needs....
To be clear here - sometimes you can't avoid all use of dynamic memory
and therefore memory management. And as Kaz says, you will often use
custom solutions such as resource pools rather than generic malloc/free.
Flexible network communication (such as Ethernet or other IP
networking) is hard to do without dynamic memory.

But for things that are safety or reliability critical, you aim to have
everything statically allocated. (Sometimes you use dynamic memory at
startup for convenience, but you never free that memory.) This, of
course, means you simply don't use certain kinds of data structures.
std::array<> is fine - it's just a nicer type wrapper around a fixed
size C-style array. But you don't use std::vector<>, or other growable
structures. You figure out in advance the maximum size you need for
your structures, and nail them to that size at compile time.
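
As a small C illustration of that approach (sizes invented for the
example; static_assert here is the C11 one from <assert.h>):

#include <assert.h>

/* worst-case depth decided up front, never grown at run time */
enum { MAX_EVENTS = 64 };

struct event { int type; int payload; };

/* statically allocated: the cost is visible in the linker map file */
static struct event event_queue[MAX_EVENTS];
static unsigned event_count = 0;

/* fail the build, not the device, if the RAM budget is exceeded */
static_assert(sizeof event_queue <= 4096,
              "event queue exceeds its RAM budget");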

There are three big run-time dangers and one big build-time limitation
when you have dynamic memory:

1. You can run out. PC's can often be assumed to have "limitless"
memory, and it is also often fine for a PC program to say it can't load
that big file until you close other programs and free up memory. In a
safety-critical embedded system, you have limited ram, and your code
never does things it does not have to do - consequently, it is not
acceptable to say it can't run a task at the moment due to lack of memory.

2. You get fragmentation from malloc/free, leading to allocation
failures even when there is enough total free memory. Small embedded
systems don't have virtual memory, paging, MMUs, and other ways to
re-arrange the appearance of memory. If you free your memory in a
different order from allocation, your heap gets fragmented, and you end
up with your "free" memory consisting of lots of discontinuous bits.

3. Your timing is hard to predict or constrain. Walking heaps to find
free memory for malloc, or coalescing free segments on deallocation,
often has very unpredictable timing. This is a big no-no for real time
systems.

And at design/build time, dynamic memory requirements are extremely
difficult to analyse. In comparison, if everything is allocated
statically, it's simple - it's all there in your map files, and you have
a pass/fail result from trying to link it all within the available
memory of the target.
Malcolm McLean
2024-03-04 11:38:51 UTC
Post by David Brown
But for things that are safety or reliability critical, you aim to have
everything statically allocated. (Sometimes you use dynamic memory at
startup for convenience, but you never free that memory.) This, of
course, means you simply don't use certain kinds of data structures.
std::array<> is fine - it's just a nicer type wrapper around a fixed
size C-style array. But you don't use std::vector<>, or other growable
structures. You figure out in advance the maximum size you need for
your structures, and nail them to that size at compile time.
And if it's embedded, it's unlikely to have an unbounded dataset thrown
at it, because embedded systems aren't used for those types of problems.
--
Check out Basic Algorithms and my other books:
https://www.lulu.com/spotlight/bgy1mm
Chris M. Thomasson
2024-03-04 20:46:54 UTC
Post by Malcolm McLean
And if it's embedded, it's unlikely to have an unbounded dataset thrown
at it, because embedded systems aren't used for those types of problems.
Fwiw, this older experimental allocator (2009) works on restricted
memory systems. Please forgive the alignment hacks... ;^)

https://pastebin.com/raw/f37a23918
(to raw text, no ads wrt pastebin)

https://groups.google.com/g/comp.lang.c/c/7oaJFWKVCTw/m/sSWYU9BUS_QJ
Chris M. Thomasson
2024-03-04 20:36:57 UTC
Post by David Brown
To be clear here - sometimes you can't avoid all use of dynamic memory
and therefore memory management. And as Kaz says, you will often use
custom solutions such as resource pools rather than generic malloc/free.
Flexible network communication (such as Ethernet or
networking) is hard to do without dynamic memory.
[...]

Think of using a big chunk of memory that never needs to be freed and is
just there per process. Now, you carve it up and store it in a cache
that has functions push and pop. So, you still have to manage memory
even when you are using no dynamic memory at all... Fair enough, in a
sense? The push and the pop are your malloc and free in a strange sense...
Chris M. Thomasson
2024-03-04 20:41:26 UTC
Post by Chris M. Thomasson
[...]
Think of using a big chunk of memory,
Say your program gains a special pointer from the system that contains
all of the memory it can use for its lifetime. It's there, and there is
no way to allocate any more...
Post by Chris M. Thomasson
that never needs to be freed and is
just there per process. Now, you carve it up and store it in a cache
that has functions push and pop. So, you still have to manage memory
even when you are using no dynamic memory at all... Fair enough, in a
sense? The push and the pop are your malloc and free in a strange sense...
David Brown
2024-03-05 09:01:53 UTC
Post by Chris M. Thomasson
Think of using a big chunk of memory that never needs to be freed and is
just there per process. Now, you carve it up and store it in a cache
that has functions push and pop. So, you still have to manage memory
even when you are using no dynamic memory at all... Fair enough, in a
sense? The push and the pop are your malloc and free in a strange sense...
I believe I mentioned that. You do not, in general, "push and pop" -
you malloc and never free. Excluding debugging code and other parts
useful in testing and developing, you have something like:

#include <stdalign.h>   /* alignas, alignof */
#include <stddef.h>     /* size_t, max_align_t */
#include <stdint.h>     /* uint8_t */

enum { heap_size = 16384 };
alignas(max_align_t) static uint8_t heap[heap_size];
static uint8_t * next_free = heap;

void free(void * ptr) {
    (void) ptr;          /* freeing is deliberately a no-op */
}

void * malloc(size_t size) {
    /* round the request up to a multiple of the strictest alignment */
    const size_t align = alignof(max_align_t);
    const size_t real_size = size ? (size + (align - 1)) & ~(align - 1)
                                  : align;
    void * p = next_free;
    next_free += real_size;   /* no out-of-memory check in this sketch */
    return p;
}


Allowing for pops requires storing the size of the allocations (unless
you change the API from that of malloc/free), and is only rarely useful.
Generally if you want memory that is that temporary, you use a VLA or
alloca to put it on the stack.
Chris M. Thomasson
2024-03-05 20:51:21 UTC
Post by David Brown
I believe I mentioned that. You do not, in general, "push and pop" -
you malloc and never free.
[...]
Generally if you want memory that is that temporary, you use a VLA or
alloca to put it on the stack.
wrt systems with no malloc/free I am thinking more along the lines of a
region allocator mixed with a LIFO for a cache, so a node based thing.
The region allocator gets fed with a large buffer. Depending on specific
needs, it can work out nicely for systems that do not have malloc/free.
The pattern I used iirc, was something like:

// pseudo code...
_______________________
node*
node_pop()
{
    // try the lifo first...
    node* n = lifo_pop();

    if (! n)
    {
        // resort to the region allocator...
        n = region_allocate_node();

        // note, n can be null here.
        // if it is, we are out of memory.

        // note, out of memory on a system
        // with no malloc/free...
    }

    return n;
}

void
node_push(
    node* n
) {
    lifo_push(n);
}
_______________________
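
For what it's worth, a minimal C sketch of the two pieces that
pseudocode assumes - an intrusive LIFO free list plus a bump-style
region over one static buffer (all names and sizes hypothetical):

#include <stddef.h>

typedef struct node {
    struct node * next;   /* free nodes are threaded through this field */
    /* payload fields would go here... */
} node;

/* intrusive LIFO cache of freed nodes */
static node * lifo_head = NULL;

static void lifo_push(node * n) {
    n->next = lifo_head;
    lifo_head = n;
}

static node * lifo_pop(void) {
    node * n = lifo_head;
    if (n) lifo_head = n->next;
    return n;
}

/* region: carve nodes out of one fixed buffer, never freed individually */
enum { region_count = 128 };
static node region[region_count];
static size_t region_used = 0;

static node * region_allocate_node(void) {
    return (region_used < region_count) ? &region[region_used++] : NULL;
}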


make any sense to you?
David Brown
2024-03-06 10:43:21 UTC
Post by Chris M. Thomasson
wrt systems with no malloc/free I am thinking more along the lines of a
region allocator mixed with a LIFO for a cache, so a node based thing.
The region allocator gets fed with a large buffer. Depending on specific
needs, it can work out nicely for systems that do not have malloc/free.
make any sense to you?
I know what you are trying to suggest, and I understand how it can sound
reasonable. In some cases, this can be a useful kind of allocator, and
when it is suitable, it is very fast. But it has two big issues for
small embedded systems.

One problem is the "region_allocate_node()" - getting a lump of space
from the underlying OS. That is fine on "big systems", and it is normal
that malloc/free systems only ask for memory from the OS in big lumps,
then handle local allocation within the process space for efficiency.
(This can work particularly well if each thread gets dedicated lumps, so
that no locking is needed for most malloc/free calls.)

But in a small embedded system, there is no OS (an RTOS is generally
part of the same binary as the application), and providing such "lumps"
would be dynamic memory management. So if you are using a system like
you describe, then you would have a single statically allocated block of
memory for your lifo stack.

Then there is the question of how often such a stack-like allocator is
useful, independent of the normal stack. I can imagine it is
/sometimes/ helpful, but rarely. I can't think off-hand of any cases
where I would have found it useful in anything I have written.

As I (and others) have said elsewhere, in small embedded systems and
safety or reliability critical systems, you want to avoid dynamic memory
and memory management whenever possible, for a variety of reasons. If
you do need something, then specialised allocators are more common -
possibly including LIFOs like this.

But it's more likely to have fixed-size pools with fixed-size elements,
dedicated to particular memory tasks. For example, if you need to track
multiple in-flight messages on a wireless mesh network, where messages
might take different amounts of time to be delivered and acknowledged,
or retried, you define a structure that holds all the data you need for
a message. Then you decide how many in-flight messages you will support
as a maximum. This gives you a statically allocated array of N structs.
Block usage is then tracked by a bitmap, typically within a single 32-bit
word. Finding a free slot is just finding the first zero bit, and
freeing a slot is clearing the correct bit.
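
A minimal C sketch of that kind of pool (the struct fields and pool
size are invented for illustration; a real version might find the
first zero bit with a count-trailing-zeros intrinsic instead of the
loop):

#include <stddef.h>
#include <stdint.h>

enum { MAX_MSGS = 32 };   /* <= 32 so one 32-bit word can track usage */

struct msg {
    uint16_t id;
    uint8_t  retries;
    uint8_t  len;
    uint8_t  data[64];
};

static struct msg msg_pool[MAX_MSGS];
static uint32_t  msg_used;   /* bit i set => msg_pool[i] is in use */

static struct msg * msg_alloc(void) {
    for (unsigned i = 0; i < MAX_MSGS; i++) {
        if (!(msg_used & (UINT32_C(1) << i))) {
            msg_used |= UINT32_C(1) << i;
            return &msg_pool[i];
        }
    }
    return NULL;   /* all in-flight slots busy */
}

static void msg_free(struct msg * m) {
    msg_used &= ~(UINT32_C(1) << (unsigned)(m - msg_pool));
}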

There are, of course, many other kinds of dedicated allocators that can
be used in other circumstances.
Lawrence D'Oliveiro
2024-03-03 23:31:35 UTC
Post by David Brown
But consider this. When programming in modern C++, you can be risk-free
from buffer overruns and most kinds of memory leak - use container
classes, string classes, and the like, rather than C-style arrays and
malloc/free or new/delete.
Or, going further, how about Google‘s “Carbon” project
<https://github.com/carbon-language/carbon-lang>, which tries to keep
the good bits from C++ while chucking out the bad?
Janis Papanagnou
2024-03-04 16:05:54 UTC
[...] Shortcuts are taken because
the sales people need the code by tomorrow morning, and there are only
so many hours in the night to get it working.
An indication of bad project management (or none at all), failing to
control development according to a realistic plan.

Janis
David Brown
2024-03-04 17:24:58 UTC
Post by Janis Papanagnou
[...] Shortcuts are taken because
the sales people need the code by tomorrow morning, and there are only
so many hours in the night to get it working.
An indication of bad project management (or none at all), failing to
control development according to a realistic plan.
Now you are beginning to understand!
Janis Papanagnou
2024-03-05 01:46:51 UTC
Post by David Brown
Post by Janis Papanagnou
[...] Shortcuts are taken because
the sales people need the code by tomorrow morning, and there are only
so many hours in the night to get it working.
An indication of bad project management (or none at all), failing to
control development according to a realistic plan.
Now you are beginning to understand!
Huh? - I posted about various factors (beyond the programmers'
proficiency and tools) in an earlier reply to you; it included
the management factor, which you failed to note and only adopted
as a factor in a later post. - So there's neither need nor reason
for such an arrogant, wrong, and disrespectful statement.

Janis
David Brown
2024-03-05 10:23:41 UTC
Post by Janis Papanagnou
Post by David Brown
Post by Janis Papanagnou
[...] Shortcuts are taken because
the sales people need the code by tomorrow morning, and there are only
so many hours in the night to get it working.
An indication of bad project management (or none at all) to control
development according to a realistic plan.
Now you are beginning to understand!
Huh? - I posted about various factors (beyond the programmers'
proficiency and tools) in an earlier reply to you; it included
the management factor, which you failed to note and only adopted
as a factor in a later post. - So there's neither need nor reason
for such an arrogant, wrong, and disrespectful statement.
It was not intended that way at all - I'm sorry if that is how it came
across.
Lawrence D'Oliveiro
2024-03-03 20:10:26 UTC
Post by David Brown
It is not languages like C and C++ that are "unsafe".
Some empirical evidence from Google
<https://security.googleblog.com/2022/12/memory-safe-languages-in-android-13.html>
shows a reduction in memory-safety errors in switching from C/C++ to Rust.
Andreas Kempe
2024-03-03 20:57:54 UTC
Post by Lawrence D'Oliveiro
Post by David Brown
It is not languages like C and C++ that are "unsafe".
Some empirical evidence from Google
<https://security.googleblog.com/2022/12/memory-safe-languages-in-android-13.html>
shows a reduction in memory-safety errors in switching from C/C++ to Rust.
I'm not surprised. I think it is pretty self-evident that a language
that is designed to reduce memory errors, if correctly designed,
will do just that.

It is very easy to go on about good and bad programmers, but that
really doesn't matter since statistics from the real world show that
memory errors are common and cause serious vulnerabilities.

Considering how hostile today's interconnected world has become with
security getting a higher and higher priority, I think we are bound to
see a decline of memory unsafe languages. C++ can be written to be
memory safe, but it is also very easy to write C++ that is not memory
safe. If C++ is to stay competitive, I think the C++ committee needs
to have a good and long think about what can be done to remedy these
issues.

If the committee does nothing, I could see initiatives like CHERI,
which introduces hardware-based memory safety, being a saviour. If
the languages that enforce
memory safety through their type system are more difficult to use, C++
might be preferable if the memory safety is provided more or less
transparently through the hardware. Although a software solution is
probably seen as easier and cheaper than a hardware one. As the sales
department at my last job informed me: "We can sell hardware, everyone
likes a shiny box! Software is supposed to be included for free!"
Chris M. Thomasson
2024-03-03 22:06:31 UTC
Post by Lawrence D'Oliveiro
Post by David Brown
It is not languages like C and C++ that are "unsafe".
Some empirical evidence from Google
<https://security.googleblog.com/2022/12/memory-safe-languages-in-android-13.html>
shows a reduction in memory-safety errors in switching from C/C++ to Rust.
Sure. Putting corks on the forks reduces the chance of eye injuries.
Fwiw, a YouTube link to a scene in the movie Dirty Rotten Scoundrels:
Funny to me:

Putting the cork on the fork is akin to saying nobody should be using C
and/or C++ in this "modern" age? :^)
Lawrence D'Oliveiro
2024-03-03 23:29:42 UTC
Post by Chris M. Thomasson
Post by Lawrence D'Oliveiro
Post by David Brown
It is not languages like C and C++ that are "unsafe".
Some empirical evidence from Google
<https://security.googleblog.com/2022/12/memory-safe-languages-in-android-13.html>
shows a reduction in memory-safety errors in switching from C/C++ to Rust.
Sure. Putting corks on the forks reduces the chance of eye injuries.
Except this is Google, and they’re doing it in real-world production
code, namely Android. And showing some positive benefits from doing
so, without impairing the functionality of Android in any way.

Not like “putting corks on the forks”, whatever that might be about
...
Chris M. Thomasson
2024-03-03 23:53:22 UTC
Post by Lawrence D'Oliveiro
Post by Chris M. Thomasson
Post by Lawrence D'Oliveiro
Post by David Brown
It is not languages like C and C++ that are "unsafe".
Some empirical evidence from Google
<https://security.googleblog.com/2022/12/memory-safe-languages-in-android-13.html>
shows a reduction in memory-safety errors in switching from C/C++ to Rust.
Sure. Putting corks on the forks reduces the chance of eye injuries.
Except this is Google, and they’re doing it in real-world production
code, namely Android. And showing some positive benefits from doing
so, without impairing the functionality of Android in any way.
Not like “putting corks on the forks”, whatever that might be about
...
Putting corks on the forks is necessary to prevent the programmer from
hurting itself or others... ;^)
bart
2024-03-04 01:00:24 UTC
Post by Lawrence D'Oliveiro
Post by Chris M. Thomasson
Post by Lawrence D'Oliveiro
Post by David Brown
It is not languages like C and C++ that are "unsafe".
Some empirical evidence from Google
<https://security.googleblog.com/2022/12/memory-safe-languages-in-android-13.html>
shows a reduction in memory-safety errors in switching from C/C++ to Rust.
Sure. Putting corks on the forks reduces the chance of eye injuries.
Except this is Google, and they’re doing it in real-world production
code, namely Android. And showing some positive benefits from doing
so, without impairing the functionality of Android in any way.
That's great. So long as it is somebody else programming in one of
those languages where you have one hand tied behind your back. That used
to be Ada. Now apparently it is Rust (so more like both hands tied).


In the pie chart in your link, however, new code in C/C++ still looks to
be nearly 3 times as much as Rust.

Personally I think there must be an easier language which is considered
to be safer without also making coding a nightmare.
Malcolm McLean
2024-03-04 11:44:06 UTC
Reply
Permalink
Post by Lawrence D'Oliveiro
Post by Chris M. Thomasson
Post by Lawrence D'Oliveiro
Post by David Brown
It is not languages like C and C++ that are "unsafe".
Some empirical evidence from Google
<https://security.googleblog.com/2022/12/memory-safe-languages-in-android-13.html>
shows a reduction in memory-safety errors in switching from C/C++ to Rust.
Sure. Putting corks on the forks reduces the chance of eye injuries.
Except this is Google, and they’re doing it in real-world production
code, namely Android. And showing some positive benefits from doing
so, without impairing the functionality of Android in any way.
Not like “putting corks on the forks”, whatever that might be about
...
And it's a matter of pumping money at it until something that would not
be a goer for anyone else starts to be a goer, and is now made to work.
And of course Google can solve a problem by inventing a new language and
putting up all the infrastructure that that would need around it.
--
Check out Basic Algorithms and my other books:
https://www.lulu.com/spotlight/bgy1mm
Lawrence D'Oliveiro
2024-03-04 21:07:27 UTC
Reply
Permalink
And of course Google can solve a problem by inventing a new language and
putting up all the infrastructure that that would need around it.
Google has invented quite a lot of languages: Dart and Go come to mind,
and also this “Carbon” effort.

I suppose nowadays a language can find a niche outside the mainstream, and
still be viable. Proprietary products need mass-market success to stay
afloat, but with open-source ones, what’s important is the contributor
base, not the user base.
Michael S
2024-03-04 22:59:48 UTC
Reply
Permalink
On Mon, 4 Mar 2024 21:07:27 -0000 (UTC)
Post by Lawrence D'Oliveiro
And of course Google can solve a problem by inventing a new
language and putting up all the infrastructure that that would need
around it.
Google has invented quite a lot of languages: Dart and Go come to
mind, and also this “Carbon” effort.
I suppose nowadays a language can find a niche outside the
mainstream, and still be viable. Proprietary products need
mass-market success to stay afloat, but with open-source ones, what’s
important is the contributor base, not the user base.
Go *is* mainstream, more so than Rust.
Dart is not mainstream and is not even niche.
For Carbon it's too early to call, but so far prospects look bleak.
Lawrence D'Oliveiro
2024-03-05 01:54:46 UTC
Reply
Permalink
Post by Michael S
Go *is* mainstream, more so than Rust.
Google looked at what language to use for its proprietary “Fuchsia” OS,
and decided Rust was a better choice than Go.

Discord did some benchmarking of its back-end servers, which had been
using Go, and decided that switching to Rust offered better performance.
Chris M. Thomasson
2024-03-05 06:18:47 UTC
Reply
Permalink
Post by Lawrence D'Oliveiro
Post by Michael S
Go *is* mainstream, more so than Rust.
Google looked at what language to use for its proprietary “Fuchsia” OS,
and decided Rust was a better choice than Go.
Discord did some benchmarking of its back-end servers, which had been
using Go, and decided that switching to Rust offered better performance.
Why do you mention performance? I thought it was all about safety...
Lawrence D'Oliveiro
2024-03-05 07:06:38 UTC
Reply
Permalink
Post by Chris M. Thomasson
Post by Lawrence D'Oliveiro
Post by Michael S
Go *is* mainstream, more so than Rust.
Google looked at what language to use for its proprietary “Fuchsia” OS,
and decided Rust was a better choice than Go.
Discord did some benchmarking of its back-end servers, which had been
using Go, and decided that switching to Rust offered better
performance.
Why do you mention performance? I thought it was all about safety...
Safety’s a given. Plus you get performance as well.
Chris M. Thomasson
2024-03-05 07:10:47 UTC
Reply
Permalink
Post by Lawrence D'Oliveiro
Post by Chris M. Thomasson
Post by Lawrence D'Oliveiro
Post by Michael S
Go *is* mainstream, more so than Rust.
Google looked at what language to use for its proprietary “Fuchsia” OS,
and decided Rust was a better choice than Go.
Discord did some benchmarking of its back-end servers, which had been
using Go, and decided that switching to Rust offered better
performance.
Why do you mention performance? I thought it was all about safety...
Safety’s a given. Plus you get performance as well.
For sure? There is no way a programmer can f it up, so to speak?
Michael S
2024-03-05 09:11:03 UTC
Reply
Permalink
On Tue, 5 Mar 2024 01:54:46 -0000 (UTC)
Post by Lawrence D'Oliveiro
Post by Michael S
Go *is* mainstream, more so than Rust.
Google looked at what language to use for its proprietary “Fuchsia”
OS, and decided Rust was a better choice than Go.
Go is (1) garbage-collected, (2) mostly statically linked.
(1) means it is not suitable for a kernel
(2) means it is suitable for big user-mode utilities, but probably
impractical for smaller utilities, because you don't want your tiny
utility to occupy 2-3 MB of permanent storage.
But both (1) and (2) are advantages for typical application programming,
esp. for back-end processing.
Post by Lawrence D'Oliveiro
Discord did some benchmarking of its back-end servers, which had been
using Go, and decided that switching to Rust offered better
performance.
I have no idea who Discord is.
However I fully expect that for micro- or mini-benchmarks they are
correct.
I also expect that
- even for micro- or mini-benchmarks the difference in speed is less
than a factor of 3
- for big and complex real-world back-end processing, writing a working
solution in Go will take about a fifth of the man-hours of writing it
in Rust
- for more complex processing, just making it work in Rust, regardless
of execution speed, will require an uncommon level of programming skill
- even if the Rust solution works initially, it will be more costly
(than the Go solution) to maintain and especially to adapt to changing
requirements.
Lawrence D'Oliveiro
2024-03-05 22:58:10 UTC
Reply
Permalink
Post by Michael S
On Tue, 5 Mar 2024 01:54:46 -0000 (UTC)
Post by Lawrence D'Oliveiro
Discord did some benchmarking of its back-end servers, which had been
using Go, and decided that switching to Rust offered better
performance.
- for big and complex real-world back-end processing, writing a working
solution in Go will take about a fifth of the man-hours of writing it in Rust
Nevertheless, they found the switch to Rust worthwhile.
Michael S
2024-03-06 12:02:14 UTC
Reply
Permalink
On Tue, 5 Mar 2024 22:58:10 -0000 (UTC)
Post by Lawrence D'Oliveiro
Post by Michael S
On Tue, 5 Mar 2024 01:54:46 -0000 (UTC)
Post by Lawrence D'Oliveiro
Discord did some benchmarking of its back-end servers, which had
been using Go, and decided that switching to Rust offered better
performance.
- for big and complex real-world back-end processing, writing a
working solution in Go will take about a fifth of the man-hours of
writing it in Rust
Nevertheless, they found the switch to Rust worthwhile.
I read a little more about it.
https://discord.com/blog/why-discord-is-switching-from-go-to-rust

Summary: performance of one of Discord's most heavy-duty servers
suffered from a weakness in the implementation of the Go garbage
collector. On average the performance was satisfactory, but every two
minutes there was a spike in latency. The latency during the spike was
not that big (300 msec), but they still felt they wanted better.
They tried to tune the GC, but the problem appeared to be fundamental.
So they just rewrote this particular server in Rust. Naturally, Rust
does not collect garbage, so this particular problem disappeared.

The key phrase of the story is "This service was a great candidate to
port to Rust since it was small and self-contained".
I'd add that even more important for the eventual success of the
migration was the fact that at the time of the rewrite the server had
already been running for several years, so the requirements were stable
and well understood.
Another factor is that their service does not create/free that many
objects. The delay was caused by the mere fact of GC scanning rather
than by frequent compacting of memory pools. So, from the beginning it
was obvious that potential fragmentation of the heap, which is the main
weakness of "plain" C/C++/Rust based solutions for Web back-ends, does
not apply in their case.

There is also a non-technical angle involved: Discord is fueled by
investors' money. It's not that they have no revenues at all, but their
revenues at this stage are not supposed to cover their expenses.
Companies that operate in such a mode have a different perspective on
just about everything. I mean, different from the perspective of people
like myself, working in a company that fights hard to stay profitable
and succeeds more often than not.

I have a few questions about the story, the most important one being
whether a weakness of this sort is specific to Go's GC, due to its
relative immaturity, or more general, applying equally to the most
mature GCs on the market, e.g. the JVM and .NET.
Another question is whether the problem is specific to the GC style of
automatic memory management (AMM) or applies, at least to some degree,
to other forms of AMM, most importantly AMMs based on reference
counting, as used by Swift and also popular in C++.
Of course, I don't expect that my questions will be answered fully on
comp.lang.c, but if some knowledgeable posters try to answer, I would
appreciate it.
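To pin down what RC-based AMM means next to a tracing GC, here's a
minimal C sketch (made-up names, not any real library's API). The point
is that a release frees the object at a deterministic moment, so the
cost is spread across individual operations instead of showing up as
periodic whole-heap scan pauses:

#include <stdlib.h>

/* minimal reference-counting sketch */
typedef struct obj {
    int refs;
    /* payload would go here */
} obj;

obj *obj_new(void)
{
    obj *o = malloc(sizeof *o);
    if (o != NULL)
        o->refs = 1;          /* creator holds the first reference */
    return o;
}

obj *obj_retain(obj *o)
{
    o->refs++;                /* one more owner */
    return o;
}

void obj_release(obj *o)
{
    if (--o->refs == 0)
        free(o);              /* freed immediately and deterministically;
                                 no collector thread, hence no global
                                 pause -- but also no cycle detection */
}

int main(void)
{
    obj *a = obj_new();
    obj *b = obj_retain(a);   /* two owners now */
    obj_release(a);           /* still alive: b holds a reference */
    obj_release(b);           /* count hits 0, freed right here */
    return 0;
}

Whether that per-operation cost beats a tracing GC's pauses for tail
latency is exactly the open question above.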
bart
2024-03-06 12:28:59 UTC
Reply
Permalink
Post by Michael S
On Tue, 5 Mar 2024 22:58:10 -0000 (UTC)
Post by Lawrence D'Oliveiro
Post by Michael S
On Tue, 5 Mar 2024 01:54:46 -0000 (UTC)
Post by Lawrence D'Oliveiro
Discord did some benchmarking of its back-end servers, which had
been using Go, and decided that switching to Rust offered better
performance.
- for big and complex real-world back-end processing, writing
working solution in go will take 5 time less man hours than writing
it in Rust
Nevertheless, they found the switch to Rust worthwhile.
I read a little more about it.
https://discord.com/blog/why-discord-is-switching-from-go-to-rust
Summary: performance of one of Discord's most heavy-duty servers
suffered from a weakness in the implementation of the Go garbage
collector. On average the performance was satisfactory, but every two
minutes there was a spike in latency. The latency during the spike was
not that big (300 msec), but they still felt they wanted better.
They tried to tune the GC, but the problem appeared to be fundamental.
So they just rewrote this particular server in Rust. Naturally, Rust
does not collect garbage, so this particular problem disappeared.
The key phrase of the story is "This service was a great candidate to
port to Rust since it was small and self-contained".
I'd add that even more important for the eventual success of the
migration was the fact that at the time of the rewrite the server had
already been running for several years, so the requirements were stable
and well understood.
Another factor is that their service does not create/free that many
objects. The delay was caused by the mere fact of GC scanning rather
than by frequent compacting of memory pools. So, from the beginning it
was obvious that potential fragmentation of the heap, which is the main
weakness of "plain" C/C++/Rust based solutions for Web back-ends, does
not apply in their case.
From the same link:

"Rust uses a relatively unique memory management approach that
incorporates the idea of memory “ownership”. Basically, Rust keeps track
of who can read and write to memory. It knows when the program is using
memory and immediately frees the memory once it is no longer needed. It
enforces memory rules at compile time, making it virtually impossible to
have runtime memory bugs.⁴ You do not need to manually keep track of
memory. The compiler takes care of it."

This suggests the language automatically takes care of this. But you
have to write your programs in a certain way to make it possible. The
programmer has to help the language keep track of what owns what.

So you will probably be able to do the same thing in another language.
But Rust will do more compile-time enforcement by restricting how you
share objects in memory.
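To make the contrast concrete, here is a minimal C sketch (made-up
names, purely illustrative) of the kind of code C compiles without
complaint:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *name = malloc(16);
    if (name == NULL)
        return 1;
    strcpy(name, "discord");
    free(name);               /* "ownership" ends here */
    printf("%s\n", name);     /* use after free: C accepts this and the
                                 behaviour is undefined at run time */
    return 0;
}

The equivalent Rust -- using a value after it has been dropped or moved
-- is rejected at compile time, which is the enforcement the blog post
is describing.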
Kenny McCormack
2024-03-04 00:44:41 UTC
Reply
Permalink
In article <us2s96$2n6h3$***@dont-email.me>,
Chris M. Thomasson <***@gmail.com> wrote:
...
Post by Chris M. Thomasson
Sure. Putting corks on the forks reduces the chance of eye injuries.
http://youtu.be/eF8QAeQm3ZM
Leader Keith gets mad when you post YouTube URLs here.

I'd be more careful, if I were you.
Post by Chris M. Thomasson
Putting the cork on the fork is akin to saying nobody should be using C
and/or C++ in this "modern" age? :^)
--
The randomly chosen signature file that would have appeared here is more than 4
lines long. As such, it violates one or more Usenet RFCs. In order to remain
in compliance with said RFCs, the actual sig can be found at the following URL:
http://user.xmission.com/~gazelle/Sigs/ModernXtian
Chris M. Thomasson
2024-03-04 20:57:21 UTC
Reply
Permalink
Post by Kenny McCormack
...
Post by Chris M. Thomasson
Sure. Putting corks on the forks reduces the chance of eye injuries.
http://youtu.be/eF8QAeQm3ZM
Leader Keith gets mad when you post YouTube URLs here.
I'd be more careful, if I were you.
Well, at least I added in a description... ;^)
Post by Kenny McCormack
Post by Chris M. Thomasson
Putting the cork on the fork is akin to saying nobody should be using C
and/or C++ in this "modern" age? :^)
Chris M. Thomasson
2024-03-03 21:48:37 UTC
Reply
Permalink
Post by David Brown
Post by Lynn McGuire
"White House to Developers: Using C or C++ Invites Cybersecurity Risks"
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe
programming languages. The tech industry sees their point, but it
won't be easy."
No.  The feddies want to regulate software development very much.
They have been talking about it for at least 20 years now.  This is a
very bad thing.
Lynn
It's the wrong solution to the wrong problem.
It is not languages like C and C++ that are "unsafe".  It is the
programmers that write the code for them.  As long as the people
programming in Rust or other modern languages are the more capable and
qualified developers - the ones who think about memory safety, correct
code, testing, and quality software development - then code written in
Rust will be better quality and safer than the average C, C++, Java and
C# code.
Then we will hear about how human programmers cannot be trusted... AI is
there. No programmers needed now. Jesting, of course, but I have heard
some people starting to think that way.
Post by David Brown
But if it gets popular enough for schools and colleges to teach Rust
programming course to the masses, and it gets used by developers who are
paid per KLoC, given responsibilities well beyond their abilities and
experience, lead by incompetent managers, untrained in good development
practices and pushed to impossible deadlines, then the average quality
of programs in Rust will drop to that of average C and C++ code.
Good languages and good tools help, but they are not the root cause of
poor quality software in the world.
Scott Lurndal
2024-03-03 15:31:15 UTC
Reply
Permalink
Post by Lynn McGuire
"White House to Developers: Using C or C++ Invites Cybersecurity Risks"
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe programming
languages. The tech industry sees their point, but it won't be easy."
No. The feddies want to regulate software development very much.
You've been reading far too much apocalyptic fiction and seeing the
world through trump-colored glasses. Neither reflects reality.
Lynn McGuire
2024-03-05 06:09:35 UTC
Reply
Permalink
Post by Scott Lurndal
Post by Lynn McGuire
"White House to Developers: Using C or C++ Invites Cybersecurity Risks"
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe programming
languages. The tech industry sees their point, but it won't be easy."
No. The feddies want to regulate software development very much.
You've been reading far too much apocalyptic fiction and seeing the
world through trump-colored glasses. Neither reflects reality.
Nope, I actually have had a Professional Engineer's License in Texas for
34 years now and can tell you all about what it takes to get one and
what it takes to keep one.

This bunch of crazies in the White House wants to do the same thing to
software development.

Lynn
Lawrence D'Oliveiro
2024-03-05 07:07:24 UTC
Reply
Permalink
... I actually have had a Professional Engineer's License in Texas for
34 years now and can tell you all about what it takes to get one and
what it takes to keep one.
Does that include any qualification in safety-critical or security-
critical systems?
Scott Lurndal
2024-03-05 14:56:58 UTC
Reply
Permalink
Post by Lynn McGuire
Post by Scott Lurndal
Post by Lynn McGuire
"White House to Developers: Using C or C++ Invites Cybersecurity Risks"
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe programming
languages. The tech industry sees their point, but it won't be easy."
No. The feddies want to regulate software development very much.
You've been reading far too much apocalyptic fiction and seeing the
world through trump-colored glasses. Neither reflects reality.
Nope, I actually have had a Professional Engineer's License in Texas for
34 years now and can tell you all about what it takes to get one and
what it takes to keep one.
This bunch of crazies in the White House wants to do the same thing to
software development.
Nothing in the quoted article supports your ridiculous assertion.
Blue-Maned_Hawk
2024-03-03 22:14:31 UTC
Reply
Permalink
Frankly, i think we should all be programming in macros over assembly
anyway.
--
Blue-Maned_Hawk│shortens to Hawk│/blu.mɛin.dʰak/│he/him/his/himself/Mr.
blue-maned_hawk.srht.site
You have a disease!
Chris M. Thomasson
2024-03-03 22:15:09 UTC
Reply
Permalink
Post by Blue-Maned_Hawk
Frankly, i think we should all be programming in macros over assembly
anyway.
lol! :^D
Lynn McGuire
2024-03-05 06:02:01 UTC
Reply
Permalink
Post by Blue-Maned_Hawk
Frankly, i think we should all be programming in macros over assembly
anyway.
Been there, done that. No more.

Lynn
David LaRue
2024-03-03 23:59:33 UTC
Reply
Permalink
Post by Lynn McGuire
"White House to Developers: Using C or C++ Invites Cybersecurity Risks"
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-
invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe
programming languages. The tech industry sees their point, but it
won't be easy."
No. The feddies want to regulate software development very much.
They have been talking about it for at least 20 years now. This is a
very bad thing.
Lynn
I was thinking about this wrt other allegedly more secure languages. They
can be hacked just as easily as C and C++ and many other languages. The
government should worry about things they really need to control, which is
less not more, IMHO. They obviously know very little about computer
development.

David
Professional developer for nearly 45 years
Chris M. Thomasson
2024-03-04 00:06:24 UTC
Reply
Permalink
Post by David LaRue
Post by Lynn McGuire
"White House to Developers: Using C or C++ Invites Cybersecurity Risks"
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-
invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe
programming languages. The tech industry sees their point, but it
won't be easy."
No. The feddies want to regulate software development very much.
They have been talking about it for at least 20 years now. This is a
very bad thing.
Lynn
I was thinking about this wrt other allegedly more secure languages. They
can be hacked just as easily as C and C++ and many other languages. The
government should worry about things they really need to control, which is
less not more, IMHO. They obviously know very little about computer
development.
[...]

I remember a while back when some people would try to tell me that ADA
solves all issues...
Lawrence D'Oliveiro
2024-03-04 05:43:40 UTC
Reply
Permalink
I remember a while back when some people would try to tell me that [Ada]
solves all issues...
It did make a difference. Did you know the life-support system on the
International Space Station was written in Ada? Not something you
would trust C++ code to, let’s face it.

And here
<https://devclass.com/2022/11/08/spark-as-good-as-rust-for-safer-coding-adacore-cites-nvidia-case-study/>
is a project to make it even safer.
Chris M. Thomasson
2024-03-04 21:15:20 UTC
Reply
Permalink
Post by Lawrence D'Oliveiro
I remember a while back when some people would try to tell me that [Ada]
solves all issues...
It did make a difference. Did you know the life-support system on the
International Space Station was written in Ada? Not something you
would trust C++ code to, let’s face it.
Would you trust a "safe" language that had some critical libraries that
were written in say, C?
Post by Lawrence D'Oliveiro
And here
<https://devclass.com/2022/11/08/spark-as-good-as-rust-for-safer-coding-adacore-cites-nvidia-case-study/>
is a project to make it even safer.
Lawrence D'Oliveiro
2024-03-04 21:26:51 UTC
Reply
Permalink
Post by Chris M. Thomasson
Would you trust a "safe" language that had some critical libraries that
were written in say, C?
The less C code you write, the easier it is to keep it under control.
Chris M. Thomasson
2024-03-04 21:28:46 UTC
Reply
Permalink
Post by Lawrence D'Oliveiro
Post by Chris M. Thomasson
Would you trust a "safe" language that had some critical libraries that
were written in say, C?
The less C code you write, the easier it is to keep it under control.
Excellent comment in a C group. Well, you should move to another group?
Chris M. Thomasson
2024-03-04 21:29:52 UTC
Reply
Permalink
Post by Chris M. Thomasson
Post by Lawrence D'Oliveiro
Post by Chris M. Thomasson
Would you trust a "safe" language that had some critical libraries that
were written in say, C?
The less C code you write, the easier it is to keep it under control.
Excellent comment in a C group. Well, you should move to another group?
http://fractallife247.com/test/hmac_cipher/ver_0_0_0_1?ct_hmac_cipher=7e7e1c663477d02a3adbf99372cfa1e0e719dcdabd20b50c27000dba3eb5dc342e3e0403607bb40f00b999b6bc24559ca0858b445c097a3848b457b1028ab0d78aa57934cd00b99dd080f80bf7791a11d5df6435fb0e
Malcolm McLean
2024-03-05 02:46:33 UTC
Reply
Permalink
Post by Chris M. Thomasson
Post by Lawrence D'Oliveiro
Post by Chris M. Thomasson
Would you trust a "safe" language that had some critical libraries that
were written in say, C?
The less C code you write, the easier it is to keep it under control.
Excellent comment in a C group. Well, you should move to another group?
There's an underlying reality there. The less code you have, the less
that can go wrong. So don't just knock out code, but think a bit about
what you do and do not really need.
--
Check out Basic Algorithms and my other books:
https://www.lulu.com/spotlight/bgy1mm
Chris M. Thomasson
2024-03-05 03:40:37 UTC
Reply
Permalink
Post by Malcolm McLean
Post by Chris M. Thomasson
Post by Lawrence D'Oliveiro
Post by Chris M. Thomasson
Would you trust a "safe" language that had some critical libraries that
were written in say, C?
The less C code you write, the easier it is to keep it under control.
Excellent comment in a C group. Well, you should move to another group?
There's an underlying reality there. The less code you have, the less
that can go wrong.
Well, hard to disagree with that. :^D
Post by Malcolm McLean
So don't just knock out code, but think a bit about
what you do and do not really need.
Indeed.

[...]
Lawrence D'Oliveiro
2024-03-05 04:43:21 UTC
Reply
Permalink
The less code you have, the less that can go wrong.
This can also mean using the build system to automatically generate some
repetitive things, to avoid having to write them manually.
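For instance, a hypothetical generator (the file and rule names are
made up) that the build system runs to emit a repetitive lookup table,
instead of anyone maintaining the table by hand:

/* gen_squares.c -- run at build time, e.g. from a makefile rule:
 *
 *     squares.h: gen_squares
 *             ./gen_squares > squares.h
 */
#include <stdio.h>

int main(void)
{
    puts("/* generated -- do not edit */");
    puts("static const int squares[16] = {");
    for (int i = 0; i < 16; i++)
        printf("    %d,\n", i * i);
    puts("};");
    return 0;
}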
Chris M. Thomasson
2024-03-05 05:23:49 UTC
Reply
Permalink
Post by Lawrence D'Oliveiro
The less code you have, the less that can go wrong.
This can also mean using the build system to automatically generate some
repetitive things, to avoid having to write them manually.
Does the build system depend on anything coded in C?
Lawrence D'Oliveiro
2024-03-05 07:07:48 UTC
Reply
Permalink
Post by Chris M. Thomasson
Post by Lawrence D'Oliveiro
The less code you have, the less that can go wrong.
This can also mean using the build system to automatically generate
some repetitive things, to avoid having to write them manually.
Does the build system depend on anything coded in C?
These days, it might be Rust.
Chris M. Thomasson
2024-03-05 21:48:25 UTC
Reply
Permalink
Post by Lawrence D'Oliveiro
Post by Chris M. Thomasson
Post by Lawrence D'Oliveiro
The less code you have, the less that can go wrong.
This can also mean using the build system to automatically generate
some repetitive things, to avoid having to write them manually.
Does the build system depend on anything coded in C?
These days, it might be Rust.
The keyword is might... Right?
Lawrence D'Oliveiro
2024-03-06 00:25:49 UTC
Reply
Permalink
Post by Chris M. Thomasson
Post by Lawrence D'Oliveiro
Post by Chris M. Thomasson
Does the build system depend on anything coded in C?
These days, it might be Rust.
The keyword is might... Right?
Might does not make right.
Chris M. Thomasson
2024-03-06 06:01:01 UTC
Reply
Permalink
Post by Lawrence D'Oliveiro
Post by Chris M. Thomasson
Post by Lawrence D'Oliveiro
Post by Chris M. Thomasson
Does the build system depend on anything coded in C?
These days, it might be Rust.
The keyword is might... Right?
Might does not make right.
So, what is the right language to use?
Janis Papanagnou
2024-03-05 02:32:23 UTC
Reply
Permalink
Post by Chris M. Thomasson
Post by Lawrence D'Oliveiro
I remember a while back when some people would try to tell me that [Ada]
solves all issues...
It did make a difference. Did you know the life-support system on the
International Space Station was written in Ada? Not something you
would trust C++ code to, let’s face it.
Would you trust a "safe" language that had some critical libraries that
were written in say, C?
You named them as "critical libraries", which (as a project manager)
I'd handle as such; be sure about their quality, about certificates,
write own test cases if necessary, or demand source code for reviews
for own verification.

As already said, there's more factors than the language. An external
library is also an externality to consider, and to not consider it
(per se) as okay.

Janis
Chris M. Thomasson
2024-03-05 03:42:54 UTC
Reply
Permalink
Post by Janis Papanagnou
Post by Chris M. Thomasson
Post by Lawrence D'Oliveiro
I remember a while back when some people would try to tell me that [Ada]
solves all issues...
It did make a difference. Did you know the life-support system on the
International Space Station was written in Ada? Not something you
would trust C++ code to, let’s face it.
Would you trust a "safe" language that had some critical libraries that
were written in say, C?
You named them as "critical libraries", which (as a project manager)
I'd handle as such; be sure about their quality, about certificates,
write own test cases if necessary, or demand source code for reviews
for own verification.
As already said, there's more factors than the language. An external
library is also an externality to consider, and to not consider it
(per se) as okay.
Think of a critical library as an essential part of a runtime for a
language, perhaps? Say you create a new language that depends on certain
things that are coded in C and/or ASM. Fair enough?
Lynn McGuire
2024-03-05 06:03:54 UTC
Reply
Permalink
Post by Lawrence D'Oliveiro
I remember a while back when some people would try to tell me that [Ada]
solves all issues...
It did make a difference. Did you know the life-support system on the
International Space Station was written in Ada? Not something you
would trust C++ code to, let’s face it.
And here
<https://devclass.com/2022/11/08/spark-as-good-as-rust-for-safer-coding-adacore-cites-nvidia-case-study/>
is a project to make it even safer.
Most of the Ada code was written in C or C++ and converted to Ada for
delivery.

Lynn
Lawrence D'Oliveiro
2024-03-05 07:08:54 UTC
Reply
Permalink
Post by Lynn McGuire
Post by Lawrence D'Oliveiro
Did you know the life-support system on the
International Space Station was written in Ada? Not something you would
trust C++ code to, let’s face it.
Most of the Ada code was written in C or C++ and converted to Ada for
delivery.
Was it debugged again? Or was it assumed that the translation was bug-
free?
David Brown
2024-03-05 10:27:01 UTC
Reply
Permalink
Post by Lawrence D'Oliveiro
Post by Lynn McGuire
Post by Lawrence D'Oliveiro
Did you know the life-support system on the
International Space Station was written in Ada? Not something you would
trust C++ code to, let’s face it.
Most of the Ada code was written in C or C++ and converted to Ada for
delivery.
Was it debugged again? Or was it assumed that the translation was bug-
free?
With Ada, if you can get it to compile, it's ready to ship :-)
Chris M. Thomasson
2024-03-05 21:01:52 UTC
Reply
Permalink
Post by David Brown
Post by Lawrence D'Oliveiro
Post by Lynn McGuire
Post by Lawrence D'Oliveiro
Did you know the life-support system on the
International Space Station was written in Ada? Not something you would
trust C++ code to, let’s face it.
Most of the Ada code was written in C or C++ and converted to Ada for
delivery.
Was it debugged again? Or was it assumed that the translation was bug-
free?
With Ada, if you can get it to compile, it's ready to ship :-)
Really? Any logic errors in the program itself?
Kaz Kylheku
2024-03-05 21:24:26 UTC
Reply
Permalink
Post by Chris M. Thomasson
Post by David Brown
Post by Lawrence D'Oliveiro
Post by Lynn McGuire
Post by Lawrence D'Oliveiro
Did you know the life-support system on the
International Space Station was written in Ada? Not something you would
trust C++ code to, let’s face it.
Most of the Ada code was written in C or C++ and converted to Ada for
delivery.
Was it debugged again? Or was it assumed that the translation was bug-
free?
With Ada, if you can get it to compile, it's ready to ship :-)
Really? Any logic errors in the program itself?
Ariane 5 rocket incident of 1996: The Ada code didn't catch the hardware
overflow exception from forcing a 64 bit floating-point value into a 16
bit integer. The situation was not expected by the code which was
developed for the Ariane 4, or something like that.
--
TXR Programming Language: http://nongnu.org/txr
Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
Mastodon: @***@mstdn.ca
Chris M. Thomasson
2024-03-05 21:44:30 UTC
Reply
Permalink
Post by Kaz Kylheku
Post by Chris M. Thomasson
Post by David Brown
Post by Lawrence D'Oliveiro
Post by Lynn McGuire
Post by Lawrence D'Oliveiro
Did you know the life-support system on the
International Space Station was written in Ada? Not something you would
trust C++ code to, let’s face it.
Most of the Ada code was written in C or C++ and converted to Ada for
delivery.
Was it debugged again? Or was it assumed that the translation was bug-
free?
With Ada, if you can get it to compile, it's ready to ship :-)
Really? Any logic errors in the program itself?
Ariane 5 rocket incident of 1996: The Ada code didn't catch the hardware
overflow exception from forcing a 64 bit floating-point value into a 16
bit integer. The situation was not expected by the code which was
developed for the Ariane 4, or something like that.
I need to study up on that one; thanks. Fwiw, the Joint Strike Fighter
C++ rules are interesting to me as well. Can a little bug make one of
its air-to-air missiles fire? Now I can hear one of my friends saying,
see, I told you that human programmers cannot be trusted... ;^) lol.

ADA is bullet proof... Until it's not... ;^)
Keith Thompson
2024-03-05 22:11:51 UTC
Reply
Permalink
"Chris M. Thomasson" <***@gmail.com> writes:
[...]
Post by Chris M. Thomasson
ADA is bullet proof... Until it's not... ;^)
The language is called Ada, not ADA.

Of course no language that can be used for real work can be completely
bulletproof. Ada is designed to be relatively safe (and neither of
these newsgroups is the place to discuss the details.)
--
Keith Thompson (The_Other_Keith) Keith.S.Thompson+***@gmail.com
Working, but not speaking, for Medtronic
void Void(void) { Void(); } /* The recursive call of the void */
Chris M. Thomasson
2024-03-05 22:34:03 UTC
Reply
Permalink
Post by Keith Thompson
[...]
Post by Chris M. Thomasson
ADA is bullet proof... Until it's not... ;^)
The language is called Ada, not ADA.
I wonder how many people got confused?
Post by Keith Thompson
Of course no language that can be used for real work can be completely
bulletproof. Ada is designed to be relatively safe (and neither of
these newsgroups is the place to discuss the details.)
That's fine.
David Brown
2024-03-06 13:31:50 UTC
Reply
Permalink
Post by Chris M. Thomasson
Post by Keith Thompson
[...]
Post by Chris M. Thomasson
ADA is bullet proof... Until it's not... ;^)
The language is called Ada, not ADA.
I wonder how many people got confused?
Apparently you and Malcolm got confused.

Others who mentioned the language know it is called "Ada". I not only
corrected you, but gave an explanation of it, in the hope that with that
clarity, you'd learn.
bart
2024-03-06 13:50:16 UTC
Reply
Permalink
Post by David Brown
Post by Chris M. Thomasson
Post by Keith Thompson
[...]
Post by Chris M. Thomasson
ADA is bullet proof... Until it's not... ;^)
The language is called Ada, not ADA.
I wonder how many people got confused?
Apparently you and Malcolm got confused.
Others who mentioned the language know it is called "Ada".  I not only
corrected you, but gave an explanation of it, in the hope that with that
clarity, you'd learn.
Whoever wrote this short Wikipedia article on it got confused too, as
it uses both Ada and ADA:

https://simple.wikipedia.org/wiki/Ada_(programming_language)

(The example program also includes 'Ada' as some package name. Since it
is case-insensitive, 'ADA' would also work.)

Here's also a paper that uses 'ADA' (I assume it is the same language):

https://www.sciencedirect.com/science/article/abs/pii/0166361582900136

Personally I'm not bothered whether anyone uses Ada or ADA. Is 'C'
written in all-caps or only capitalised? You can't tell!
Michael S
2024-03-06 14:18:42 UTC
Reply
Permalink
On Wed, 6 Mar 2024 13:50:16 +0000
Post by bart
Post by David Brown
Post by Chris M. Thomasson
Post by Keith Thompson
[...]
Post by Chris M. Thomasson
ADA is bullet proof... Until it's not... ;^)
The language is called Ada, not ADA.
I wonder how many people got confused?
Apparently you and Malcolm got confused.
Others who mentioned the language know it is called "Ada".  I not
only corrected you, but gave an explanation of it, in the hope that
with that clarity, you'd learn.
Whoever wrote this short Wikipedia article on it got confused too as
https://simple.wikipedia.org/wiki/Ada_(programming_language)
(The example program also includes 'Ada' as some package name. Since
it is case-insensitive, 'ADA' would also work.)
Your link is to "simple Wikipedia". I don't know what it is
exactly, but it does not appear as authoritative as real Wikipedia

https://en.wikipedia.org/wiki/Ada_(programming_language)
Post by bart
Here's also a paper that uses 'ADA' (I assume it is the same
https://www.sciencedirect.com/science/article/abs/pii/0166361582900136
The article was published in 1982. The language became official in 1983.
Possibly, in 1982 there was still confusion w.r.t. its name.
Post by bart
Personally I'm not bothered whether anyone uses Ada or ADA. Is 'C'
written in all-caps or only capitalised? You can't tell!
If only ADA, written in upper case, was not widely used for something
else...

Keith Thompson
2024-03-05 21:58:10 UTC
Reply
Permalink
Post by Kaz Kylheku
Post by Chris M. Thomasson
Post by David Brown
Post by Lawrence D'Oliveiro
Post by Lynn McGuire
Post by Lawrence D'Oliveiro
Did you know the life-support system on the
International Space Station was written in Ada? Not something you would
trust C++ code to, let’s face it.
Most of the Ada code was written in C or C++ and converted to Ada for
delivery.
Was it debugged again? Or was it assumed that the translation was bug-
free?
With Ada, if you can get it to compile, it's ready to ship :-)
Really? Any logic errors in the program itself?
Ariane 5 rocket incident of 1996: The Ada code didn't catch the hardware
overflow exception from forcing a 64 bit floating-point value into a 16
bit integer. The situation was not expected by the code which was
developed for the Ariane 4, or something like that.
A numeric overflow occurred during the Ariane 5's initial flight -- and
the software *did* catch the overflow. The same overflow didn't occur
on Ariane 4 because of its different flight profile. There was a
management decision to reuse the Ariane 4 flight software for Ariane 5
without sufficient review.

The code (which had been thoroughly tested on Ariane 4 and was known not
to overflow) emitted an error message describing the overflow exception.
That error message was then processed as data. Another problem was that
systems were designed to shut down on any error; as a result, healthy
and necessary equipment was shut down prematurely.

This is from my vague memory, and may not be entirely accurate.
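
As a rough illustration only -- not the actual flight code, and with
made-up names and values -- the C view of such a narrowing conversion
shows why Ada's behaviour is different in kind:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int32_t horizontal_bias = 40000;   /* made up; > INT16_MAX */
    int16_t packed = (int16_t)horizontal_bias;
    /* In C this out-of-range narrowing yields an implementation-
       defined result (or signal) -- there is nothing to catch.
       The corresponding Ada conversion raises Constraint_Error,
       the kind of exception at the heart of the Ariane 5 story. */
    printf("%ld -> %d\n", (long)horizontal_bias, (int)packed);
    return 0;
}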

*Of course* logic errors are possible in Ada programs, but in my
experience and that of many other programmers, if you get an Ada program
to compile (and run without raising unhandled exceptions), you're likely
to be much closer to a working program than if you get a C program to
compile. A typo in a C program is more likely to result in a valid
program with different semantics.
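
A classic instance of such a typo, as a complete C program that is
valid but wrong:

#include <stdio.h>

int main(void)
{
    int x = 5;
    if (x = 0)               /* typo: assignment, not comparison (==) */
        puts("x is zero");   /* never taken: x was just set to 0 */
    printf("x = %d\n", x);   /* prints 0 -- still a valid C program,
                                just not the one the author meant */
    return 0;
}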
--
Keith Thompson (The_Other_Keith) Keith.S.Thompson+***@gmail.com
Working, but not speaking, for Medtronic
void Void(void) { Void(); } /* The recursive call of the void */
Chris M. Thomasson
2024-03-05 22:02:43 UTC
Reply
Permalink
Post by Keith Thompson
Post by Kaz Kylheku
Post by Chris M. Thomasson
Post by David Brown
Post by Lawrence D'Oliveiro
Post by Lynn McGuire
Post by Lawrence D'Oliveiro
Did you know the life-support system on the
International Space Station was written in Ada? Not something you would
trust C++ code to, let’s face it.
Most of the Ada code was written in C or C++ and converted to Ada for
delivery.
Was it debugged again? Or was it assumed that the translation was bug-
free?
With Ada, if you can get it to compile, it's ready to ship :-)
Really? Any logic errors in the program itself?
Ariane 5 rocket incident of 1996: The Ada code didn't catch the hardware
overflow exception from forcing a 64 bit floating-point value into a 16
bit integer. The situation was not expected by the code which was
developed for the Ariane 4, or something like that.
A numeric overflow occurred during the Ariane 5's initial flight -- and
the software *did* catch the overflow. The same overflow didn't occur
on Ariane 4 because of its different flight profile. There was a
management decision to reuse the Ariane 4 flight software for Ariane 5
without sufficient review.
The code (which had been thoroughly tested on Ariane 4 and was known not
to overflow) emitted an error message describing the overflow exception.
That error message was then processed as data. Another problem was that
systems were designed to shut down on any error; as a result, healthy
and necessary equipment was shut down prematurely.
This is from my vague memory, and may not be entirely accurate.
*Of course* logic errors are possible in Ada programs, but in my
experience and that of many other programmers, if you get an Ada program
to compile (and run without raising unhandled exceptions), you're likely
to be much closer to a working program than if you get a C program to
compile. A typo in a C program is more likely to result in a valid
program with different semantics.
So close you can just feel it's a 100% correct and working program?
David Brown
2024-03-06 13:34:50 UTC
Reply
Permalink
Post by Chris M. Thomasson
Post by Keith Thompson
Post by Kaz Kylheku
Post by Chris M. Thomasson
Post by David Brown
Post by Lawrence D'Oliveiro
Post by Lynn McGuire
Post by Lawrence D'Oliveiro
Did you know the life-support system on the
International Space Station was written in Ada? Not something you would
trust C++ code to, let’s face it.
Most of the Ada code was written in C or C++ and converted to Ada for
delivery.
Was it debugged again? Or was it assumed that the translation was bug-
free?
With Ada, if you can get it to compile, it's ready to ship :-)
Really? Any logic errors in the program itself?
Ariane 5 rocket incident of 1996: The Ada code didn't catch the hardware
overflow exception from forcing a 64 bit floating-point value into a 16
bit integer. The situation was not expected by the code which was
developed for the Ariane 4, or something like that.
A numeric overflow occurred during the Ariane 5's initial flight -- and
the software *did* catch the overflow.  The same overflow didn't occur
on Ariane 4 because of its different flight profile.  There was a
management decision to reuse the Ariane 4 flight software for Ariane 5
without sufficient review.
The code (which had been thoroughly tested on Ariane 4 and was known not
to overflow) emitted an error message describing the overflow exception.
That error message was then processed as data.  Another problem was that
systems were designed to shut down on any error; as a result, healthy
and necessary equipment was shut down prematurely.
This is from my vague memory, and may not be entirely accurate.
That matches my recollection too.
Post by Chris M. Thomasson
Post by Keith Thompson
*Of course* logic errors are possible in Ada programs, but in my
experience and that of many other programmers, if you get an Ada program
to compile (and run without raising unhandled exceptions), you're likely
to be much closer to a working program than if you get a C program to
compile.  A typo in a C program is more likely to result in a valid
program with different semantics.
So close you can just feel it's a 100% correct and working program?
Didn't you notice the smiley in my comment? It used to be a running
joke that if you managed to get your Ada code to compile, it was ready
to ship. The emphasis is on the word "joke".
Malcolm McLean
2024-03-04 11:54:29 UTC
Reply
Permalink
Post by Chris M. Thomasson
Post by David LaRue
Post by Lynn McGuire
"White House to Developers: Using C or C++ Invites Cybersecurity Risks"
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-
invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe
programming languages. The tech industry sees their point, but it
won't be easy."
No.  The feddies want to regulate software development very much.
They have been talking about it for at least 20 years now.  This is a
very bad thing.
Lynn
I was thinking about this wrt other allegedly more secure languages.
They
can be hacked just as easily as C and C++ and many other languages.  The
government should worry about things they really need to control, which is
less not more, IMHO.  They obviously know very little about computer
development.
[...]
I remember a while back when some people would try to tell me that ADA
solves all issues...
And there's ADA, and there's Ada, the lady.

And she wrote.

"The Analytical Engine has no pretensions whatever to originate
anything. It can do whatever we know how to order it to perform. It can
follow analysis; but it has no power of anticipating any analytical
relations or truths."

And so she knew what the capabilities of the Analytical Engine were,
exactly what programming was, what it could and could not achieve, and
how to set about making it achieve what it could. And so she had it,
and in a sense, ADA solved all issues.

And no formal computer science education. Of course.
--
Check out Basic Algorithms and my other books:
https://www.lulu.com/spotlight/bgy1mm
David Brown
2024-03-04 14:41:43 UTC
Reply
Permalink
Post by Malcolm McLean
Post by Chris M. Thomasson
Post by David LaRue
Post by Lynn McGuire
"White House to Developers: Using C or C++ Invites Cybersecurity Risks"
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-
invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe
programming languages. The tech industry sees their point, but it
won't be easy."
No.  The feddies want to regulate software development very much.
They have been talking about it for at least 20 years now.  This is a
very bad thing.
Lynn
I was thinking about this wrt other allegedly more secure languages.
They
can be hacked just as easily as C and C++ and many other languages.  The
government should worry about things they really need to control, which is
less not more, IMHO.  They obviously know very little about computer
development.
[...]
I remember a while back when some people would try to tell me that ADA
solves all issues...
And there's ADA, and there's Ada, the lady.
No, there's Ada the programming language, named after Lady Ada Lovelace.

For those that perhaps don't understand these things, all-caps names are
usually used for acronyms, such as BASIC, or languages from before small
letters were universal in computer systems, such as early FORTRAN.
Programming languages named after people are generally capitalised the
same way people's names are - thus Ada and Pascal.
Post by Malcolm McLean
And she wrote.
"The Analytical Engine has no pretensions whatever to originate
anything. It can do whatever we know how to order it to perform. It can
follow analysis; but it has no power of anticipating any analytical
relations or truths."
And so she knew what the capabilities of the Analytical Engine were,
exactly what programming was, what it could and could not achieve, and
how to set about making it achieve what it could. And so she had it,
and in a sense, ADA solved all issues.
What I think you are trying to say, but got completely lost in the last
sentence, is that Lady Ada Lovelace is often regarded (perhaps
incorrectly) as the first computer programmer.
Post by Malcolm McLean
And no formal computer science education. Of course.
She had a great deal of education in mathematics - just like most
computer science pioneers.
Scott Lurndal
2024-03-04 15:28:35 UTC
Reply
Permalink
Post by Malcolm McLean
Post by Chris M. Thomasson
Post by David LaRue
Post by Lynn McGuire
"White House to Developers: Using C or C++ Invites Cybersecurity Risks"
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-
invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe
programming languages. The tech industry sees their point, but it
won't be easy."
No.  The feddies want to regulate software development very much.
They have been talking about it for at least 20 years now.  This is a
very bad thing.
Lynn
I was thinking about this wrt other allegedly more secure languages.
They
can be hacked just as easily as C and C++ and many other languages.  The
government should worry about things they really need to control, which is
less not more, IMHO.  They obviously know very little about computer
development.
[...]
I remember a while back when some people would try to tell me that ADA
solves all issues...
And there's ADA, and there's Ada, the lady.
No, there's Ada the programming language, named after Lady Ada Lovelace.\
Indeed. And ADA has a very different meaning stateside.
Malcolm McLean
2024-03-04 18:51:03 UTC
Reply
Permalink
Post by David Brown
Post by Malcolm McLean
Post by Chris M. Thomasson
Post by David LaRue
Post by Lynn McGuire
"White House to Developers: Using C or C++ Invites Cybersecurity Risks"
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-
invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe
programming languages. The tech industry sees their point, but it
won't be easy."
No.  The feddies want to regulate software development very much.
They have been talking about it for at least 20 years now.  This is a
very bad thing.
Lynn
I was thinking about this wrt other allegedly more secure
languages. They
can be hacked just as easily as C and C++ and many other languages.
The
government should worry about things they really need to control, which is
less not more, IMHO.  They obviously know very little about computer
development.
[...]
I remember a while back when some people would try to tell me that
ADA solves all issues...
And there's ADA, and there's Ada, the lady.
No, there's Ada the programming language, named after Lady Ada Lovelace.
For those that perhaps don't understand these things, all-caps names are
usually used for acronyms, such as BASIC, or languages from before small
letters were universal in computer systems, such as early FORTRAN.
Programming languages named after people are generally capitalised the
same way people's names are - thus Ada and Pascal.
Post by Malcolm McLean
And she wrote.
"The Analytical Engine has no pretensions whatever to originate
anything. It can do whatever we know how to order it to perform. It
can follow analysis; but it has no power of anticipating any
analytical relations or truths."
And so she knew what the capabilities of the Analytical Engine were,
exactly what programming was, what it could and could not achieve, and
how to set about making it achieve what it could. And so she had
it, and in a sense, ADA solved all issues.
What I think you are trying to say, but got completely lost in the last
sentence, is that Lady Ada Lovelace is often regarded (perhaps
incorrectly) as the first computer programmer.
So what I'm trying to say is that she did it, and everyone else just
knocked out the code. Once you understand what you are doing in this
way, it's wrapped up. She solved it. So early on.

Look at sentence two. She knew what that machine was.
--
Check out Basic Algorithms and my other books:
https://www.lulu.com/spotlight/bgy1mm
Lawrence D'Oliveiro
2024-03-04 21:11:08 UTC
Reply
Permalink
... Lady Ada Lovelace is often regarded (perhaps
incorrectly) as the first computer programmer.
She was the first, in written records, to appreciate some of the not-so-
obvious issues in computer programming.
David Brown
2024-03-05 10:31:11 UTC
Reply
Permalink
Post by Lawrence D'Oliveiro
... Lady Ada Lovelace is often regarded (perhaps
incorrectly) as the first computer programmer.
She was the first, in written records, to appreciate some of the not-so-
obvious issues in computer programming.
Yes. That includes realising that computers could do more than number
crunching. She was also involved in checking, correcting and commenting
some of Babbage's programs, and also was the first to publish an
algorithm (for Bernoulli numbers) designed specifically for executing on
a computer. And she did all this without a working computer.

So while calling her "the first computer programmer" is inaccurate, she
was definitely a key computer science pioneer.
Lawrence D'Oliveiro
2024-03-06 00:25:08 UTC
Reply
Permalink
Post by David Brown
That includes realising that computers could do more than number
crunching.
Or, conversely, realizing that all forms of computation (including symbol
manipulation) can be expressed as arithmetic? Maybe that came later, cf
“Gödel numbering”.
David Brown
2024-03-06 13:40:46 UTC
Reply
Permalink
Post by Lawrence D'Oliveiro
Post by David Brown
That includes realising that computers could do more than number
crunching.
Or, conversely, realizing that all forms of computation (including symbol
manipulation) can be expressed as arithmetic?
That's also a reasonable way to put it. I have not read any of her
writings, so I don't know exactly how she described things.
Post by Lawrence D'Oliveiro
Maybe that came later, cf
“Gödel numbering”.
That's getting a few steps further on - it is treating programs as data,
and I don't think there's any reason to suspect that was something Ada
Lovelace thought about. It's also very theoretical, while Ada was more
interested in the practical applications of computers.
Derek
2024-03-04 12:18:25 UTC
Reply
Permalink
All,
Post by Lynn McGuire
"White House to Developers: Using C or C++ Invites Cybersecurity Risks"
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe programming languages. The tech industry sees their point,
but it won't be easy."
They make the mistake of blaming the tools rather than
how the tools are used
https://shape-of-code.com/2024/03/03/the-whitehouse-report-on-adopting-memory-safety/
Chris M. Thomasson
2024-03-04 20:52:27 UTC
Reply
Permalink
Post by Derek
All,
Post by Lynn McGuire
"White House to Developers: Using C or C++ Invites Cybersecurity Risks"
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe
programming languages. The tech industry sees their point, but it
won't be easy."
They make the mistake of blaming the tools rather than
how the tools are used
https://shape-of-code.com/2024/03/03/the-whitehouse-report-on-adopting-memory-safety/
Akin to giving somebody a hammer and they proceed to smash their own
hand with it. Then they say, well, that hammer is dangerous and the
person that gave it to me should be sued for negligence... Wow, let's
think about writing up 1000 pages on why hammers should be banned?
Hyper sarcastic, I know, but if the key fits... ;^)

Sorry for the sarcasm.
Mr. Man-wai Chang
2024-03-05 13:51:26 UTC
Reply
Permalink
Post by Lynn McGuire
"The Biden administration backs a switch to more memory-safe programming
languages. The tech industry sees their point, but it won't be easy."
No. The feddies want to regulate software development very much. They
have been talking about it for at least 20 years now. This is a very
bad thing.
A responsible, good programmer or a better C/C++ pre-processor can
avoid a lot of problems!!
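For example, one small flavour of guard rail that a macro layer can
add (purely illustrative, not a real tool):

#include <assert.h>
#include <stdio.h>

/* bounds-checked element access */
#define ARRAY_LEN(a) (sizeof (a) / sizeof (a)[0])
#define AT(a, i) (assert((size_t)(i) < ARRAY_LEN(a)), (a)[i])

int main(void)
{
    int v[4] = {1, 2, 3, 4};
    printf("%d\n", AT(v, 2));   /* in range: prints 3 */
    /* AT(v, 9) would abort at run time instead of silently
       reading out of bounds */
    return 0;
}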
Mr. Man-wai Chang
2024-03-06 07:43:46 UTC
Reply
Permalink
Post by Mr. Man-wai Chang
Post by Lynn McGuire
"The Biden administration backs a switch to more memory-safe programming
languages. The tech industry sees their point, but it won't be easy."
No. The feddies want to regulate software development very much. They
have been talking about it for at least 20 years now. This is a very
bad thing.
A responsible, good programmer or a better C/C++ pre-processor can
avoid a lot of problems!!
Or maybe an A.I.-assisted code analyzer? But there are still blind spots...