Discussion:
Announcement of a new C++11 library to handle measures
c***@gmail.com
2014-09-29 16:31:54 UTC
I am developing an open-source, header-only, multi-platform C++11 library to handle physical measures and angles.
It has now reached a mature-enough level that it needs some real user feedback.

You can browse or download sources here:
https://github.com/carlomilanesi/cpp-measures
The test code is quite cumbersome, and it is available only for Windows.

You can browse documentation here:
https://github.com/carlomilanesi/cpp-measures/wiki
and particularly here:
https://github.com/carlomilanesi/cpp-measures/wiki/Tutorial

In particular, I am looking for someone who writes engineering software,
who would be willing to read the tutorial and tell me
what is missing for his/her application's needs.
--
Carlo Milanesi
http://carlomilanesi.wordpress.com/
Wouter van Ooijen
2014-09-29 17:26:24 UTC
Post by c***@gmail.com
I am developing an open-source, header-only, multi-platform C++11 library to handle physical measures and angles.
It has now reached a mature-enough level that it needs some real user feedback.
https://github.com/carlomilanesi/cpp-measures
The test code is quite cumbersome, and it is available only for Windows.
https://github.com/carlomilanesi/cpp-measures/wiki
https://github.com/carlomilanesi/cpp-measures/wiki/Tutorial
In particular, I am looking for someone who writes engineering software,
who would be willing to read the tutorial and tell me
what is missing for his/her application's needs.
I hope you are aware that something like this already exists in Boost? In what
sense is your work different from, or even better than, the Boost solution?

Some points that are IMO important for such a library, especially for
small microcontrollers:
- do you differentiate between absolute and relative values (for
instance for time, but also for location/distance)?
- can you work with non-floating-point base types (especially
fixed-point types implemented on top of integers)?
- can you work with mixed base types (for instance fixed-point types
based on integers of various sizes and scalings)?

(Sorry for being too lazy to read all the documentation myself before I ask.)

Wouter van Ooijen
c***@gmail.com
2014-09-30 21:13:10 UTC
Post by Wouter van Ooijen
I hope you are aware that something like this already exists in Boost?
Sure, but I was not satisfied by that, and so I designed and
implemented my own library.
Post by Wouter van Ooijen
In what sense is your work different from,
or even better than, the Boost solution?
I don't really know Boost.Units,
but here are some apparent differences.

Boost.Units supports 12-year-old compilers, while Cpp-Measures
requires, and takes advantage of, the parts of C++11 available
in GCC and VC++ 2012.

Boost.Units includes many definitions of magnitudes and units
in the library, while Cpp-Measures requires that the application
programmer define the needed magnitudes and units,
although many examples will be available in the documentation.

Boost, when unpacked, occupies about 500 megabytes,
while Cpp-Measures is 200 KB of library code
for the application programmer,
and less than 1 MB with all tests and documentation.
It is not clear to me how to install only the Boost.Units library
and its dependencies instead of the whole of Boost.

Application code using Cpp-Measures is less verbose.
For example, the following Boost.Units expression

quantity<absolute<fahrenheit::temperature> >
T1p(32.0*absolute<fahrenheit::temperature>());

corresponds to the following Cpp-Measures expression

point1<fahrenheit> T1p(32);

Application code using Cpp-Measures compiles faster
and produces less machine code.
For example, the example from the Boost.Units Quick Start page,
when compiled using GCC for Windows with stripping and optimization,
takes three times as long to compile as the equivalent Cpp-Measures
code, and generates an executable seven times as large.

Cpp-Measures supports 2-dimensional and 3-dimensional measures,
with algebraic operations, dot product and cross product,
while I couldn't find such features in Boost.Units.

Cpp-Measures supports signed and unsigned angles modulo one turn,
while I couldn't find such features in Boost.Units.
Post by Wouter van Ooijen
- do you differentiate between absolute and relative values (for
instance for time, but also for location/distance)?
Yes, for example, a variable representing an absolute length measured
in inches is defined as:

point1<inches> variable_name;

While a variable representing a relative length measured
in inches is defined as:

vect1<inches> variable_name;
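
For illustration, assuming the usual affine-space rules apply
(a sketch of the intended semantics, not copied from the documentation):

point1<inches> home(10); // absolute position
vect1<inches> step(3); // displacement
point1<inches> there = home + step; // point + vect -> point: OK
vect1<inches> gap = there - home; // point - point -> vect: OK
// home + there; // point + point: should not compile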
Post by Wouter van Ooijen
- can you work with non-floating-point base types (especially
fixed-point types implemented on top of integers)?
I tested it with the following types:
float, double, long double, int, long, long long, complex<double>.
Not yet tested, and probably not yet working properly:
fixed-point, rational, multiple-precision,
and arbitrary-precision types.
Post by Wouter van Ooijen
- can you work with mixed base types (for instance fixed-point types
based on integers of various sizes and scalings)?
Automatic conversion between fixed-point types is not supported yet,
but you can do something like the following:

auto a = vect1<inches,float>(1.2f) + vect1<inches,double>(2.3);

The result is that "a" has type "vect1<inches,double>"
and value 3.5.
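
For what it's worth, here is a plausible sketch (simplified, not the
library's actual code) of how C++11 can compute the result type from
the two operand types, assuming vect1 exposes a value() accessor:

template <class Unit, class Num1, class Num2>
vect1<Unit, decltype(Num1() + Num2())>
operator+(vect1<Unit,Num1> a, vect1<Unit,Num2> b)
{
    // The result adopts the usual C++ arithmetic promotion
    // of the two value types (here float + double -> double).
    return vect1<Unit, decltype(Num1() + Num2())>(a.value() + b.value());
}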

--

Carlo Milanesi
Öö Tiib
2014-10-01 00:01:40 UTC
Post by c***@gmail.com
Post by Wouter van Ooijen
I hope you are aware that something like this already exists in Boost?
Sure, but I was not satisfied by that, and so I designed and
implemented my own library.
Post by Wouter van Ooijen
In what sense is your work different from,
or even better than, the Boost solution?
I don't really know Boost.Units,
but here are some apparent differences.
You know it far better than most C++ developers.
Post by c***@gmail.com
Boost.Units supports 12-year-old compilers, while Cpp-Measures
requires, and takes advantage of, the parts of C++11 available
in GCC and VC++ 2012.
Yes, Boost maybe supports too many exotic compilers, and that
sometimes makes its code hard to follow. OTOH the development
tools for most electronic devices around us catch up slowly,
so supporting only a few of the latest compilers may narrow the
target audience down too much.
Post by c***@gmail.com
Boost.Units includes many definitions of magnitudes and units
in the library, while Cpp-Measures requires that the application
programmer define the needed magnitudes and units,
although many examples will be available in the documentation.
Maybe it makes sense to do as Boost does; standardized systems of
dimensions like SI or CGS do not change too often.
Post by c***@gmail.com
Boost, when unpacked, occupies about 500 megabytes,
while Cpp-Measures is 200 KB of library code
for the application programmer,
and less than 1 MB with all tests and documentation.
It is not clear to me how to install only the Boost.Units library
and its dependencies instead of the whole of Boost.
That is an odd floppy-drive-era argument that I read so often.
A good-quality movie file is something like 17-40 gigabytes.
Just erase one and you can install another 34-80
copies of Boost. :)
Post by c***@gmail.com
Application code using Cpp-Measures is less verbose.
For example, the following Boost.Units expression
quantity<absolute<fahrenheit::temperature> >
T1p(32.0*absolute<fahrenheit::temperature>());
corresponds to the following Cpp-Measures expression
point1<fahrenheit> T1p(32);
Names like "quantity absolute" and "quantity" feel a bit
more intuitive than "point1" and "vect1", but YMMV.
If "point1" makes more sense in some problem domain, then
one can likely alias it:

template<class T> using point1 = quantity<absolute<T>>;
Post by c***@gmail.com
Application code using Cpp-Measures compiles faster
and produces less machine code.
For example, the example from the Boost.Units Quick Start page,
when compiled using GCC for Windows with stripping and optimization,
takes three times as long to compile as the equivalent Cpp-Measures
code, and generates an executable seven times as large.
That is the most important plus for you, if it actually holds. I mean,
efficiency and performance benchmarking is tricky work.
Post by c***@gmail.com
Cpp-Measures supports 2-dimensional and 3-dimensional measures,
with algebraic operations, dot product and cross product,
while I couldn't find such features in Boost.Units.
I suspect that the existing linear algebra libraries
(like Eigen, MTL4, Boost.uBLAS or Armadillo) do not integrate
well with either your cpp-measures or Boost.Units.
OTOH it is likely hard to beat the performance and quality of such
libraries.

So instead of building linear algebra into your dimensioned-values
library, it might be worth considering seeking
interoperability with one of those. Two good things that
play together often result in a great outcome.
c***@gmail.com
2014-10-01 21:04:03 UTC
Post by Öö Tiib
Post by c***@gmail.com
Boost.Units supports 12-year-old compilers, while Cpp-Measures
requires, and takes advantage of, the parts of C++11 available
in GCC and VC++ 2012.
Yes, Boost maybe supports too many exotic compilers, and that
sometimes makes its code hard to follow. OTOH the development
tools for most electronic devices around us catch up slowly,
so supporting only a few of the latest compilers may narrow the
target audience down too much.
You are right, but I target mainly engineering and scientific
(but not theoretical physics) software, not small micro-controllers,
for which C is generally preferred to C++. Besides, I found
the "decltype" keyword very useful, and I used it a lot.
Post by Öö Tiib
Post by c***@gmail.com
Boost.Units includes many definitions of magnitudes and units
in the library, while Cpp-Measures requires that the application
programmer define the needed magnitudes and units,
although many examples will be available in the documentation.
Maybe it makes sense to do as Boost does; standardized systems of
dimensions like SI or CGS do not change too often.
Many engineers and scientists use units not belonging
to standardized systems. Have you never heard of energy measured
in electron-volts, or of force (not mass) measured in kilograms?
In addition, having all magnitudes and units defined by the
application programmer keeps the code base small.
Post by Öö Tiib
Post by c***@gmail.com
Application code using Cpp-Measures is less verbose.
For example, the following Boost.Units expression
quantity<absolute<fahrenheit::temperature> >
T1p(32.0*absolute<fahrenheit::temperature>());
corresponds to the following Cpp-Measures expression
point1<fahrenheit> T1p(32);
Names like "quantity absolute" and "quantity" feel a bit
more intuitive than "point1" and "vect1", but YMMV.
I feel that, after you have learned that "point1" means
"one-dimensional absolute measure" and "vect1" means
"one-dimensional relative measure", the latter expression
is more understandable than the former.
But as my library is still in development,
I accept suggestions for renaming.
Post by Öö Tiib
Post by c***@gmail.com
Cpp-Measures supports 2-dimensional and 3-dimensional measures,
with algebraic operations, dot product and cross product,
while I couldn't find such features in Boost.Units.
I suspect that the existing linear algebra libraries
(like Eigen, MTL4, Boost.uBLAS or Armadillo) do not integrate
well with either your cpp-measures or Boost.Units.
OTOH it is likely hard to beat the performance and quality of such
libraries.
So instead of building linear algebra into your dimensioned-values
library, it might be worth considering seeking
interoperability with one of those. Two good things that
play together often result in a great outcome.
If you want to represent the position (X=10", Y=12")
of an object in a plane, and move it by (X=3", Y=8")
to reach position (X=13", Y=20"),
using Cpp-Measures you can write:

point2<inches> p(10, 12);
p += vect2<inches>(3, 8);
cout << p << endl; // It outputs: 13 20"

How can you do that using Boost.Units or another units library
combined with a vector algebra package?

Cpp-Measures can perform unit checking on 2D and 3D measures,
while if you interface it with other libraries you lose
unit checking, as other libraries
sometimes perform unit-forbidden operations.

However, you can interface Cpp-Measures with other libraries.
For example, to compute the cross product of
two 3D measures using both Cpp-Measures and Eigen,
you can write:

// Define and initialize the two 3D measures.
vect3<inches> v1(12, 13, 14);
vect3<inches> v2(15, 16, 40);

// Output their cross product as:
// 296 -270 -3"2
// where '"2' means 'square inches'.
cout << cross_product(v1, v2) << endl;

// Pass to Eigen the address of the measures.
Map<Vector3d> v1e(v1.data());
Map<Vector3d> v2e(v2.data());

// Output their cross product without unit as:
// 296
// -270
// -3
cout << v1e.cross(v2e) << endl;
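
To close the loop, you can copy Eigen's raw result back into a
dimensioned value; a sketch, assuming a square_inches unit has been
defined for the product magnitude:

// Rebuild a dimensioned value from Eigen's raw result.
Vector3d ce = v1e.cross(v2e);
vect3<square_inches> c(ce.x(), ce.y(), ce.z());
cout << c << endl; // 296 -270 -3"2 again, with the unit restored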

--
Carlo Milanesi
Öö Tiib
2014-10-04 13:34:11 UTC
Post by c***@gmail.com
You are right, but I target mainly engineering and scientific
(but not theoretical physics) software, not small micro-controllers,
for which C is generally preferred to C++. Besides, I found
the "decltype" keyword very useful, and I used it a lot.
You seem to be saying that inside industrial devices or vehicles
(be it a bottle washer, a crane or a ship) there are some sort of
weak 8-bit micro-controllers? No, there are typically piles
of quite powerful processors in all equipment, especially where
it has to deal with temperatures, pressures, rotation speeds,
distances, voltages, you name it.

Claiming that software for those must be written in C is like
claiming that engineering or scientific analysis software has
to be written in Fortran.
Post by c***@gmail.com
Many engineers and scientists use units not belonging
to standardized systems. Have you never heard of energy measured
in electron-volts, or of force (not mass) measured in kilograms?
In addition, having all magnitudes and units defined by the
application programmer keeps the code base small.
Boost.Units doesn't remove the opportunity to define your
own exotic units. However, in the common case we measure things
with standard units, so these are not bad to have as part
of the library. More defining and declaring work for the user means
more typos and more inconvenience.
Post by c***@gmail.com
I feel that, after you have learned that "point1" means
"one-dimensional absolute measure" and "vect1" means
"one-dimensional relative measure", the latter expression
is more understandable than the former.
All I said is that I would avoid forcing my users to
learn the meaning of abbreviations unknown in most problem
domains, but your mileage may vary there.
Post by c***@gmail.com
But as my library is still in development,
I accept suggestions for renaming.
What I suggested is to use "relative measure" or "relative
quantity" typed out literally. Note that "one-dimensional"
feels irrelevant for temperature: in what problem domain do we
have three-dimensional temperatures? A single value is
indeed technically an array of values with one element, but
we usually do not emphasize that.
Post by c***@gmail.com
If you want to represent the position (X=10", Y=12")
of an object in a plane, and move it by (X=3", Y=8")
to reach position (X=13", Y=20"),
point2<inches> p(10, 12);
p += vect2<inches>(3, 8);
cout << p << endl; // It outputs: 13 20"
That is too far from the math that is needed for dealing
with engines pulling around objects that are attached to
each other in a real or emulated world (IOW, scientific and
engineering applications).
Post by c***@gmail.com
How can you do that using Boost.Units or another units library
combined with a vector algebra package?
My impression is that *none* of those linear algebra libraries
and "measures" or "quantities" libraries are designed to play
well together. My suggestion was to do something that stands out
of the pack in that respect.
Post by c***@gmail.com
Cpp-Measures can perform unit checking on 2D and 3D measures,
while if you interface it with other libraries you lose
unit checking, as other libraries
sometimes perform unit-forbidden operations.
Similarly, a linear algebra library gives a compile-time
error if you try to multiply a 4x4 matrix with a 3x3 matrix.
Post by c***@gmail.com
However, you can interface Cpp-Measures with other libraries.
I can interface between anything, be it Haskell or Fortran
or Javascript; after all it is C++ (read: the One Ring) that I
wield. However, it is *inconvenient*. Why must it always be
so inconvenient? Why must I always squeeze the bits out of
one library through badly documented loopholes and then plug
them into the other? Especially when both proudly claim to be
meant for my "convenience" in writing scientific and
engineering applications. :D
Ronald
2014-10-04 22:19:19 UTC
Post by Öö Tiib
You seem to be saying that inside industrial devices or vehicles
(be it a bottle washer, a crane or a ship) there are some sort of
weak 8-bit micro-controllers? No, there are typically piles
of quite powerful processors in all equipment, especially where
it has to deal with temperatures, pressures, rotation speeds,
distances, voltages, you name it.
Claiming that software for those must be written in C is like
claiming that engineering or scientific analysis software has
to be written in Fortran.
I can second that: I've been working on embedded devices for over a decade
now, and while I've worked with micro-controllers that required very small
footprints, by far most devices I've seen had quite powerful processors,
and all could be programmed in C++.

For me, the important part of an engineering units library would be that it
upholds the "zero overhead" principle: if behind the scenes a "Volt" is
simply a float (or a double), it should take no more space than that float
and should be no more costly to work with. Ideally, it would also know, at
compile-time, that Volts multiplied by Amperes give Watts, and that it
doesn't make sense to add them; that a dimensionless value divided by
seconds gives Hertz, etc., with compile-time checks for the operations
that make sense and no run-time overhead.

That, btw, is something C cannot do (its type system is too weak).
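
To make the principle concrete, here is a minimal sketch of the idea
(generic C++11, not taken from either library): each dimension is a
distinct type wrapping a double, operators are defined only where they
make physical sense, and everything inlines away.

#include <iostream>

struct Volts { double v; };
struct Amperes { double v; };
struct Watts { double v; };

// Volts * Amperes -> Watts is allowed...
constexpr Watts operator*(Volts a, Amperes b) { return Watts{a.v * b.v}; }
// ...while Volts + Amperes is simply not defined, so it cannot compile.

static_assert(sizeof(Volts) == sizeof(double),
"the wrapper adds no storage overhead");

int main()
{
Watts w = Volts{3.0} * Amperes{4.0};
std::cout << w.v << " W\n"; // prints: 12 W
// Volts{3.0} + Amperes{4.0}; // error: no operator+ for these types
}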

<snip>

rlc
c***@gmail.com
2014-10-05 09:24:50 UTC
Post by Ronald
I can second that: I've been working on embedded devices for over a decade
now, and while I've worked with micro-controllers that required very small
footprints, by far most devices I've seen had quite powerful processors,
and all could be programmed in C++.
I think that small micro-controllers, that is, computers that address no more than 64 KB of code, are still used, and that for those the C++ language is not actually used and should not be used.
However, most micro-controllers are now not so small; for them C++ should be used, and so should a units-of-measurement library.

C++ has evolved since 2011, and I think that for successful processors C++ compilers have evolved or will evolve accordingly.
Therefore, I think that requiring C++11 compliance should not exclude too many C++ programmers in the forthcoming years.
Post by Ronald
For me, the important part of an engineering units library would be that it
upholds the "zero overhead" principle: if behind the scenes a "Volt" is
simply a float (or a double), it should take no more space than that float
and should be no more costly to work with.
I did that:
sizeof (vect1<radians,float>) == 4;
Post by Ronald
Ideally, it would also know, at
compile-time, that Volts multiplied by Amperes give Watts, and that it
doesn't make sense to add them;
After having defined
DEFINE_MAGNITUDE(ElectricPotential, volts, " V")
DEFINE_MAGNITUDE(ElectricCurrent, amperes, " A")
DEFINE_MAGNITUDE(Power, watts, " W")
DEFINE_DERIVED_MAGNITUDE_SCALAR_SCALAR(ElectricPotential,
ElectricCurrent, Power)

This statement
cout << vect1<volts>(3) * vect1<amperes>(4) << endl;
outputs "12 W",
and this statement generates a compilation error
cout << vect1<volts>(3) + vect1<amperes>(4) << endl;
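
Roughly speaking, the macro has to generate operator overloads of this
shape (a simplified sketch, not the exact generated code):

template <class Num1, class Num2>
vect1<watts, decltype(Num1() * Num2())>
operator*(vect1<volts,Num1> u, vect1<amperes,Num2> i)
{
return vect1<watts, decltype(Num1() * Num2())>(u.value() * i.value());
}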
Post by Ronald
that a dimensionless value divided by
seconds gives Hertz, etc., with compile-time checks for the operations
that make sense and no run-time overhead.
After having defined
DEFINE_MAGNITUDE(Unitless, units, " u.")
DEFINE_MAGNITUDE(Time, seconds, " s")
DEFINE_MAGNITUDE(Frequency, hertz, " Hz")
DEFINE_DERIVED_MAGNITUDE_SCALAR_SCALAR(Time, Frequency, Unitless)
this statement
cout << vect1<units>(10) / vect1<seconds>(2) << endl;
outputs "5 Hz".

--
Carlo Milanesi
Wouter van Ooijen
2014-10-05 10:03:56 UTC
Post by c***@gmail.com
I think that small micro-controllers, that is, computers that address no
more than 64 KB of code, are still used, and that for those the C++
language is not actually used and should not be used.
However, most micro-controllers are now not so small; for them
C++ should be used, and so should a units-of-measurement library.
Your "that address no more than 64 KB of code" is a bit ambiguous: there
are small 32-bit microcontroller chips that can *address* 2^32 bytes of
code (minus some RAM and peripherals) but *contain* only a few KB of Flash.
For such chips C++ is perfect, provided that it is used in an
appropriate way.
c***@gmail.com
2014-10-05 10:41:31 UTC
Post by Wouter van Ooijen
Your "that address no more than 64 KB of code" is a bit ambiguous: there
are small 32-bit microcontroller chips that can *address* 2^32 bytes of
code (minus some RAM and peripherals) but *contain* only a few KB of Flash.
For such chips C++ is perfect, provided that it is used in an
appropriate way.
I have no experience with using C++ on systems containing only a few KB of memory, and I am in no position to get such experience.
I leave it to others to evaluate the feasibility of using a C++ compiler and libraries on such systems.

For myself, I think that portability to micro-controllers is not a good-enough argument to avoid the use of the "decltype" keyword.

Maybe another problem is the fact that the library includes the standard header <unordered_map> to create 4 small hash tables used only by dynamic measures. If that is a problem, it may be possible to implement the hash-table code inside the library, or to remove dynamic measures via conditional compilation.
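
A possible shape for that conditional compilation (the macro name is hypothetical):

#if !defined(MEASURES_NO_DYNAMIC_UNITS)
#include <unordered_map>
// ... dynamic-measure machinery and its start-up tables ...
#endif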

--
Carlo Milanesi
Öö Tiib
2014-10-05 12:06:11 UTC
Post by c***@gmail.com
I have no experience with using C++ on systems containing only a few
KB of memory, and I am in no position to get such experience.
I leave it to others to evaluate the feasibility of using a C++
compiler and libraries on such systems.
Relax, it is fine. The C++ language is efficient to the edge when used
correctly, the same edge that C targets. C++ libraries are mostly
templates and header-only.
We only get from a library what we actually use, and from that only the
part that the compiler could not do at compile-time.
Post by c***@gmail.com
For myself, I think that portability to micro-controllers is not
a good-enough argument to avoid the use of the "decltype" keyword.
Forget thinking "micro-controllers". I bet your TV set, if you have
one (I don't), runs Linux somewhere inside it. It is not 1966
anymore. It is OK that you want to use C++11 features, they are
cool; it is just that you put it up as some sort of primary benefit
when comparing to Boost, while in real industry it is usually
a painful impediment.

If I were writing such a library I would also want to use 'auto',
'constexpr', variadic templates and 'static_assert' besides
'decltype' from C++11. These things help a lot when I want
to deal with, detect, or give diagnostics about the usage of
a lot of types in a mix at compile-time.

Boost contains some hacky workarounds and poor-man's substitutes
that can be used conditionally when real language features are not
available, and that is actually a large benefit of Boost.
Post by c***@gmail.com
Maybe another problem is the fact that the library includes
the standard header <unordered_map> to create 4 small hash tables
used only by dynamic measures. If that is a problem, it may be
possible to implement the hash-table code inside the library,
or to remove dynamic measures via conditional compilation.
I imagine such a dimensioned type system as static; nothing about it
feels dynamically polymorphic. So I can't imagine a need for virtual
functions, hash tables, or maps of function pointers.
Can you elaborate on the purpose and use cases supported by
your dynamic unordered map?
c***@gmail.com
2014-10-05 12:45:40 UTC
Post by Öö Tiib
Post by c***@gmail.com
I have no experience with using C++ on systems containing only a few
KB of memory, and I am in no position to get such experience.
I leave it to others to evaluate the feasibility of using a C++ compiler and libraries on such systems.
Relax, it is fine. The C++ language is efficient to the edge when used
correctly, the same edge that C targets. C++ libraries are mostly
templates and header-only.
We only get from a library what we actually use, and from that only the
part that the compiler could not do at compile-time.
But if I use a non-empty standard container (like "vector" or "unordered_map"), a memory allocator and an exception handler are included in my executable. For some applications that is not desirable.
Post by Öö Tiib
Post by c***@gmail.com
For myself, I think that portability to micro-controllers is not
a good-enough argument to avoid the use of the "decltype" keyword.
Forget thinking "micro-controllers". I bet your TV set, if you have
one (I don't), runs Linux somewhere inside it.
But then it uses more than a few KB of RAM.
Post by Öö Tiib
It is OK that you want to use C++11 features, they are
cool; it is just that you put it up as some sort of primary benefit
when comparing to Boost, while in real industry it is usually
a painful impediment.
Well, I presented it as a difference. It is a benefit for Boost.
Post by Öö Tiib
Boost contains some hacky workarounds and poor-man's substitutes
that can be used conditionally when real language features are not
available, and that is actually a large benefit of Boost.
I cannot imagine how to do the following with C++98:
template <typename T1, typename T2, typename T3>
T3 add(T1 a1, T2 a2) { return a1 + a2; }
Given that T1, T2, and T3 are floating-point number types, what is the type of T3?
Using C++11 I can write:
template <typename T1, typename T2>
decltype(T1()+T2()) add(T1 a1, T2 a2) { return a1 + a2; }
Post by Öö Tiib
Post by c***@gmail.com
Maybe another problem is the fact that the library includes
the standard header <unordered_map> to create 4 small hash tables
used only by dynamic measures. If that is a problem, it may be
possible to implement the hash-table code inside the library,
or to remove dynamic measures via conditional compilation.
I imagine such a dimensioned type system as static; nothing about it
feels dynamically polymorphic. So I can't imagine a need for virtual
functions, hash tables, or maps of function pointers.
Can you elaborate on the purpose and use cases supported by
your dynamic unordered map?
My library supports two kinds of measures, for different purposes:
* Measures with a statically-defined unit, to be used when the application programmer knows which unit to use for a variable.
* Measures with a dynamically-defined unit, not as efficient, to be used when the programmer knows which magnitude to use for a variable, and also knows a set of possible units for it, but the actual unit is defined only at run-time, as it depends on input data or on system configuration.

In the latter case, some lookup tables are used for the following purpose.
When two measures are multiplied, say a force by a length, the unit of the resulting measure must be chosen among the available units for energy. And as the units of force and length are dynamic, i.e. stored in variables, the unit of energy cannot be computed at compile-time; it must be looked up in a container.
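
A rough sketch of such a lookup (all names hypothetical, just to show the mechanism):

#include <cstdint>
#include <unordered_map>

typedef std::uint16_t unit_id;
const unit_id newtons_id = 0, metres_id = 1, joules_id = 2;

// Pack the two operand units into a single key.
inline std::uint32_t unit_pair(unit_id a, unit_id b)
{ return (static_cast<std::uint32_t>(a) << 16) | b; }

// Filled at program start-up: which energy unit results from
// multiplying a given force unit by a given length unit.
std::unordered_map<std::uint32_t, unit_id> product_unit;

void init_tables()
{
product_unit[unit_pair(newtons_id, metres_id)] = joules_id;
// ... one entry per supported unit combination ...
}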

--
Carlo Milanesi
Öö Tiib
2014-10-05 15:38:03 UTC
Post by c***@gmail.com
Post by Öö Tiib
Boost contains some hacky workarounds and poor-man's substitutes
that can be used conditionally when real language features are not
available, and that is actually a large benefit of Boost.
I cannot imagine how to do the following with C++98:
template <typename T1, typename T2, typename T3>
T3 add(T1 a1, T2 a2) { return a1 + a2; }
Given that T1, T2, and T3 are floating-point number types, what is the type of T3?
Impossible in C++98, but Boost uses pre-C++11 extensions that
the majority of compilers had (like 'typeof' or '__typeof').
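
For reference, the pre-C++11 workaround looks roughly like this with
Boost.Typeof (a sketch; BOOST_TYPEOF_TPL is the variant for dependent
contexts, and it works out of the box for built-in arithmetic types):

#include <boost/typeof/typeof.hpp>

template <typename T1, typename T2>
BOOST_TYPEOF_TPL(T1() + T2()) add(T1 a1, T2 a2) { return a1 + a2; }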
Post by c***@gmail.com
Using C++11 I can write:
template <typename T1, typename T2>
decltype(T1()+T2()) add(T1 a1, T2 a2) { return a1 + a2; }
Yes. C++11 resolved a lot of things that were sloppy in C++98/C++03.
Post by c***@gmail.com
My library supports two kinds of measures, for different purposes:
* Measures with a statically-defined unit, to be used when the
application programmer knows which unit to use for a variable.
* Measures with a dynamically-defined unit, not as efficient, to be
used when the programmer knows which magnitude to use for a variable,
and also knows a set of possible units for it, but the actual
unit is defined only at run-time, as it depends on input data or on
system configuration.
Ok, let me try to think about it.
1) It is most efficient to use values with fixed units, involving the
least amount of conversions during the math.
2) However, the user may want to see or enter milliseconds or minutes
instead of seconds, or something like that.
3) Also, users differ, so there is always controversy.

A solution: we let the user dynamically pick a variable (let's say,
pick the index of a member of a tuple or variant) that has the unit
he wants to use during I/O, and assign our internal variables to
or from it.

However, that user-picked member can well have a fixed unit and still
be used for I/O. So the need for dynamic units does not follow from
the mere need for a dynamic choice between different variables.
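
A sketch of that idea (the convert() helper is hypothetical; the math
always happens in the fixed internal unit, and the run-time choice
only selects an I/O conversion):

vect1<seconds> internal(90); // internal math always in seconds
if (user_wants_minutes)
cout << convert<minutes>(internal) << endl; // conversion at I/O only
else
cout << internal << endl;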
Post by c***@gmail.com
In the latter case, some lookup tables are used for the following purpose.
When two measures are multiplied, say a force by a length,
the unit of the resulting measure must be chosen among the available
units for energy. And as the units of force and length are dynamic,
i.e. stored in variables, the unit of energy cannot be computed at
compile-time; it must be looked up in a container.
Using variables with such dynamic types in math feels inefficient,
since there are likely (dynamically decided) conversions needed on the
fly. It hurts especially when there is a lot of math to do.

It is less crucial in I/O. In I/O, throughput and latency are the
bottlenecks. Modern processors are idle most of the time during I/O,
so they can use the spare time to convert things and whatnot. That is
why most web services are written in Java, C# or even PHP; only the
hugest sites like YouTube or Google use C++.

I suggest you consider this a bit: it might be that you can get rid
of the needless dynamic units, or that you can move them to the I/O
layer, or that you have some more reasons why they are needed
in the math.
Wouter van Ooijen
2014-10-05 16:44:07 UTC
Post by Öö Tiib
I suggest you consider this a bit: it might be that you can get rid
of the needless dynamic units, or that you can move them to the I/O
layer, or that you have some more reasons why they are needed
in the math.
But as far as I understand, as a user those dynamic units cost me
nothing when I don't use them. If that is correct, the cost of having
them is small (= skipping the (ir)relevant parts of the documentation),
so even a rare use case might be enough for them to be worthwhile.
Öö Tiib
2014-10-05 19:40:02 UTC
Post by Wouter van Ooijen
Post by Öö Tiib
I suggest you consider this a bit: it might be that you can get rid
of the needless dynamic units, or that you can move them to the I/O
layer, or that you have some more reasons why they are needed
in the math.
But as far as I understand, as a user those dynamic units cost me
nothing when I don't use them. If that is correct, the cost of having
them is small (= skipping the (ir)relevant parts of the documentation),
so even a rare use case might be enough for them to be worthwhile.
AFAIK all compilers have difficulty optimizing out any sort of
dynamic polymorphism, even if it is not used. If the efficiency drop
is of any significance, then it may be an unneeded choice that
experienced users will, as a rule, decide one way. The author still
has to maintain code and documentation (useless work). So the better
idea is to consider it beforehand.
c***@gmail.com
2014-10-06 17:50:23 UTC
Post by Öö Tiib
AFAIK all compilers have difficulty optimizing out any sort of
dynamic polymorphism, even if it is not used. If the efficiency drop
is of any significance, then it may be an unneeded choice that
experienced users will, as a rule, decide one way. The author still
has to maintain code and documentation (useless work). So the better
idea is to consider it beforehand.
As I wrote elsewhere, such dynamic units are much less efficient, by at least 6 times, according to a rough benchmark.

My idea was to avoid any impact on performance when they are not needed.
But actually, with the current implementation, they have a small impact, as they allocate and fill some small collections at program start-up. Therefore they require some memory for data and some memory for the machine code of "vector", "unordered_set", and the required default memory allocator and exception-handling mechanism.
For those who want absolutely zero overhead, the implementation has to be reworked.
I will work on that.

But I wonder whether such kinds of measures are really needed by anyone.
I mean, if an application must let the user choose between inches and millimetres, or between degrees and radians, is it reasonable to force the application programmer to generate machine code for just one unit per magnitude, converting the input value from the unit required by the user and converting the result back to that unit at output, or is it better to force the application programmer to generate machine code for all the supported units (possibly using templates)?

Here is some sample code that gets from the user the desired unit of measurement and a value in that unit, does some computation (it computes the triple of the value), and prints the result in the same unit as the input value.

With no library:
int unit = get_desired_unit();
double value = get_desired_value();
double result = value * 3; // fast computation
print_value_and_unit(result, unit);

With dynamically-defined unit measures:
int unit = get_desired_unit();
double value = get_desired_value();
dyn_vect1<Space> dyn_value(unit, value); // encapsulate
dyn_vect1<Space> result = dyn_value * 3; // slow computation
print_value_and_unit(result); // decapsulate

With several cases of statically-defined unit measures:
int unit = get_desired_unit();
double value = get_desired_value();
switch (unit)
{
case inches_id:
{
vect1<inches> value_inches(value);
vect1<inches> result_inches
= value_inches * 3; // fast computation
print_value_and_unit(result_inches); // overload
break;
}
case millimetres_id:
{
vect1<millimetres> value_millimetres(value);
vect1<millimetres> result_millimetres
= value_millimetres * 3; // fast computation
print_value_and_unit(result_millimetres); // overload
break;
}
}

With single case of statically-defined unit measures:
int unit = get_desired_unit();
double value = get_desired_value();
// The value is always converted
// from the specified unit to inches.
vect1<inches> value_inches(
convert_from_to(unit, inches_id, value));
// Computation is done always in inches.
// fast computation
vect1<inches> result_inches = value_inches * 3;
// The value of the result is always converted
// from inches to the specified unit.
double result = convert_from_to(inches_id, unit,
result_inches.value());
print_value_and_unit(result); // Unique output routine

Which solution is better?

--
Carlo Milanesi
Wouter van Ooijen
2014-10-06 18:46:57 UTC
Post by c***@gmail.com
As I wrote elsewhere, such dynamic units are much less efficient, by at least 6 times, according to a rough benchmark.
My idea was to avoid any impact on performance when they are not needed.
But actually, with the current implementation, they have a small impact, as they allocate and fill some small collections at program start-up. Therefore they require some memory for data and some memory for the machine code of "vector", "unordered_set", and the required default memory allocator and exception-handling mechanism.
For those who want absolutely zero overhead, the implementation has to be reworked.
I will work on that.
From the perspective of very small real-time systems (microcontrollers
with KBs or tens of KBs of memory): some small code and data overhead
is in most cases not a problem, but such systems often have no heap and
no exception handling (which in itself often 'drags in' a substantial
code & data overhead), so being forced to have either of those could be
a killer for the use of the library in such systems.
Öö Tiib
2014-10-06 21:44:35 UTC
Post by c***@gmail.com
As I wrote elsewhere, such dynamic units are much less efficient, by at least 6 times, according to a rough benchmark.
My idea was to avoid any impact on performance when they are not needed.
But actually, with the current implementation, they have a small impact, as they allocate and fill some small collections at program start-up. Therefore they require some memory for data and some memory for the machine code of "vector", "unordered_set", and the required default memory allocator and exception-handling mechanism.
For those who want absolutely zero overhead, the implementation has to be reworked.
I will work on that.
That is nice.
Post by c***@gmail.com
But I wonder whether such kinds of measures are really needed by anyone.
I was also thinking that those are maybe not needed at all.
Post by c***@gmail.com
I mean, if an application must let the user choose between inches
and millimetres, or between degrees and radians, is it reasonable
to force the application programmer to generate machine code for just
one unit per magnitude, converting the input value from the unit
required by the user and converting the result back to that unit at
output, or is it better to force the application programmer to
generate machine code for all the supported units (possibly using
templates)?
Which solution is better?
The last one. In my high school (ages ago) our physics teacher always
demanded that we first convert all assignment data
into SI units, then do the math, and then convert the results into
the units required for the answer. Behaving like that is most robust,
since we only needed to know the formulas in base SI units,
and for the rest of the units we only needed to know how to
convert to/from the base SI units.
Ronald
2014-10-05 16:23:37 UTC
Post by c***@gmail.com
Post by Ronald
Post by Öö Tiib
Post by c***@gmail.com
Post by Öö Tiib
Post by c***@gmail.com
Boost.Units supports 12-year-old compilers, while Cpp-Measures
requires, and takes advantage of, the parts of C++11 available
in GCC and VC++ 2012.
Yes, Boost perhaps supports compilers that are too exotic, and that
sometimes makes its code hard to follow. OTOH the development
tools for most electronic devices around us catch up slowly,
so supporting only a few of the latest compilers may narrow the target
audience down too much.
You are right, but I target mainly engineering and scientific
(but not theoretical physics) software, not small micro-controllers,
for which C is generally preferred to C++. And I found
the "decltype" keyword very useful; I used it a lot.
You seemingly say that inside industrial devices or vehicles
(be it a bottle washer, a crane or a ship) there are some sort of
weak 8-bit micro-controllers? No, there are typically piles
of quite powerful processors in all equipment, especially where
it has to deal with temperatures, pressures, rotation speeds,
distances, voltages and what not.
Claiming that software for those must be written in C is like
claiming that engineering or scientific analysis software has
to be written in Fortran.
I can second that: I've been working on embedded devices for over a decade
now and while I've worked with micro-controllers that required very small
footprints, by far most devices I've seen had quite powerful processors,
and all could be programmed in C++.
I think that small micro-controllers, that is computers that address no
more than 64 KB of code, are still used, and that for them the C++
language is not actually used and should not be used.
[putting flamethrower away]
Let's agree to disagree, shall we?

<snip>
Post by c***@gmail.com
Post by Ronald
For me, the important part of an engineering units library would be that it
upholds the "zero overhead" principle: if behind the scenes a "Volt" is
simply a float (or a double), it should take no more space than that float
and should be no more costly to work with.
sizeof (vect1<radians,float>) == 4;
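That claim can be verified at compile time. A minimal sketch, assuming the library header below is on the include path and that radians is defined as in the tutorial:

#include <measures_io.hpp>
using namespace measures;

// compile-time check: the wrapper is exactly as big as its value type
static_assert(sizeof (vect1<radians,float>) == sizeof (float),
    "vect1<radians,float> must add no space overhead");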
Post by Ronald
Ideally, it would also know, at
compile-time, that Volts multiplied by Amperes give a Watts, and that it
doesn't make sense to add them;
After having defined
DEFINE_MAGNITUDE(ElectricPotential, volts, " V")
DEFINE_MAGNITUDE(ElectricCurrent, amperes, " A")
DEFINE_MAGNITUDE(Power, watts, " W")
DEFINE_DERIVED_MAGNITUDE_SCALAR_SCALAR(ElectricPotential,
ElectricCurrent, Power)
This statement
cout << vect1<volts>(3) * vect1<amperes>(4) << endl;
outputs "12 W",
and this statement generates a compilation error
cout << vect1<volts>(3) + vect1<amperes>(4) << endl;
Post by Ronald
that a dimensionless value divided by
seconds would give Hertz, etc., with compile-time checks for the operations
that make sense and no run-time overhead.
After having defined
DEFINE_MAGNITUDE(Unitless, units, " u.")
DEFINE_MAGNITUDE(Time, seconds, " s")
DEFINE_MAGNITUDE(Frequency, hertz, " Hz")
DEFINE_DERIVED_MAGNITUDE_SCALAR_SCALAR(Time, Frequency, Unitless)
this statement
cout << vect1<units>(10) / vect1<seconds>(2) << endl;
outputs "5 Hz".
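For reference, the fragments above assemble into a complete program roughly like this (a sketch; the include and using directives are the ones given later in this thread):

#include <measures_io.hpp>
using namespace measures;
using namespace std;

DEFINE_MAGNITUDE(ElectricPotential, volts, " V")
DEFINE_MAGNITUDE(ElectricCurrent, amperes, " A")
DEFINE_MAGNITUDE(Power, watts, " W")
DEFINE_DERIVED_MAGNITUDE_SCALAR_SCALAR(ElectricPotential,
    ElectricCurrent, Power)

int main()
{
    cout << vect1<volts>(3) * vect1<amperes>(4) << endl; // prints "12 W"
    //cout << vect1<volts>(3) + vect1<amperes>(4) << endl; // compilation error
}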
That looks nice!
I'm assuming you'd have some "using namespace"s to make this actually work?
Also, when compiled, there's no difference between a volt * amperes and a
float * float?

I'll go take a look :)

rlc
c***@gmail.com
2014-10-05 17:41:25 UTC
Permalink
Post by Ronald
Post by c***@gmail.com
Post by Ronald
Ideally, it would also know, at
compile-time, that Volts multiplied by Amperes give a Watts, and that it
doesn't make sense to add them;
After having defined
DEFINE_MAGNITUDE(ElectricPotential, volts, " V")
DEFINE_MAGNITUDE(ElectricCurrent, amperes, " A")
DEFINE_MAGNITUDE(Power, watts, " W")
DEFINE_DERIVED_MAGNITUDE_SCALAR_SCALAR(ElectricPotential,
ElectricCurrent, Power)
This statement
cout << vect1<volts>(3) * vect1<amperes>(4) << endl;
outputs "12 W",
and this statement generates a compilation error
cout << vect1<volts>(3) + vect1<amperes>(4) << endl;
Post by Ronald
that a dimensionless value divided by
seconds would give Hertz, etc., with compile-time checks for the operations
that make sense and no run-time overhead.
After having defined
DEFINE_MAGNITUDE(Unitless, units, " u.")
DEFINE_MAGNITUDE(Time, seconds, " s")
DEFINE_MAGNITUDE(Frequency, hertz, " Hz")
DEFINE_DERIVED_MAGNITUDE_SCALAR_SCALAR(Time, Frequency, Unitless)
this statement
cout << vect1<units>(10) / vect1<seconds>(2) << endl;
outputs "5 Hz".
That looks nice!
I'm assuming you'd have some "using namespace"s to make this actually work?
To work, it also needs the following statements:
#include <measures_io.hpp>
using namespace measures;
using namespace std;
Post by Ronald
Also, when compiled, there's no difference between a volt * amperes and a
float * float?
vect1<volts,float> * vect1<amperes,float> should generate the same machine code as float * float, but only if compiled with optimization on.
I didn't check the machine code, but the run times are the same.

vect1<volts> * vect1<amperes> (with the default double value type) is equivalent to double * double.
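One way to check is to compare the code generated for two functions like these (a sketch, reusing the volts/amperes/watts definitions shown earlier; with optimization on, both should reduce to a single multiplication):

float raw_power( float v, float a )
{
    return v * a;
}

vect1<watts,float> typed_power( vect1<volts,float> v, vect1<amperes,float> a )
{
    return v * a; // same machine code as raw_power when optimized
}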
Post by Ronald
I'll go take a look :)
Then I suggest you start from the Tutorial, and do not try to build the tests.

--
Carlo Milanesi
Geoff
2014-10-05 19:32:51 UTC
Permalink
Post by c***@gmail.com
After having defined
DEFINE_MAGNITUDE(ElectricPotential, volts, " V")
DEFINE_MAGNITUDE(ElectricCurrent, amperes, " A")
DEFINE_MAGNITUDE(Power, watts, " W")
DEFINE_DERIVED_MAGNITUDE_SCALAR_SCALAR(ElectricPotential,
ElectricCurrent, Power)
This statement
cout << vect1<volts>(3) * vect1<amperes>(4) << endl;
outputs "12 W",
and this statement generates a compilation error
cout << vect1<volts>(3) + vect1<amperes>(4) << endl;
Speaking as an Electronic/Electrical engineer of some 38 years in
practice:

Volts * Amps is Watts only if both values are scalar. (e.g., It's a DC
circuit or it's an AC circuit with no reactive components.)

Volts * Amps is always VA in reactive AC circuits when the values are
scalar. (i.e., you only know the magnitudes of the vectors and not the
angles.) Power plant operators only speak of Mega-VARs when talking
about generating capacity, seldom Watts. A 2000 MW hydro-electric plant
would actually be producing some 2500 MVA at PF=0.8, a very real
difference.

In DC circuits VA is Watts but only used when the context is known.

If Volts and Amps and the power factor (PF) are known then Watts =
Volts * Amps * PF when they are all scalar.

If Volts and Amps are vectors, only then can you compute Watts
directly.

A proper class to handle these computations would convert scalars to
vectors of the form v = x + iy where y is zero. Then it would display
Watts if the product had a zero imaginary component, and VA if not.
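A minimal sketch of that idea using std::complex (this is not part of the measures library; the display rule is the one just described):

#include <complex>
#include <iostream>

// complex apparent power: S = V * conj(I);
// print watts when S is purely real, VA otherwise
void print_power( std::complex<double> volts, std::complex<double> amps )
{
    std::complex<double> s = volts * std::conj( amps );
    if( s.imag() == 0.0 )
        std::cout << s.real() << " W\n";
    else
        std::cout << std::abs( s ) << " VA\n";
}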

I downloaded your source but I have not been able to successfully
compile it.
c***@gmail.com
2014-10-06 17:58:26 UTC
Permalink
Post by Geoff
Post by c***@gmail.com
After having defined
DEFINE_MAGNITUDE(ElectricPotential, volts, " V")
DEFINE_MAGNITUDE(ElectricCurrent, amperes, " A")
DEFINE_MAGNITUDE(Power, watts, " W")
DEFINE_DERIVED_MAGNITUDE_SCALAR_SCALAR(ElectricPotential,
ElectricCurrent, Power)
This statement
cout << vect1<volts>(3) * vect1<amperes>(4) << endl;
outputs "12 W",
and this statement generates a compilation error
cout << vect1<volts>(3) + vect1<amperes>(4) << endl;
Speaking as an Electronic/Electrical engineer of some 38 years in
Volts * Amps is Watts only if both values are scalar. (e.g., It's a DC
circuit or it's an AC circuit with no reactive components.)
Volts * Amps is always VA in reactive AC circuits when the values are
scalar. (i.e., you only know the magnitudes of the vectors and not the
angles.) Power plant operators only speak of Mega-VARs when talking
about generating capacity, seldom Watts. A 2000 MW hydro-electric plant
would actually be producing some 2500 MVA at PF=0.8, a very real
difference.
In DC circuits VA is Watts but only used when the context is known.
If Volts and Amps and the power factor (PF) are known then Watts =
Volts * Amps * PF when they are all scalar.
If Volts and Amps are vectors, only then can you compute Watts
directly.
A proper class to handle these computations would convert scalars to
vectors of the form v = x + iy where iy is zero. Then it would display
Watts if the product had a zero imaginary component and VA if not.
The "measures" library has no knowledge of electricity.
It just assumes that every magnitude is primitive or it is obtained by multiplying or by dividing two other magnitudes, or by squaring or by taking the square root of another magnitude, it assumes also and that every unit of a magnitude is the only base unit of its magnitude, or it may be obtained by multiplying the base unit by a constant ratio and translating its origin by a constant offset.
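In other words, every conversion goes through the base unit of the magnitude, using only a constant ratio and a constant offset. A minimal sketch of that model (the names here are illustrative, not the library's API):

#include <cassert>

// every non-base unit = base unit * ratio + offset
struct unit_info { double ratio, offset; };

// Length, with millimetres as the base unit:
const unit_info millimetres = { 1.0, 0.0 };
const unit_info inches = { 25.4, 0.0 };

// Temperature, with celsius as the base unit:
const unit_info celsius = { 1.0, 0.0 };
const unit_info fahrenheit = { 5.0 / 9.0, -160.0 / 9.0 };

double convert( unit_info from, unit_info to, double v )
{
    double base = v * from.ratio + from.offset; // to the base unit
    return ( base - to.offset ) / to.ratio;     // to the target unit
}

int main()
{
    assert( convert( inches, millimetres, 2.0 ) == 50.8 );
    double c = convert( fahrenheit, celsius, 212.0 );
    assert( c > 99.99 && c < 100.01 ); // 212 F is 100 C
}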
Post by Geoff
I downloaded your source but I have not been able to successfully
compile it.
The library is header-only. You just need to include it and compile your source. The other files are there for testing the library; but to run them, a complex setup is required, which is not yet documented.

To try the library, read the Tutorial document.

--
Carlo Milanesi
Dombo
2014-10-05 20:37:59 UTC
Permalink
Post by c***@gmail.com
I think that small micro-controllers, that is computers that address
no more than 64 KB of code, are still used, and that for that C++
language is not actually used and should not be actually used.
The Arduino people felt differently; few people realize that the Sketch
programming language used to program a lowly 8-bit Atmel AVR
micro-controller with significantly less memory than 64KB is actually
C++. I wouldn't say that that is the best example of C++ on small
micro-controllers, but I see few problems with judicious use of C++ on
those other than that the benefits of C++ may be less significant for
small programs.
c***@gmail.com
2014-10-06 17:38:08 UTC
Permalink
Post by Dombo
Post by c***@gmail.com
I think that small micro-controllers, that is computers that address
no more than 64 KB of code, are still used, and that for that C++
language is not actually used and should not be actually used.
The Arduino people felt differently; few people realize that the Sketch
programming language used to program a lowly 8-bit Atmel AVR
micro-controller with significantly less memory than 64KB is actually
C++. I wouldn't say that that is the best example of C++ on small
micro-controllers, but I see few problems with judicious use of C++ on
those other than that the benefits of C++ may be less significant for
small programs.
According to the TIOBE index ( http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html ), there are more than three times as many C programmers as C++ programmers. What kind of software do they develop, if even lowly 8-bit micro-controllers run C++ software?
Here I don't want to discuss which language is better,
but I have heard that C++ is much less used than C (or assembly) for applications that need to keep the code under 64 KB.
However, as I now see so much interest, I will try to take those platforms into account, but without removing the requirement for some C++11 conformance.

--
Carlo Milanesi
Scott Lurndal
2014-10-06 18:15:39 UTC
Permalink
Post by c***@gmail.com
I think that small micro-controllers, that is computers that address
no more than 64 KB of code, are still used, and that for them the C++
language is not actually used and should not be used.
The Arduino people felt differently; few people realize that the Sketch
programming language used to program a lowly 8-bit Atmel AVR
micro-controller with significantly less memory than 64KB is actually
C++. I wouldn't say that that is the best example of C++ on small
micro-controllers, but I see few problems with judicious use of C++ on
those other than that the benefits of C++ may be less significant for
small programs.
According to the TIOBE index ( http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html ), there are more than three times as many C programmers as C++ programmers. What kind of software do they develop, if even lowly 8-bit micro-controllers run C++ software?
TIOBE data is meaningless.

There is 40 years of legacy C code out there, including several operating
systems, Oracle's RDBMS and many other very large codebases. Nobody in their
right mind would spend the $$ to rewrite it in C++.
Luca Risolia
2014-10-06 19:35:06 UTC
Permalink
Post by Scott Lurndal
There is 40 years of legacy C code out there, including several operating
systems, Oracle's RDBMS and many other very large codebases.
Nobody in their right mind would spend the $$ to rewrite it in C++.
More importantly, nobody would spend the $$ to write anything similar in C.
Wouter van Ooijen
2014-10-06 19:00:53 UTC
Permalink
Post by c***@gmail.com
Post by Dombo
Post by c***@gmail.com
I think that small micro-controllers, that is computers that address
no more than 64 KB of code, are still used, and that for that C++
language is not actually used and should not be actually used.
The Arduino people felt differently; few people realize that the Sketch
programming language used to program a lowly 8-bit Atmel AVR
micro-controller with significantly less memory than 64KB is actually
C++. I wouldn't say that that is the best example of C++ on small
micro-controllers, but I see few problems with judicious use of C++ on
those other than that the benefits of C++ may be less significant for
small programs.
According the TIOBE index ( http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html ), there are more than three times as many C programmers as C++ programmers. What kind of software do they develop, if even lowly 8-bit micro-controllers run C++ software?
There is a big difference between 'the chip *can* run C++' and 'everyone
uses C++ on that chip'. IMO nearly all users of C that have a real
option (management, compiler, other tools) to use C++ would be better
off doing so, even if that would mean using only the 'better C' subset.
But evidently not everyone agrees...
Post by c***@gmail.com
Here I don't want to speak about which language is better,
but I heard that C++ is much less used than C (or assembly) language for applications that need to keep code less than 64 KB.
However, as I now see so much interest, I will try to keep those platforms into account, but without removing the requirement for some C++11 conformance.
Those chips (< 64 KB in your characterization) often deal with physical
values (sensors, actuators, time), so they have ample opportunity to
make the errors the library presumably aims to prevent (and there are
benefits beyond just preventing errors).

But IME the "let's do it the way we did the previous project, that
worked OK" attitude is even more prevalent in such projects than in
PC-level programming. I think acceptance in those circles requires that
the impact of using the library, especially on code/data size,
performance, and secondary requirements like heap and exception
handling, is minimal. Being header-only and not being part of a huge
lib like Boost (whether that is a valid argument or not) is surely a
good start.
Post by c***@gmail.com
--
Carlo Milanesi
David Brown
2014-10-07 08:22:58 UTC
Permalink
Post by c***@gmail.com
Post by Dombo
Post by c***@gmail.com
I think that small micro-controllers, that is computers that
address no more than 64 KB of code, are still used, and that for
that C++ language is not actually used and should not be actually
used.
The Arduino people felt differently; few people realize that the
Sketch programming language used to program a lowly 8-bit Atmel AVR
micro-controller with significantly less memory than 64KB is
actually C++. I wouldn't say that that is the best example of C++
on small micro-controllers, but I see few problems with judicious
use of C++ on those other than that the benefits of C++ may be less
significant for small programs.
According the TIOBE index (
http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html ),
there are more than three times as many C programmers as C++
programmers. What kind of software do they develop, if even lowly
8-bit micro-controllers run C++ software? Here I don't want to speak
about which language is better, but I heard that C++ is much less
used than C (or assembly) language for applications that need to keep
code less than 64 KB. However, as I now see so much interest, I will
try to keep those platforms into account, but without removing the
requirement for some C++11 conformance.
As others have noted, TIOBE is /completely/ pointless as a judge of how
much something is used.

There are massive amounts of C code in common use on all sorts of
platforms, and much of it is under continuous development.

Regarding embedded systems, there is no magical "64KB" boundary as you
seem to think. There is, however, a stronger bias towards C rather than
C++ as systems get smaller. At the bottom end, small microcontrollers
often have very limited cpus - they are barely suitable for C
programming, never mind C++ style programming. Toolchain vendors for
such devices are limited, and their tools are limited - often there
simply are no C++ compilers available.

For bigger processors, C is still the preferred choice for a lot of
embedded programming - and when C++ is used, it is often used in a way
that differs significantly from desktop or "big system" C++ programming.

In particular, in small embedded systems there is an emphasis on code
size (thus one avoids large libraries), static behaviour (heaps, dynamic
memory, virtual functions, etc., are banned or discouraged), clear code
flow (so you avoid exceptions), and code correctness (this also means
knowing all your code, and therefore keeping source sizes to the minimum).

I think, however, there is a trend towards more C++ even in small
systems (just as assembly has been mostly pushed out in favour of C).
Part of this is that "small" microcontrollers have been getting "bigger"
(in particular, Cortex M cores have pushed out a lot of 8-bit cores).
Part of this is that the tools are getting better, and part of it is
that the language is getting better (C++11 has a lot of improvements).

Regarding compatibility of a C++ library with "small embedded C++",
there are a few things to consider:

There should be no use of exceptions or RTTI - these are almost always
disabled on small systems. Some embedded development tools do not have
support for them at all, and even when they are supported they lead to
very significant extra library code, limits on optimisation, extra code
in use, and most importantly, exceptions make it hard to be sure that
everything is correct because they introduce "hidden gotos".
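For example, on GCC-based bare-metal toolchains these features are typically switched off explicitly (a sketch; the flags are standard GCC options):

arm-none-eabi-g++ -std=c++11 -Os -fno-exceptions -fno-rtti -c main.cpp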

There should be no reliance on the heap. With small systems
programming, you have limited memory, and you need to know that
everything fits - you do not want to leave memory management to
run-time. As much as you possibly can, you want all memory to be
allocated statically so that you can see exactly what is used, and check
it at compile-time. It also means smaller and faster code - on some
processors, several times faster. One "innocent" use of std::vector
rather than std::array can use up half your code flash space and make it
impossible to analyse your memory usage fully.
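A sketch of the static alternative (nothing library-specific here): the buffer's size is part of the type, so its memory use is visible at compile and link time:

#include <array>

std::array<float, 32> samples; // 128 bytes, statically allocated
//std::vector<float> samples;  // would drag in the heap allocator and
//                             // hide the memory usage until run time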

Virtual functions, and polymorphism in general, should be avoided unless
they really help. Multiple inheritance is right out. In general, you
want the compiler to know as much as possible at compile time. It
doesn't matter if that means re-compiling lots of code in every build -
compile time is cheap, but run time is expensive.
c***@gmail.com
2014-10-07 10:12:38 UTC
Permalink
Post by David Brown
Post by c***@gmail.com
Post by Dombo
Post by c***@gmail.com
I think that small micro-controllers, that is computers that
address no more than 64 KB of code, are still used, and that for
that C++ language is not actually used and should not be actually
used.
The Arduino people felt differently; few people realize that the
Sketch programming language used to program a lowly 8-bit Atmel AVR
micro-controller with significantly less memory than 64KB is
actually C++. I wouldn't say that that is the best example of C++
on small micro-controllers, but I see few problems with judicious
use of C++ on those other than that the benefits of C++ may be less
significant for small programs.
According the TIOBE index (
http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html ),
there are more than three times as many C programmers as C++
programmers. What kind of software do they develop, if even lowly
8-bit micro-controllers run C++ software? Here I don't want to speak
about which language is better, but I heard that C++ is much less
used than C (or assembly) language for applications that need to keep
code less than 64 KB. However, as I now see so much interest, I will
try to keep those platforms into account, but without removing the
requirement for some C++11 conformance.
As others have noted, TIOBE is /completely/ pointless as a judge of how
much something is used.
I am eager to know what your more reliable sources are regarding the use of programming languages and programming tools.
Post by David Brown
There are massive amounts of C code in common use on all sorts of
platforms, and much of it is under continuous development.
I agree.
Post by David Brown
Regarding embedded systems, there is no magical "64KB" boundary as you
seem to think.
There is a magical boundary: the fact that a 16-bit pointer can address only 65536 different memory locations. Many processors have machine instructions containing 16-bit pointers, for example the Zilog Z80 and the Intel 8086.
Post by David Brown
There is, however, a stronger bias towards C rather than
C++ as systems get smaller. At the bottom end, small microcontrollers
often have very limited cpus - they are barely suitable for C
programming, never mind C++ style programming. Toolchain vendors for
such devices are limited, and their tools are limited - often there
simply are no C++ compilers available.
I agree.
Post by David Brown
For bigger processors, C is still the preferred choice for a lot of
embedded programming - and when C++ is used, it is often used in a way
that differs significantly from desktop or "big system" C++ programming.
I agree.
Post by David Brown
In particular, in small embedded systems there is an emphasis on code
size (thus one avoids large libraries), static behaviour (heaps, dynamic
memory, virtual functions, etc., are banned or discouraged), clear code
flow (so you avoid exceptions), and code correctness (this also means
knowing all your code, and therefore keeping source sizes to the minimum).
I agree.
Post by David Brown
I think, however, there is a trend towards more C++ even in small
systems (just as assembly has been mostly pushed out in favour of C).
Part of this is that "small" microcontrollers have been getting "bigger"
(in particular, Cortex M cores have pushed out a lot of 8-bit cores).
Part of this is that the tools are getting better, and part of it is
that the language is getting better (C++11 has a lot of improvements).
I agree, but actually I don't know how much of C++11 is supported by embedded systems development tools.
Post by David Brown
There should be no use of exceptions or RTTI ...
There should be no reliance on the heap. ...
Virtual functions, and polymorphism in general, should be avoided ...
I developed my library targeting non-real-time systems with more than 1 MB of code space (I prefer to use numbers instead of generic phrases like "big memory"), but as I see that it is considered more useful for small embedded systems, I am going to change it, re-targeting it also for small-memory real-time systems.

--
Carlo Milanesi
Wouter van Ooijen
2014-10-07 10:24:56 UTC
Permalink
Post by c***@gmail.com
Post by David Brown
Post by c***@gmail.com
Post by Dombo
Post by c***@gmail.com
I think that small micro-controllers, that is computers that
address no more than 64 KB of code, are still used, and that for
that C++ language is not actually used and should not be actually
used.
The Arduino people felt differently; few people realize that the
Sketch programming language used to program a lowly 8-bit Atmel AVR
micro-controller with significantly less memory than 64KB is
actually C++. I wouldn't say that that is the best example of C++
on small micro-controllers, but I see few problems with judicious
use of C++ on those other than that the benefits of C++ may be less
significant for small programs.
According the TIOBE index (
http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html ),
there are more than three times as many C programmers as C++
programmers. What kind of software do they develop, if even lowly
8-bit micro-controllers run C++ software? Here I don't want to speak
about which language is better, but I heard that C++ is much less
used than C (or assembly) language for applications that need to keep
code less than 64 KB. However, as I now see so much interest, I will
try to keep those platforms into account, but without removing the
requirement for some C++11 conformance.
As others have noted, TIOBE is /completely/ pointless as a judge of how
much something is used.
I am eager to know which are your more reliable sources regarding the use of programming languages and programming tools.
Post by David Brown
There are massive amounts of C code in common use on all sorts of
platforms, and much of it is under continuous development.
I agree.
Post by David Brown
Regarding embedded systems, there is no magical "64KB" boundary as you
seem to think.
There is a magical boundary, that is the fact that a 16-bit pointer can address only 65536 different memory locations. Many processors have machine instructions containing 16-bit pointers, for example Zilog Z80 and Intel 8086.
Post by David Brown
There is, however, a stronger bias towards C rather than
C++ as systems get smaller. At the bottom end, small microcontrollers
often have very limited cpus - they are barely suitable for C
programming, never mind C++ style programming. Toolchain vendors for
such devices are limited, and their tools are limited - often there
simply are no C++ compilers available.
I agree.
Post by David Brown
For bigger processors, C is still the preferred choice for a lot of
embedded programming - and when C++ is used, it is often used in a way
that differs significantly from desktop or "big system" C++ programming.
I agree.
Post by David Brown
In particular, in small embedded systems there is an emphasis on code
size (thus one avoids large libraries), static behaviour (heaps, dynamic
memory, virtual functions, etc., are banned or discouraged), clear code
flow (so you avoid exceptions), and code correctness (this also means
knowing all your code, and therefore keeping source sizes to the minimum).
I agree.
Post by David Brown
I think, however, there is a trend towards more C++ even in small
systems (just as assembly has been mostly pushed out in favour of C).
Part of this is that "small" microcontrollers have been getting "bigger"
(in particular, Cortex M cores have pushed out a lot of 8-bit cores).
Part of this is that the tools are getting better, and part of it is
that the language is getting better (C++11 has a lot of improvements).
I agree, but actually I don't know how much of C++11 is supported by embedded systems development tools.
GCC targets a lot of the small chips.
Post by c***@gmail.com
Post by David Brown
There should be no use of exceptions or RTTI ...
I agree
Post by c***@gmail.com
Post by David Brown
There should be no reliance on the heap. ...
I agree halfway: new/new[] is ok, but delete/delete[] is not. Hence the
only practical use of new is in the startup.
Post by c***@gmail.com
Post by David Brown
Virtual functions, and polymorphism in general, should be avoided ...
Here I am not so sure. Run-time polymorphism can and probably should be
used when it offers an advantage (code size!). Compile-time polymorphism
is OK.
Post by c***@gmail.com
I developed my library targeted for non-real-time systems with more that 1 MB of code space (I prefer to use numbers instead of generic phrases as "big memory"), but as I see that it is considered more useful for small embedded-systems, I am going to change it, re-targeted also for small-memory real-time systems.
--
Carlo Milanesi
David Brown
2014-10-07 12:34:30 UTC
Permalink
<snip>
Post by Wouter van Ooijen
Post by David Brown
There should be no use of exceptions or RTTI ...
I agree
Post by David Brown
There should be no reliance on the heap. ...
I agree halfway: new/new[] is ok, but delete/delete[] is not. Hence the
only practical use of new is in the startup.
I halfway agree with your halfway agreement...

Without delete/delete[]/free, you don't get memory fragmentation or
non-deterministic calls, and once you've finished your startup you
either have enough memory, or you don't. Your malloc (underlying the
new/new[]) just treats memory as a simple stack.
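A sketch of such a startup-only allocator (hypothetical, not from any library discussed here): memory is handed out from a static pool and never returned, so there is no fragmentation and no non-determinism:

#include <cstddef>

namespace {
    alignas(8) unsigned char pool[ 4096 ]; // the whole "heap", sized at link time
    std::size_t next = 0;
}

void * startup_alloc( std::size_t n )
{
    n = ( n + 7 ) & ~std::size_t( 7 );     // keep allocations aligned
    if( next + n > sizeof pool ) return 0; // out of memory: fail early
    void * p = pool + next;
    next += n;
    return p;
}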

However, static allocation is still more amenable to compile-time
checking, and still leads to smaller and faster code (especially on
chips that have poor address registers).
Post by Wouter van Ooijen
Post by David Brown
Virtual functions, and polymorphism in general, should be avoided ...
Here I am not so sure. run-time polymorphism can and probably should be
used when it offers an advantage (code size!). compile-time polymorphism
is OK.
Yes, compile-time polymorphism is fine. Run-time polymorphism and
virtual functions /can/ be a good thing, and often compare well to
alternatives such as tables of function pointers or large switch
statements. But they should only be used when they really are useful.

In particular, you want to avoid unnecessary layers of abstractions in
embedded systems. It can be tempting to make things like a "GPIO"
class, and then have subclasses like "ActiveLowGPIO" and end up with a
really nice, flexible and extendible hierarchy to isolate application
code from the details of controlling pins on the microcontroller. But
then you find that activating your pin leads to virtual function
lookups, and large amounts of code and runtime when you actually just
wanted a single assembly instruction. Compile-time polymorphism and
careful use of templates with inline functions can give you better results.
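A minimal sketch of what that looks like (register address and names are invented for illustration): the pin is a type, so setting it inlines down to a register write, with no vtable in sight:

#include <cstdint>

template< std::uintptr_t reg_address, unsigned bit >
struct gpio_pin {
    static void set( bool v ){
        volatile std::uint32_t * reg =
            reinterpret_cast< volatile std::uint32_t * >( reg_address );
        if( v ) *reg |= ( 1u << bit ); else *reg &= ~( 1u << bit );
    }
};

typedef gpio_pin< 0x50000000, 3 > status_led; // hypothetical address
//status_led::set( true ); // compiles to one read-modify-write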
Wouter van Ooijen
2014-10-07 13:04:38 UTC
Permalink
Post by David Brown
<snip>
Post by Wouter van Ooijen
Post by David Brown
There should be no use of exceptions or RTTI ...
I agree
Post by David Brown
There should be no reliance on the heap. ...
I agree halfway: new/new[] is ok, but delete/delete[] is not. Hence the
only practical use of new is in the startup.
I halfway agree with your halfway agreement...
Without delete/delete[]/free, you don't get memory fragmentation or
non-deterministic calls, and once you've finished your startup you
either have enough memory, or you don't. Your malloc (underlying the
new/new[]) just treats memory as a simple stack.
However, static allocation is still more amenable to compile-time
checking, and still leads to smaller and faster code (especially on
chips that have poor address registers).
I fully agree that static (or stack) allocation is by far to be
preferred, if only because potential problems show up at link time.
Post by David Brown
Post by Wouter van Ooijen
Post by David Brown
Virtual functions, and polymorphism in general, should be avoided ...
Here I am not so sure. run-time polymorphism can and probably should be
used when it offers an advantage (code size!). compile-time polymorphism
is OK.
Yes, compile-time polymorphism is fine. Run-time polymorphism and
virtual functions /can/ be a good thing, and often compare well to
alternatives such as tables of function pointers or large switch
statements. But they should only be used when they really are useful.
In particular, you want to avoid unnecessary layers of abstractions in
embedded systems. It can be tempting to make things like a "GPIO"
class, and then have subclasses like "ActiveLowGPIO" and end up with a
really nice, flexible and extendible hierarchy to isolate application
code from the details of controlling pins on the microcontroller. But
then you find that activating your pin leads to virtual function
lookups, and large amounts of code and runtime when you actually just
wanted a single assembly instruction. Compile-time polymorphism and
careful use of templates with inline functions can give you better results.
been there, done that :)

http://www.embedded.com/design/programming-languages-and-tools/4428377/Objects--No--thanks---Using-C--effectively-on-small-systems-

(I'll give a talk about this at meeting C++ in Berlin)

Wouter
David Brown
2014-10-07 13:51:32 UTC
Permalink
Post by Wouter van Ooijen
been there, done that :)
http://www.embedded.com/design/programming-languages-and-tools/4428377/Objects--No--thanks---Using-C--effectively-on-small-systems-
There is a lot in that article that is /exactly/ what I meant, and it
will save me some time with the details. Thanks!
peter koch
2014-10-07 20:12:59 UTC
Permalink
Post by Wouter van Ooijen
Post by David Brown
Yes, compile-time polymorphism is fine. Run-time polymorphism and
virtual functions /can/ be a good thing, and often compare well to
alternatives such as tables of function pointers or large switch
statements. But they should only be used when they really are useful.
In particular, you want to avoid unnecessary layers of abstractions in
embedded systems. It can be tempting to make things like a "GPIO"
class, and then have subclasses like "ActiveLowGPIO" and end up with a
really nice, flexible and extendible hierarchy to isolate application
code from the details of controlling pins on the microcontroller. But
then you find that activating your pin leads to virtual function
lookups, and large amounts of code and runtime when you actually just
wanted a single assembly instruction. Compile-time polymorphism and
careful use of templates with inline functions can give you better results.
been there, done that :)
http://www.embedded.com/design/programming-languages-and-tools/4428377/Objects--No--thanks---Using-C--effectively-on-small-systems-
(I'll give a talk about this at meeting C++ in Berlin)
With all due respect, the article in that link gives an example that is very bad C++ code. A standard solution would not use inheritance - it would use a template resulting in more generic code without any virtual function call overhead.

/Peter
Wouter van Ooijen
2014-10-07 20:50:08 UTC
Permalink
Post by peter koch
Post by Wouter van Ooijen
Post by David Brown
Yes, compile-time polymorphism is fine. Run-time polymorphism and
virtual functions /can/ be a good thing, and often compare well to
alternatives such as tables of function pointers or large switch
statements. But they should only be used when they really are useful.
In particular, you want to avoid unnecessary layers of abstractions in
embedded systems. It can be tempting to make things like a "GPIO"
class, and then have subclasses like "ActiveLowGPIO" and end up with a
really nice, flexible and extendible hierarchy to isolate application
code from the details of controlling pins on the microcontroller. But
then you find that activating your pin leads to virtual function
lookups, and large amounts of code and runtime when you actually just
wanted a single assembly instruction. Compile-time polymorphism and
careful use of templates with inline functions can give you better results.
been there, done that :)
http://www.embedded.com/design/programming-languages-and-tools/4428377/Objects--No--thanks---Using-C--effectively-on-small-systems-
(I'll give a talk about this at meeting C++ in Berlin)
With all due respect, the article in that link gives an example that is very bad C++ code. A standard solution would not use inheritance - it would use a template resulting in more generic code without any virtual function call overhead.
Which example in the article do you refer to? The first C++ example is
meant to demonstrate that the 'standard' OO solution of classes and
virtual methods is indeed (in most circumstances) not a good idea on a
small chip. Are the last examples what you describe as 'the standard
solution'? If not, can you explain the difference?

Wouter
peter koch
2014-10-07 20:57:25 UTC
Permalink
Post by Wouter van Ooijen
Post by peter koch
Post by Wouter van Ooijen
Post by David Brown
Yes, compile-time polymorphism is fine. Run-time polymorphism and
virtual functions /can/ be a good thing, and often compare well to
alternatives such as tables of function pointers or large switch
statements. But they should only be used when they really are useful.
In particular, you want to avoid unnecessary layers of abstractions in
embedded systems. It can be tempting to make things like a "GPIO"
class, and then have subclasses like "ActiveLowGPIO" and end up with a
really nice, flexible and extendible hierarchy to isolate application
code from the details of controlling pins on the microcontroller. But
then you find that activating your pin leads to virtual function
lookups, and large amounts of code and runtime when you actually just
wanted a single assembly instruction. Compile-time polymorphism and
careful use of templates with inline functions can give you better results.
been there, done that :)
http://www.embedded.com/design/programming-languages-and-tools/4428377/Objects--No--thanks---Using-C--effectively-on-small-systems-
(I'll give a talk about this at meeting C++ in Berlin)
With all due respect, the article in that link gives an example that is very bad C++ code. A standard solution would not use inheritance - it would use a template resulting in more generic code without any virtual function call overhead.
Which example in the article do you refer to? The first C++ example is
meant to demonstrate that the 'standard' OO solution of classes and
virtual methods is indeed (in most circumstances) not a good idea on a
small chip. Are the last examples what you describe as 'the standard
solution', if not, can you explain the difference?
Wouter
Sorry - I did not realize that there was a page 2 on the link. The second solution is exactly what I meant by the correct C++ solution.

/Peter
Wouter van Ooijen
2014-10-08 05:37:38 UTC
Permalink
Post by peter koch
Post by Wouter van Ooijen
Post by peter koch
Post by Wouter van Ooijen
Post by David Brown
Yes, compile-time polymorphism is fine. Run-time polymorphism and
virtual functions /can/ be a good thing, and often compare well to
alternatives such as tables of function pointers or large switch
statements. But they should only be used when they really are useful.
In particular, you want to avoid unnecessary layers of abstractions in
embedded systems. It can be tempting to make things like a "GPIO"
class, and then have subclasses like "ActiveLowGPIO" and end up with a
really nice, flexible and extendible hierarchy to isolate application
code from the details of controlling pins on the microcontroller. But
then you find that activating your pin leads to virtual function
lookups, and large amounts of code and runtime when you actually just
wanted a single assembly instruction. Compile-time polymorphism and
careful use of templates with inline functions can give you better results.
been there, done that :)
http://www.embedded.com/design/programming-languages-and-tools/4428377/Objects--No--thanks---Using-C--effectively-on-small-systems-
(I'll give a talk about this at meeting C++ in Berlin)
With all due respect, the article in that link gives an example that is very bad C++ code. A standard solution would not use inheritance - it would use a template resulting in more generic code without any virtual function call overhead.
Which example in the article do you refer to? The first C++ example is
meant to demonstrate that the 'standard' OO solution of classes and
virtual methods is indeed (in most circumstances) not a good idea on a
small chip. Are the last examples what you describe as 'the standard
solution', if not, can you explain the difference?
Wouter
Sorry - I did not realize that there was a page 2 on the link. The second solution is exactly what I meant was the correct C++ solution.
OK. But a pity you did not have something else in mind - I was hoping
for yet another type of solution.

BTW are you aware of any existing library for small chips that takes
this approach?

Wouter
Richard Damon
2014-10-11 16:44:46 UTC
Permalink
Post by Wouter van Ooijen
OK. But a pity you did not have something else in mind - I was hoping
for yet another type of solution.
BTW are you aware of any existing library for small chips that takes
this approach?
Wouter
One comment: in one sense the 1st example stacks the deck
against C++ by making the program "more capable" than the C program,
in that the C program knows exactly what type of pin it is toggling,
while the C++ program can toggle ANY sort of I/O bit for which a suitable
class is defined.

If the example had been closer to this:

int main(){
lpc1114_gpio pin( 1, 0 );
for(;;){
pin.set( 1 );
delay();
pin.set( 0 );
delay();
}
}

I.e., by directly using the pin object in the code, and not through a
pointer to a generic base class, the code generated by the compiler can be
much closer to the original, since the compiler can bypass the virtual
call mechanism as it knows the real type of the object.

I find, at least in my own code, that by far most of the actual
references to I/O devices are done with known type objects or in
non-virtual functions that are part of the device definition, and thus
are no less efficient than the equivalent C code.
Wouter van Ooijen
2014-10-11 18:14:26 UTC
Permalink
Post by Richard Damon
Post by Wouter van Ooijen
OK. But a pity you did not have something else in mind - I was hoping
for yet another type of solution.
BTW are you aware of any existing library for small chips that takes
this approach?
Wouter
One comment, in one sense the 1st example tries to stack the deck
against C++ by making the program be "more capable" than the C program,
in that the C program knows exactly what type of pin it is toggling,
while the C++ program can toggle ANY sort of I/O bit with a suitable
class defined.
int main(){
lpc1114_gpio pin( 1, 0 );
for(;;){
pin.set( 1 );
delay();
pin.set( 0 );
delay();
}
}
I.E., directly using the pin object in the code, and not through a
pointer to a generic base class, the code generated by the compiler can
much closer to the original since the compiler can bypass the virtual
call mechanism as it knows the real type of the object.
Of course, but IMO that approach has little or no advantage over C in
abstraction power.
Post by Richard Damon
I find, at least in my own code, that by far most of the actual
references to I/O devices are done with known type objects or in
non-virtual functions that are part of the device definition, and thus
are no less efficient than the equivalent C code.
That is not my experience: a GPIO point can be a pin of the
microcontroller, but it can also be a pin of an I/O extender chip, or
either of such pins but inverted, etc.

What I wanted to prove in the article is that using C++ templates you
can have your cake and eat it: compile-time polymorphism, without the
run-time costs.

Wouter
Richard Damon
2014-10-11 21:40:16 UTC
Permalink
Post by Wouter van Ooijen
Post by Richard Damon
Post by Wouter van Ooijen
OK. But a pity you did not have something else in mind - I was hoping
for yet another type of solution.
BTW are you aware of any existing library for small chips that takes
this approach?
Wouter
One comment, in one sense the 1st example tries to stack the deck
against C++ by making the program be "more capable" than the C program,
in that the C program knows exactly what type of pin it is toggling,
while the C++ program can toggle ANY sort of I/O bit with a suitable
class defined.
int main(){
lpc1114_gpio pin( 1, 0 );
for(;;){
pin.set( 1 );
delay();
pin.set( 0 );
delay();
}
}
I.E., directly using the pin object in the code, and not through a
pointer to a generic base class, the code generated by the compiler can
much closer to the original since the compiler can bypass the virtual
call mechanism as it knows the real type of the object.
Of course, but IMO that approach has little or no advantage over C in
abstraction power.
Post by Richard Damon
I find, at least in my own code, that by far most of the actual
references to I/O devices are done with known type objects or in
non-virtual functions that are part of the device definition, and thus
are no less efficient than the equivalent C code.
That is not my experience: a GPIO point can be pin of the
microcontroller, but it can also be a pin of an I/O extender chip, or
either of such pins, but inverted, etc.
What I wanted to prove in the article is that using C++ templates you
can have your cake and eat it: compile-time polymorphism, without the
run-time costs.
Wouter
Where the advantage comes in is when you move the declaration of the pin to a
header file that defines your system hardware configuration (or even make it
a member of a class defining a higher-level device). By using
preprocessor "magic" or just editing the file, you can define the pin's
operation. When accessing the pin, the programmer doesn't need to
know the type of I/O pin being used; he can just use the generic
interface and operate on it. (The key here is that the compiler DOES
know, based on the declaration, what type of port it is, so it can generate
the efficient code.)
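A sketch of that configuration-header idea (file and type names are hypothetical): the concrete pin type and location live in one header, and the rest of the code only mentions the generic name:

// hw_config.h -- the one place that knows the real pin type and location
#include "lpc1114_gpio.h" // hypothetical device-specific header
static lpc1114_gpio status_led( 1, 0 ); // edit this line to re-wire the board

// application code elsewhere just writes:
//   status_led.set( 1 ); // the compiler knows the real type: no virtual call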

I suppose the difference may be that I tend to write things that do
specific things to specific signals under specific conditions, and don't
have many cases of writing a program to toggle an arbitrary signal under
some condition.
Wouter van Ooijen
2014-10-12 07:42:35 UTC
Permalink
Post by Richard Damon
Post by Wouter van Ooijen
Post by Richard Damon
Post by Wouter van Ooijen
OK. But a pity you did not have something else in mind - I was hoping
for yet another type of solution.
BTW are you aware of any existing library for small chips that takes
this approach?
Wouter
One comment, in one sense the 1st example tries to stack the deck
against C++ by making the program be "more capable" than the C program,
in that the C program knows exactly what type of pin it is toggling,
while the C++ program can toggle ANY sort of I/O bit with a suitable
class defined.
int main(){
lpc1114_gpio pin( 1, 0 );
for(;;){
pin.set( 1 );
delay();
pin.set( 0 );
delay();
}
}
I.E., directly using the pin object in the code, and not through a
pointer to a generic base class, the code generated by the compiler can
much closer to the original since the compiler can bypass the virtual
call mechanism as it knows the real type of the object.
Of course, but IMO that approach has little or no advantage over C in
abstraction power.
Post by Richard Damon
I find, at least in my own code, that by far most of the actual
references to I/O devices are done with known type objects or in
non-virtual functions that are part of the device definition, and thus
are no less efficient than the equivalent C code.
That is not my experience: a GPIO point can be pin of the
microcontroller, but it can also be a pin of an I/O extender chip, or
either of such pins, but inverted, etc.
What I wanted to prove in the article is that using C++ templates you
can have your cake and eat it: compile-time polymorphism, without the
run-time costs.
Wouter
Where the advantage comes is now move the declaration of the pin to a
header file that defines your system hardware configuration (or even to
be a member of a class defining a higher level device). By using
preprocessor "magic" or just editing the file, you can define the pins
operation. When accessing the pin, the programmer there doesn't need to
know the type of I/O pin being used, he can just use the generic
interface and operate on it. (The key here is that the compiler DOES
know based on the declaration what type of port it is, so can generate
the efficient code).
I suppose the difference may be that I tend to write things that do
specific things to specific signals under specific signals, and don't
have many cases of writing a program to toggle an arbitrary signal under
some condition.
Indeed. If you never need to do something with 'abstract' pins, the C
approach is sufficient.

I want, for instance, to be able to write a bit-banged I2C master that uses
an abstract pin. In actual use that pin can be a regular input-output
pin, or an open-collector pin. The regular input-output pin needs to be
handled a bit differently (low => output and low, high => input). The I2C
code does not concern itself with such details, but the resulting
machine code is as efficient as if it fully knew them.

Or maybe I was in a funny mood, and the I2C pins were pins on an
MCP23017 I/O extender chip.

But I do agree: if you can write your code directly for the I/O pins
that you use, there is no advantage in the abstraction I describe.

Wouter
Richard Damon
2014-10-12 12:40:09 UTC
Permalink
Post by Wouter van Ooijen
Post by Richard Damon
Post by Wouter van Ooijen
Post by Richard Damon
Post by Wouter van Ooijen
OK. But a pity you did not have something else in mind - I was hoping
for yet another type of solution.
BTW are you aware of any existing library for small chips that takes
this approach?
Wouter
One comment, in one sense the 1st example tries to stack the deck
against C++ by making the program be "more capable" than the C program,
in that the C program knows exactly what type of pin it is toggling,
while the C++ program can toggle ANY sort of I/O bit with a suitable
class defined.
int main(){
lpc1114_gpio pin( 1, 0 );
for(;;){
pin.set( 1 );
delay();
pin.set( 0 );
delay();
}
}
I.E., directly using the pin object in the code, and not through a
pointer to a generic base class, the code generated by the compiler can
much closer to the original since the compiler can bypass the virtual
call mechanism as it knows the real type of the object.
Of course, but IMO that approach has little or no advantage over C in
abstraction power.
Post by Richard Damon
I find, at least in my own code, that by far most of the actual
references to I/O devices are done with known type objects or in
non-virtual functions that are part of the device definition, and thus
are no less efficient than the equivalent C code.
That is not my experience: a GPIO point can be pin of the
microcontroller, but it can also be a pin of an I/O extender chip, or
either of such pins, but inverted, etc.
What I wanted to prove in the article is that using C++ templates you
can have your cake and eat it: compile-time polymorphism, without the
run-time costs.
Wouter
Where the advantage comes is now move the declaration of the pin to a
header file that defines your system hardware configuration (or even to
be a member of a class defining a higher level device). By using
preprocessor "magic" or just editing the file, you can define the pins
operation. When accessing the pin, the programmer there doesn't need to
know the type of I/O pin being used, he can just use the generic
interface and operate on it. (The key here is that the compiler DOES
know based on the declaration what type of port it is, so can generate
the efficient code).
I suppose the difference may be that I tend to write things that do
specific things to specific signals under specific signals, and don't
have many cases of writing a program to toggle an arbitrary signal under
some condition.
Indeed. If you never need to do something with 'abstract' pins the C
approach is sufficient.
I want for instance be able to write a bit-banged I2C master, that uses
an abstract pin. In actual use that pin can be a regular input-output
pin, or an open-collector pin. The regular input-output pin needs to be
handled a bit different (low => output and low, high => input). The I2C
code does not concern itself with such details, but the resulting
machine code is as efficient as if it fully knew.
Or maybe I was in a funny mood, and the I2C pins were pins on an
MCP23017 I/0 extender chip.
But I do agree, if you can write your code directly for the I/O pins
that you use there is no advantage in the abstraction I describe.
Wouter
Actually, I find that insufficient, as the piece of code that manipulates the
pin then needs to change based on the pin type. With my method, there is
a single line of code, in a header that is visible to the code, that
automatically reconfigures the code that is providing the "higher level"
driver.

In my method, the I2C master interface that you describe would get its
pin definitions from a configuration include file, as opposed to having
some setup call build a pointer for them. This does mean I need to
duplicate the code if I want to build two bit-banged I2C ports in a
given application, or accept the added inefficiency of the virtual calls
(but for bit-banged I2C, is it really significant?)
Wouter van Ooijen
2014-10-12 14:52:40 UTC
Permalink
Post by Richard Damon
Post by Wouter van Ooijen
Post by Richard Damon
Post by Wouter van Ooijen
Post by Richard Damon
Post by Wouter van Ooijen
OK. But a pity you did not have something else in mind - I was hoping
for yet another type of solution.
BTW are you aware of any existing library for small chips that takes
this approach?
Wouter
One comment, in one sense the 1st example tries to stack the deck
against C++ by making the program be "more capable" than the C program,
in that the C program knows exactly what type of pin it is toggling,
while the C++ program can toggle ANY sort of I/O bit with a suitable
class defined.
int main(){
lpc1114_gpio pin( 1, 0 );
for(;;){
pin.set( 1 );
delay();
pin.set( 0 );
delay();
}
}
I.E., directly using the pin object in the code, and not through a
pointer to a generic base class, the code generated by the compiler can
much closer to the original since the compiler can bypass the virtual
call mechanism as it knows the real type of the object.
Of course, but IMO that approach has little or no advantage over C in
abstraction power.
Post by Richard Damon
I find, at least in my own code, that by far most of the actual
references to I/O devices are done with known type objects or in
non-virtual functions that are part of the device definition, and thus
are no less efficient than the equivalent C code.
That is not my experience: a GPIO point can be pin of the
microcontroller, but it can also be a pin of an I/O extender chip, or
either of such pins, but inverted, etc.
What I wanted to prove in the article is that using C++ templates you
can have your cake and eat it: compile-time polymorphism, without the
run-time costs.
Wouter
Where the advantage comes is now move the declaration of the pin to a
header file that defines your system hardware configuration (or even to
be a member of a class defining a higher level device). By using
preprocessor "magic" or just editing the file, you can define the pins
operation. When accessing the pin, the programmer there doesn't need to
know the type of I/O pin being used, he can just use the generic
interface and operate on it. (The key here is that the compiler DOES
know based on the declaration what type of port it is, so can generate
the efficient code).
I suppose the difference may be that I tend to write things that do
specific things to specific signals under specific signals, and don't
have many cases of writing a program to toggle an arbitrary signal under
some condition.
Indeed. If you never need to do something with 'abstract' pins the C
approach is sufficient.
I want for instance be able to write a bit-banged I2C master, that uses
an abstract pin. In actual use that pin can be a regular input-output
pin, or an open-collector pin. The regular input-output pin needs to be
handled a bit different (low => output and low, high => input). The I2C
code does not concern itself with such details, but the resulting
machine code is as efficient as if it fully knew.
Or maybe I was in a funny mood, and the I2C pins were pins on an
MCP23017 I/0 extender chip.
But I do agree, if you can write your code directly for the I/O pins
that you use there is no advantage in the abstraction I describe.
Wouter
Actually, I find that insufficient, as the piece of code that manipulates the
pin then needs to change based on the pin type.
Yes and no: the i2c lib must of course state that it wants an
open-collector pin. But the actual 'conversions' of input-output and
open-collector to open-collector are written once, and are also used by
other protocols that need an open-collector pin, for instance the Dallas
1-wire interface. All in the name of 'write it once' or 'don't repeat
yourself'.

The relevant part of the i2c interface:

template<
   class arg_scl,
   class arg_sda
> class i2c_bus_master_bb_scl_sda {

   // use the pins in an appropriate way
   // (and assert that they can be used as such)
   typedef pin_oc_from< arg_scl > scl;
   typedef pin_oc_from< arg_sda > sda;

   ...
};

All code in the class template uses the scl and sda, not the arg_scl and
arg_sda.

The pin_oc_from class template is specialized for the 3 cases:
input-output pin, open-collector pin, and the default that generates an
appropriate compiler error message. The actual code:

// fallback: compile-time error
template<
   class unsupported,
   class dummy = void
>
struct pin_oc_from {
   static_assert(
      sizeof( unsupported ) == 0,
      "pin_oc_from<> requires "
      "a pin_oc, or pin_in_out"
   );
};

// from itself: delegate
template< class pin >
struct pin_oc_from <
   pin,
   typename pin::has_pin_oc
> :
   public pin_oc_archetype
{
   static void init(){ pin::init(); }
   static bool get(){ return pin::get(); }
   static void set( bool x ){ pin::set( x ); }
};

// from a pin_in_out
template< class pin >
struct pin_oc_from <
   pin,
   typename pin::has_pin_in_out
> :
   public pin_oc_archetype
{

   static void init(){
      pin::init();
   }

   static void set( bool x ){

      // to make a pin_in_out behave like a pin_oc
      if( x ){

         // make it float when it is set high
         pin::direction_set_input();

      } else {

         // make it output and low when it is set low
         pin::direction_set_output();
         pin::set( 0 );
      }
   }

   static bool get(){
      return pin::get();
   }

};
Post by Richard Damon
With my method, there is
a single line of code, in a header that is visible to the code that
automatically reconfigures the code that is providing the "higher level"
driver.
In my method, the I2C Master interface that you describe would get its
pin definitions from a configuration include file, as opposed to having
some setup call build a pointer for them. This does mean I need to
duplicate the code if I want to build two bit-banged I2C ports in a
given application, or accept the added inefficiency of the virtual calls
(but for bit-banged I2C, is it really significant?)
Maybe not for I2C, but you give up that possibility for IMO no gain. And
for (for instance) dallas 1-wire or SPI I can very well imagine more
than one bus of a certain type per system.

What you essentially do is compile-time polymorphism by modifying the
source file to include the right 'input' source file. One thing that I
certainly don't like about that is that it requires you to copy the
library file (bb i2c interface) to your project and modify it. (Or some
other scheme, where you #include something that is essentially a source
file, not a header). There are other points, for instance that the
interface between the pins and the i2c part is essentially by global
functions.

Note that my scheme is not limited to GPIO's. There are many other
abstractions that can be encapsulated similarly in static class templates.

Wouter
Richard Damon
2014-10-13 02:22:03 UTC
Permalink
Post by Wouter van Ooijen
Post by Richard Damon
With my method, there is
a single line of code, in a header that is visible to the code that
automatically reconfigures the code that is providing the "higher level"
driver.
In my method, the I2C Master interface that you describe would get its
pin definitions from a configuration include file, as opposed to having
some setup call build a pointer for them. This does mean I need to
duplicate the code if I want to build two bit-banged I2C ports in a
given application, or accept the added inefficiency of the virtual calls
(but for bit-banged I2C, is it really significant?)
Maybe not for I2C, but you give up that possibility for IMO no gain. And
for (for instance) dallas 1-wire or SPI I can very well imagine more
than one bus of a certain type per system.
What you essentially do is compile-time polymorphism by modifying the
source file to include the right 'input' source file. One thing that I
certainly don't like about that is that it requires you to copy the
library file (bb i2c interface) to your project and modify it. (Or some
other scheme, where you #include something that is essentially a source
file, not a header). There are other points, for instance that the
interface between the pins and the i2c part is essentially by global
functions.
Note that my scheme is not limited to GPIO's. There are many other
abstractions that can be encapsulated similarly in static class templates.
Wouter
While templates work here, I find that sometimes embedded compilers have
limitations in their template support.

What I do doesn't require editing any of the library files.

For example, if we have a file BBi2c.cpp providing the base source file
for this sort of driver, with a header file BBi2c.h to allow clients to
get definitions to call the driver, there will also be a BBi2c_conf.h
that is included by the library, but not provided by it; it is written
by the project and defines the pins to use (with fixed variable
names).

So the project defines BBi2c_conf.h, and adds BBi2c.cpp to the project
(it doesn't need to copy it).

One advantage of this is that I can use a similar method if I have to
use C because I don't have a supported C++ compiler for the system.
Wouter van Ooijen
2014-10-13 06:42:54 UTC
Permalink
Post by Richard Damon
Post by Wouter van Ooijen
Post by Richard Damon
With my method, there is
a single line of code, in a header that is visible to the code that
automatically reconfigures the code that is providing the "higher level"
driver.
In my method, the I2C Master interface that you describe would get its
pin definitions from a configuration include file, as opposed to having
some setup call build a pointer for them. This does mean I need to
duplicate the code if I want to build two bit-banged I2C ports in a
given application, or accept the added inefficiency of the virtual calls
(but for bit-banged I2C, is it really significant?)
Maybe not for I2C, but you give up that possibility for IMO no gain. And
for (for instance) dallas 1-wire or SPI I can very well imagine more
than one bus of a certain type per system.
What you essentially do is compile-time polymorphism by modifying the
source file to include the right 'input' source file. One thing that I
certainly don't like about that is that it requires you to copy the
library file (bb i2c interface) to your project and modify it. (Or some
other scheme, where you #include something that is essentially a source
file, not a header). There are other points, for instance that the
interface between the pins and the i2c part is essentially by global
functions.
Note that my scheme is not limited to GPIO's. There are many other
abstractions that can be encapsulated similarly in static class templates.
Wouter
While templates work here, I find that sometimes embedded compilers have
limitations in their template support.
That's a valid argument, but it is an argument against those compilers
rather than against the technique I use.
Post by Richard Damon
What I do doesn't require editing any of the library files.
For example, if we have a file BBi2c.cpp providing the base source file
for this sort of driver, with a header file BBi2c.h to allow clients to
get definitions to call the driver, there will also be a BBi2c_conf.h
that is included by the library, but not provided by it; it is written
by the project and defines the pins to use (with fixed variable
names).
Ah, the 'callback-include' mechanism. I admit it is less dirty than the
two alternatives I mentioned, but I still don't like it one bit. (But I
admit I used it in the libraries for a compiler I once wrote.)
Post by Richard Damon
One advantage of this is that I can use a similar method if I have to
use C because I don't have a supported C++ compiler for the system.
With that reasoning you should not use any C++, correct?

Wouter

Scott Lurndal
2014-10-07 13:58:37 UTC
Permalink
Post by Ronald
<snip>
Post by Victor Bazarov
Post by David Brown
There should be no use of exceptions or RTTI ...
I agree
Post by David Brown
There should be no reliance on the heap. ...
I agree halfway: new/new[] is ok, but delete/delete[] is not. Hence the
only practical use of new is at startup.
I halfway agree with your halfway agreement...
Without delete/delete[]/free, you don't get memory fragmentation or
non-deterministic calls, and once you've finished your startup you
either have enough memory, or you don't. Your malloc (underlying the
new/new[]) just treats memory as a simple stack.
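A minimal sketch of such a startup-only "stack" allocator, assuming a
statically reserved arena (the arena size and the 8-byte alignment are
illustrative):

#include <cstddef>

static unsigned char arena[ 16 * 1024 ];
static std::size_t arena_used = 0;

// allocation only moves a pointer forward; there is no free(),
// hence no fragmentation and no non-deterministic calls
void * startup_alloc( std::size_t n ){
   n = ( n + 7 ) & ~std::size_t( 7 );    // keep 8-byte alignment
   if( arena_used + n > sizeof( arena ) ){
      return 0;                          // not enough memory at startup
   }
   void * p = arena + arena_used;
   arena_used += n;
   return p;
}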
I've worked on two large-scale (hundreds of processors) operating systems
written in C++, and written two hypervisors in C++.

While we did not implement a heap allocator per se, and the global
new and delete operators had function definitions that invoked a
kernel panic, we did overload the new and delete operators for classes
implementing objects with dynamic lifetimes, using a pool-based allocator
to allocate from. The pool-based allocator invoked the OS page-allocator
for backing store.
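A minimal sketch of that pattern, a class-specific operator new/delete
backed by a fixed pool; the pool type, the class name and the sizes are
illustrative, not the actual hypervisor code:

#include <cstddef>
#include <cstdlib>
#include <new>

// fixed-size pool with an intrusive free list; all storage up front
template< class T, std::size_t N >
class pool {
   union slot {
      slot * next;
      alignas( T ) unsigned char raw[ sizeof( T ) ];
   };
   slot storage[ N ];
   slot * free_list;
public:
   pool() : free_list( &storage[ 0 ] ){
      for( std::size_t i = 0; i + 1 < N; ++i ){
         storage[ i ].next = &storage[ i + 1 ];
      }
      storage[ N - 1 ].next = 0;
   }
   void * alloc(){
      slot * s = free_list;
      if( s ){ free_list = s->next; }
      return s;                          // 0 means the pool is exhausted
   }
   void free( void * p ){
      slot * s = static_cast< slot * >( p );
      s->next = free_list;
      free_list = s;
   }
};

class work_item {                        // illustrative pooled class
public:
   static pool< work_item, 64 > the_pool;
   static void * operator new( std::size_t ){
      void * p = the_pool.alloc();
      if( !p ) std::abort();             // a real system would panic here
      return p;
   }
   static void operator delete( void * p ){ the_pool.free( p ); }
};
pool< work_item, 64 > work_item::the_pool;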

David's remaining points (no RTTI, no exceptions, limited multiple inheritance[*])
remain.

[*] It was ok to inherit from multiple abstract (pure virtual) classes, but only
one concrete or partially virtual class.
Post by Ronald
Yes, compile-time polymorphism is fine. Run-time polymorphism and
virtual functions /can/ be a good thing, and often compare well to
alternatives such as tables of function pointers or large switch
statements. But they should only be used when they really are useful.
Implementing interfaces (in the Java sense) is the prime use for
virtual functions in real-time code.

For example:

/**
* Pure virtual interface class for dispatchable work items.
*
* A class that needs to schedule a work task for the next available
* dispatching opportunity (which come in two flavors: next core idle
* or next guest intercept) will implement this interface and provide
* a ::do_work function to handle the work item.
*/
class c_worker {
public:

/**
* Function called when the dispatcher schedules a work item.
*
* @param arg1 An opaque argument.
* @param arg2 An opaque argument.
* @param arg3 An opaque argument.
* @param arg4 An opaque argument.
*/
virtual void do_work(void *arg1, void *arg2, void *arg3, void *arg4) = 0;
virtual ~c_worker() {};
};

/**
* Idle core entry point. Wait for work and dispatch as necessary.
*/
void
c_dispatcher::idle(void)
{
while (true) {
bool istate;
s_workitem *wip = NULL;

istate = d_lock.lock_noint();
if (!d_idlelist.is_empty()) {
wip = (s_workitem *)d_idlelist.flink();
wip->remove();
d_state = WORKING;
} else {
if (!d_interceptlist.is_empty()) {
wip = (s_workitem *)d_interceptlist.flink();
wip->remove();
d_state = WORKING;
} else {
d_idle.init();
d_state = IDLE;
}
}
d_lock.unlock_noint(istate);
if (wip != NULL) {
void *arg1 = wip->wi_arg1;
void *arg2 = wip->wi_arg2;
void *arg3 = wip->wi_arg3;
void *arg4 = wip->wi_arg4;
c_worker *wp = wip->wi_worker;

d_workerpool.free(wip);

wp->do_work(arg1, arg2, arg3, arg4);
continue;
}
d_idle.wait();
}
}

/**
* This class manages IDE and ATAPI devices.
*/
class c_ide: public c_worker {

...
/**
* Queued bootstrap task for IDE subsystem initialization. Queued to
* the bootstrap core during node initialization. Responsible for
* identification and initialization of any IDE or ATAPI devices.
*
* @param arg1 Ignored
* @param arg2 Ignored
* @param arg3 Ignored
* @param arg4 Ignored
*/
void
c_ide::do_work(void *arg1, void *arg2, void *arg3, void *arg4)
{
i_hda.init("hda");
i_hdb.init("hdb");
i_hdc.init("hdc");
i_hdd.init("hdd");

debugger.register_command("ide_read",
"ide_read [hda|hdb|hdc|hdd] sector-number",
&read);
}
David Brown
2014-10-07 12:21:50 UTC
Permalink
Post by c***@gmail.com
Post by David Brown
Post by c***@gmail.com
Post by Dombo
Post by c***@gmail.com
I think that small micro-controllers, that is computers that
address no more than 64 KB of code, are still used, and that
for those the C++ language is not actually used and should
not be used.
The Arduino people felt differently; few people realize that
the Sketch programming language used to program a lowly 8-bit
Atmel AVR micro-controller with significantly less memory than
64KB is actually C++. I wouldn't say that that is the best
example of C++ on small micro-controllers, but I see few
problems with judicious use of C++ on those other than that the
benefits of C++ may be less significant for small programs.
According to the TIOBE index (
http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html
), there are more than three times as many C programmers as C++
programmers. What kind of software do they develop, if even
lowly 8-bit micro-controllers run C++ software? Here I don't want
to speak about which language is better, but I heard that C++ is
much less used than C (or assembly) language for applications
that need to keep code under 64 KB. However, as I now see so
much interest, I will try to take those platforms into account,
but without removing the requirement for some C++11 conformance.
As others have noted, TIOBE is /completely/ pointless as a judge of
how much something is used.
I am eager to know what your more reliable sources are regarding the
use of programming languages and programming tools.
Me too :-)

There are various surveys done that give some indication. But TIOBE
makes ratings based on the number of questions or searches on particular
topics, which has very poor correlation with the actual usage. When
there is a trendy new language, you will get lots of hits even though
few people actually use it - the people who have used C for the last ten
years will continue to do so without searching google every time they
want to use a new function, and without being noticed by TIOBE.
Post by c***@gmail.com
Post by David Brown
There are massive amounts of C code in common use on all sorts of
platforms, and much of it is under continuous development.
I agree.
Post by David Brown
Regarding embedded systems, there is no magical "64KB" boundary as
you seem to think.
There is a magical boundary, that is the fact that a 16-bit pointer
can address only 65536 different memory locations. Many processors
have machine instructions containing 16-bit pointers, for example
Zilog Z80 and Intel 8086.
I have used a /lot/ of 8-bit, 16-bit and 32-bit microcontrollers. Yes,
a 16-bit pointer alone can only address 64 KB. But there are plenty of
8-bit and 16-bit microcontrollers that address more than 64 KB using
various methods, and there are plenty of 32-bit microcontrollers with
far less than 64 KB memory (flash and ram).

There are 8051/8052 devices that can address megabytes of memory - you
would have to be seriously masochistic to program them in "real" C++
(i.e., using heaps, exceptions, polymorphism, etc.). There are Cortex
M3 devices with 16K flash that can handle these things perfectly well.

That is why it is better to think in terms of "small" or "resource
limited" microcontrollers, rather than giving an artificial boundary
such as 64 KB code space. And since there are no clear quantitative
boundaries, it is better to talk about "small" and "big" systems, unless
of course you have specific size data to deal with.

And for such small systems, there is often a limited - but very useful
and powerful - subset of C++ that works well. I am very glad that you
are thinking of such systems in your library, but I would like you to be
aware of what that really means. For example, compact code size is
important, but not nearly as important as avoiding exceptions or dynamic
memory.
Post by c***@gmail.com
Post by David Brown
There is, however, a stronger bias towards C rather than C++ as
systems get smaller. At the bottom end, small microcontrollers
often have very limited cpus - they are barely suitable for C
programming, never mind C++ style programming. Toolchain vendors
for such devices are limited, and their tools are limited - often
there simply are no C++ compilers available.
I agree.
Post by David Brown
For bigger processors, C is still the preferred choice for a lot
of embedded programming - and when C++ is used, it is often used in
a way that differs significantly from desktop or "big system" C++
programming.
I agree.
Post by David Brown
In particular, in small embedded systems there is an emphasis on
code size (thus one avoids large libraries), static behaviour
(heaps, dynamic memory, virtual functions, etc., are banned or
discouraged), clear code flow (so you avoid exceptions), and code
correctness (this also means knowing all your code, and therefore
keeping source sizes to the minimum).
I agree.
Post by David Brown
I think, however, there is a trend towards more C++ even in small
systems (just as assembly has been mostly pushed out in favour of
C). Part of this is that "small" microcontrollers have been getting
"bigger" (in particular, Cortex M cores have pushed out a lot of
8-bit cores). Part of this is that the tools are getting better,
and part of it is that the language is getting better (C++11 has a
lot of improvements).
I agree, but actually I don't know how much of C++11 is supported by
embedded systems development tools.
gcc is far and away the most popular C++ compiler for embedded systems,
and has had solid C++11 support for a good while. Of course, in the
embedded world it is common to have to use older tools, or more limited
tools, especially for older devices. But you are writing a new library,
for use in new code and new systems - if C++11 makes your code better or
easier to use (such as by using "auto", explicit conversion operators,
and inferred return types), then use them. Most of your target audience
in the embedded world will be using Cortex M devices with a fairly
recent gcc (or llvm/clang, as ARM now prefers) and will have good C++11
support. Some potential users will miss out - but you can't please
everyone, and C++11 gives many advantages.
Post by c***@gmail.com
Post by David Brown
There should be no use of exceptions or RTTI ... There should be no
reliance on the heap. ... Virtual functions, and polymorphism in
general, should be avoided ...
I developed my library targeted at non-real-time systems with more
than 1 MB of code space (I prefer to use numbers instead of generic
phrases such as "big memory"), but as I see that it is considered more
useful for small embedded systems, I am going to change it,
re-targeting it also for small-memory real-time systems.
-- Carlo Milanesi
Scott Lurndal
2014-10-07 13:43:49 UTC
Permalink
Post by David Brown
As others have noted, TIOBE is /completely/ pointless as a judge of how
much something is used.
I am eager to know what your more reliable sources are regarding the
use of programming languages and programming tools.
Noting that TIOBE is useless doesn't imply that there is some other
source with better data. It is unlikely that any source has valid
data regarding the worldwide use of compiled languages.
Post by David Brown
Regarding embedded systems, there is no magical "64KB" boundary as you
seem to think.
There is a magical boundary, that is the fact that a 16-bit pointer can
address only 65536 different memory locations. Many processors have
machine instructions containing 16-bit pointers, for example Zilog Z80
and Intel 8086.
Both of which have been obsolete for over two decades. ARM A-profile
processors now support 64-bit code, and I would expect that much like
Atom, 64-bit support will migrate into some ARM M-profile parts in
the future.
Post by David Brown
I think, however, there is a trend towards more C++ even in small
systems (just as assembly has been mostly pushed out in favour of C).
Part of this is that "small" microcontrollers have been getting "bigger"
(in particular, Cortex M cores have pushed out a lot of 8-bit cores).
Part of this is that the tools are getting better, and part of it is
that the language is getting better (C++11 has a lot of improvements).
I agree, but actually I don't know how much of C++11 is supported by
embedded systems development tools.
Very little. Many projects are limited by validation cycles on the
toolsets, and compatibility with proprietary binary libraries. In many
cases, this limits them to using C++98 features or earlier.

scott
David Brown
2014-10-07 15:02:37 UTC
Permalink
Post by Scott Lurndal
Post by David Brown
Regarding embedded systems, there is no magical "64KB" boundary as you
seem to think.
There is a magical boundary, that is the fact that a 16-bit pointer can
address only 65536 different memory locations. Many processors have
machine instructions containing 16-bit pointers, for example Zilog Z80
and Intel 8086.
Both of which have been obsolete for over two decades.
Direct descendants of the Z80 were used in embedded systems long after
that - I believe it is only a few years since they went out of
production. The 8051 core has been "obsolete" for over three decades,
in that there have been other cpu cores available that are better in
almost every way - and yet there are still new devices being made with
(roughly) 8051 cores in them.
Post by Scott Lurndal
ARM A-profile
processors now support 64-bit code, and I would expect that much like
Atom, 64-bit support will migrate into some ARM M-profile parts in
the future.
It is highly unlikely that real 64-bit support will make it to Cortex M
devices for many years - if at all. 64-bit simply has nothing to offer
such devices. 64-bit floating point, and vector processing SIMD with
64-bit (or more) lumps will probably turn up, but you do not need more
than 32-bit direct addressing on a microcontroller.
Post by Scott Lurndal
Post by David Brown
I think, however, there is a trend towards more C++ even in small
systems (just as assembly has been mostly pushed out in favour of C).
Part of this is that "small" microcontrollers have been getting "bigger"
(in particular, Cortex M cores have pushed out a lot of 8-bit cores).
Part of this is that the tools are getting better, and part of it is
that the language is getting better (C++11 has a lot of improvements).
I agree, but actually I don't know how much of C++11 is supported by
embedded systems development tools.
Very little. Many projects are limited by validation cycles on the
toolsets, and compatibility with proprietary binary libraries. In many
cases, this limits them to using C++98 features or earlier.
Most projects (but certainly not all) in these categories are limited to
C - and some of these are stuck at ANSI/C90. But a large amount of
embedded development for new projects is done using more modern tools,
dominated by gcc, with support for C++11 mostly complete since about
version 4.7. clang/llvm is an alternative choice, which also has
excellent C++11 support.

The days of expensive, slow-update-cycle proprietary embedded toolchains
are numbered. They will be slow to die out in certain niche markets -
in particular, aeronautic and automotive industries are very keen on
them. This is not because they produce better code, have more features,
or have fewer bugs - but their price tag and validation certificates
give their users certain legal protection. But for more mainstream
embedded development, such tools are increasingly seen as old-fashioned
and limited, and it is harder and harder for the vendors to compete with
free or low-priced gcc and clang toolchains that generate better code,
support newer standards, and have more features.

You can see this with ARM's own toolchains. Originally, they made their
own compilers. Then as the market for expensive toolchains shrunk and
the cost of keeping up-to-date increased, they bought Keil and dropped
their own compiler. Now they have dropped Keil's compiler too, and are
basing their future toolchains on clang/llvm.
Juha Nieminen
2014-10-08 07:44:03 UTC
Permalink
Post by David Brown
For bigger processors, C is still the preferred choice for a lot of
embedded programming - and when C++ is used, it is often used in a way
that differs significantly from desktop or "big system" C++ programming.
In particular, in small embedded systems there is an emphasis on code
size (thus one avoids large libraries), static behaviour (heaps, dynamic
memory, virtual functions, etc., are banned or discouraged), clear code
flow (so you avoid exceptions), and code correctness (this also means
knowing all your code, and therefore keeping source sizes to the minimum).
I actually fail to see any relevant difference to my style of C++
programming.

If I can easily avoid allocating memory dynamically, I do so (because
dynamic memory allocation is awfully slow in languages that use the
libc allocator). I never use virtual functions just for the sake of
using them (I don't have a problem in using them if they are the best
solution to the problem at hand, but I don't just throw 'virtual'
uselessly there just to make it look more C++'ish). If you are not
programming in a manner that code flow is clear, then you are doing
it wrong, no matter what the target system is. And code correctness?
I thought that was a given. Or are there really C++ programmers who
don't care for code correctness?

I know that a lot of misguided C++ programmers will just eg. throw
dynamic data containers into situations that really don't need them
at all (such as using std::vector for a small array that always has
a fixed size known at compile time), but I am savvier than that.
It's actually surprising how much can be done with no, or minimal,
dynamic memory allocation ("minimal" in the sense of how many
'new' calls are done.)

David Brown
2014-10-08 09:07:36 UTC
Permalink
Post by Juha Nieminen
Post by David Brown
For bigger processors, C is still the preferred choice for a lot of
embedded programming - and when C++ is used, it is often used in a way
that differs significantly from desktop or "big system" C++ programming.
In particular, in small embedded systems there is an emphasis on code
size (thus one avoids large libraries), static behaviour (heaps, dynamic
memory, virtual functions, etc., are banned or discouraged), clear code
flow (so you avoid exceptions), and code correctness (this also means
knowing all your code, and therefore keeping source sizes to the minimum).
I actually fail to see any relevant difference to my style of C++
programming.
I get what you are saying. Different people have different styles and
habits, so the difference between "small systems C++" and "big systems
C++" is going to vary significantly. I'm only talking about general
guidelines and priorities, not fixed rules.
Post by Juha Nieminen
If I can easily avoid allocating memory dynamically, I do so (because
dynamic memory allocation is awfully slow in languages that use the
libc allocator).
Many people /do/ use dynamic memory unnecessarily, such as by using
std::vector when std::array (or a plain C array) would do the job just
as well. Many people have habits of using "new" to create objects when
it would be possible to have them on the stack. Sometimes it is a
matter of emphasising flexibility over the disadvantages of dynamic
memory, other times it is just habit, style, emphasis of development
time over runtime, or even laziness. Certainly it is common to view
dynamic memory on big systems as almost free - both in terms of speed
and quantity. Priorities /should/ be different here between "big
systems" and "small systems".
Post by Juha Nieminen
I never use virtual functions just for the sake of
using them (I don't have a problem in using them if they are the best
solution to the problem at hand, but I don't just throw 'virtual'
uselessly there just to make it look more C++'ish).
Again, many people do use virtual functions when not strictly necessary,
perhaps with an aim to making their classes more flexible.
Post by Juha Nieminen
If you are not
programming in a manner that code flow is clear, then you are doing
it wrong, no matter what the target system is. And code correctness?
I thought that was a given. Or are there really C++ programmers who
don't care for code correctness?
Yes, there are really programmers of all sorts who are not particularly
concerned about code correctness. You may have noticed that some
programs are shipped with bugs in them? Those programs are not correct.

"Code correctness" is more than just "write something that makes sense,
test and see that it works". It is a whole range of ideas, including
formal test suites, code reviews, mathematical proof of correctness,
advanced static checkers, run-time checkers, coding styles, development
strategies, etc. At the top end of the scale, where people write code
for things like flight control systems, development teams can be happy
with average coding rates of a couple of lines per day per programmer.
For most programming tasks, far higher coding rates are required.

I realise this is a generalisation, and all generalisations are false,
but small system embedded development usually places more emphasis on
code correctness than PC or "big system" programming.
Post by Juha Nieminen
I know that a lot of misguided C++ programmers will just eg. throw
dynamic data containers into situations that really don't need them
at all (such as using std::vector for a small array that always has
a fixed size known at compile time), but I am savvier than that.
As noted above, developers vary enormously. I am not trying to say that
/you/, as a "big systems" developer, don't care about code correctness
and are happy to use dynamic memory at all opportunities. I am merely
giving general patterns.

And I think that a lot of "big systems" programmers would benefit from
working for a while in a serious embedded development arena, to learn
from techniques that are more common there and take them back to their
Windows or *nix programming - the results would often be higher code
quality.
Post by Juha Nieminen
It's actually surprising how much can be done with no, or minimal,
dynamic memory allocation ("minimal" in the sense of how many
'new' calls are done.)
Jorgen Grahn
2014-10-09 13:24:37 UTC
Permalink
On Wed, 2014-10-08, David Brown wrote:
...
Post by David Brown
And I think that a lot of "big systems" programmers would benefit from
working for a while in a serious embedded development arena, to learn
from techniques that are more common there and take them back to their
Windows or *nix programming - the results would often be higher code
quality.
Any examples? I wonder if it's techniques I already use without
thinking about it ...

I've done a lot of embedded work, and so far I'm not very impressed.
Although you may be thinking of smaller systems than the ones I've
mostly worked with.

/Jorgen
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
Jorgen Grahn
2014-10-07 11:27:33 UTC
Permalink
Post by Dombo
Post by c***@gmail.com
I think that small micro-controllers, that is computers that address
no more than 64 KB of code, are still used, and that for those the C++
language is not actually used and should not be used.
The Arduino people felt differently; few people realize that the Sketch
programming language used to program a lowly 8-bit Atmel AVR
micro-controller with significantly less memory than 64KB is actually
C++. I wouldn't say that that is the best example of C++ on small
micro-controllers, but I see few problems with judicious use of C++ on
those other than that the benefits of C++ may be less significant for
small programs.
According to the TIOBE index [...]
Here we go again.
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
c***@gmail.com
2014-10-05 11:14:54 UTC
Permalink
Post by Öö Tiib
Post by c***@gmail.com
But as my library is still in development,
I accept suggestions for a renaming.
What I suggested is to use "relative measure" or "relative
quantity" typed out literally.
I will take that into account.
Post by Öö Tiib
Note that "one-dimensional"
feels irrelevant for temperature. In what problem domain we
have three-dimensional temperatures? A single value is
indeed technically an array of values with one element but
we usually do not emphasize on that.
Post by c***@gmail.com
Post by Öö Tiib
Post by c***@gmail.com
Cpp-Measures supports 2-dimensional and 3-dimensional measures,
with algebraic operations, dot product and cross product,
while I couldn't find such features in Boost.Units.
I suspect that the existing linear algebra libraries
(like Eigen, MTL4, boost.uBLAS or Armadillo) integrate
with neither your cpp-measures nor boost.units too well.
OTOH it is likely hard to beat the performance and quality of such
libraries.
So instead of building linear algebra into your dimensioned
values library it might be worth considering seeking
interoperability with one of those. Two good things that
play together often result with great outcome.
If you want to represent the position (X=10", Y=12")
of an object in a plane, and move it by (X=3", Y=8")
to reach position (X=13", Y=20"),
point2<inches> p(10, 12);
p += vect2<inches>(3, 8);
cout << p << endl; // It outputs: 13 20"
That is too far from math that is needed for dealing
with engines pulling around objects that are attached to
each other in real or emulated world (IOW scientific and
engineering applications).
My mileage (that is, in CAD-CAM applications) is that engineering software uses a lot of multi-dimensional quantities, typically as positions or movements in a plane or in space; but probably you are right that for scalar magnitudes, like time and temperature, and for those who work only with one-dimensional quantities, it is weird to emphasize that these quantities have only one dimension.

--
Carlo Milanesi
Öö Tiib
2014-10-05 12:48:05 UTC
Permalink
Post by c***@gmail.com
Post by Öö Tiib
Post by c***@gmail.com
If you want to represent the position (X=10", Y=12")
of an object in a plane, and move it by (X=3", Y=8")
to reach position (X=13", Y=20"),
point2<inches> p(10, 12);
p += vect2<inches>(3, 8);
cout << p << endl; // It outputs: 13 20"
That is too far from math that is needed for dealing
with engines pulling around objects that are attached to
each other in real or emulated world (IOW scientific and
engineering applications).
My mileage (that is, in CAD-CAM applications) is that engineering
software uses a lot of multi-dimensional quantities, typically as
positions or movements in a plane or in space; ...
Yes, and my point was that those multi-dimensionally meaningful
spins, growths, velocities and skews are typically fed into a
linear algebra library. The upside of linear algebra libraries is
that those do large amounts of calculations rather efficiently
and close to optimally. That makes it unlikely that you can
compete with those. The downside of such libraries is that those
leave the meaningfulness of such calculations up to the user, so a
matrix of doubles multiplied by a matrix of doubles is a matrix of
doubles. So my suggestion was to cooperate and to interface
with such libraries for all that math rather than try to have
the vectors and matrices and linear algebra reinvented inside
your library.
Post by c***@gmail.com
... but probably you are right that for scalar magnitudes,
like time and temperature, and for those who work only with
one-dimensional quantities, it is weird to emphasize that
these quantities have only one dimension.
That was only a minor nit, really, about naming that you brought
up yourself. People can put up with such quirks, and if they
don't like it they can hide it away with aliases either way.
The major goal is to provide value and convenience without
hindering efficiency ... all semantic issues are secondary.
c***@gmail.com
2014-10-06 19:39:59 UTC
Permalink
Post by Öö Tiib
Post by c***@gmail.com
Post by Öö Tiib
Post by c***@gmail.com
If you want to represent the position (X=10", Y=12")
of an object in a plane, and move it by (X=3", Y=8")
to reach position (X=13", Y=20"),
point2<inches> p(10, 12);
p += vect2<inches>(3, 8);
cout << p << endl; // It outputs: 13 20"
That is too far from math that is needed for dealing
with engines pulling around objects that are attached to
each other in real or emulated world (IOW scientific and
engineering applications).
My mileage (that is, in CAD-CAM applications) is that engineering
software uses a lot of multi-dimensional quantities, typically as
positions or movements in a plane or in space; ...
Yes, and my point was that those multi-dimensionally meaningful
spins, growths, velocities and skews are typically fed into a
linear algebra library. The upside of linear algebra libraries is
that those do large amounts of calculations rather efficiently
and close to optimally. That makes it unlikely that you can
compete with those. The downside of such libraries is that those
leave the meaningfulness of such calculations up to the user, so a
matrix of doubles multiplied by a matrix of doubles is a matrix of
doubles. So my suggestion was to cooperate and to interface
with such libraries for all that math rather than try to have
the vectors and matrices and linear algebra reinvented inside
your library.
I think that using my library I can write much more readable code than using a vector algebra library.
A problem I actually had when I was developing a typographic application was how to handle positions and displacements on a sheet.
Using my library, I can write the code cited at the beginning of this post.
Instead, using only one-dimensional measures and the Eigen vector library, I would have to write the following code:

// Use arrays to store coordinates.
point1<inches> p[2];
vect1<inches> v[2];
vect1<mm> vmm[2];

// Map values to Eigen.
Map<Vector2d> pe(&p[0].value());
Map<Vector2d> ve(&v[0].value());
Map<Vector2d> vmme(&vmm[0].value());

// Apply correct operation using Eigen.
pe += ve;

// Apply incorrect operation using Eigen.
pe += pe; // position += position

// Apply incorrect operation using Eigen.
pe += vmme; // inches += mm

// Output one dimension at a time with its unit.
cout << p[0] << ", " << p[1] << endl;

// Output the whole position, but without the unit.
cout << pe << endl; // It outputs: 13 20

I think that for such kinds of tasks,
using 2D or 3D unit-decorated variables is invaluable.

--
Carlo Milanesi
Öö Tiib
2014-10-07 00:42:14 UTC
Permalink
Post by c***@gmail.com
Post by Öö Tiib
Post by c***@gmail.com
Post by Öö Tiib
Post by c***@gmail.com
If you want to represent the position (X=10", Y=12")
of an object in a plane, and move it by (X=3", Y=8")
to reach position (X=13", Y=20"),
point2<inches> p(10, 12);
p += vect2<inches>(3, 8);
cout << p << endl; // It outputs: 13 20"
That is too far from math that is needed for dealing
with engines pulling around objects that are attached to
each other in real or emulated world (IOW scientific and
engineering applications).
My mileage (that is, in CAD-CAM applications) is that engineering
software uses a lot of multi-dimensional quantities, typically as
positions or movements in a plane or in space; ...
Yes, and my point was that those multi-dimensionally meaningful
spins, growths, velocities and skews are typically fed into a
linear algebra library. The upside of linear algebra libraries is
that those do large amounts of calculations rather efficiently
and close to optimally. That makes it unlikely that you can
compete with those. The downside of such libraries is that those
leave the meaningfulness of such calculations up to the user, so a
matrix of doubles multiplied by a matrix of doubles is a matrix of
doubles. So my suggestion was to cooperate and to interface
with such libraries for all that math rather than try to have
the vectors and matrices and linear algebra reinvented inside
your library.
I think that using my library I can write much more readable code
than using a vector algebra library.
Indeed, when you need to add two vectors. We do not use a linear
algebra library to add two vectors or a few scalar values.
We use it for complex linear algebra and geometry. The more
complex it gets, the more unfortunate it is that linear algebra
libraries do not care about units. So if we need to rotate or
to translate your inches, then we have to part with unit-safety
in the most complex parts of our calculations.
Luca Risolia
2014-10-05 15:14:37 UTC
Permalink
Post by c***@gmail.com
I feel that, after you have learned that "point1" means
"one-dimension absolute measure" and "vect1" means
"one-dimension relative measure", the latter expression
is more understandable than the former one.
But as my library is still in development,
I accept suggestions for a renaming.
To make things more readable, I suggest that you also provide one
factory for all the points and one factory for all the vect's, which
return the right type according to the number of passed arguments, for
example:

auto x = make_vect<meters>(0, 0); // vect2
auto y = make_vect<meters>(0, 0, 0); // vect3

auto p = make_point<meters>(0, 0); // point2
auto q = make_point<meters>(0, 0, 0); // point3
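A sketch of how such factories could be written with C++11 variadic
templates, assuming the vect2/vect3 (and point2/point3) templates
discussed above; this is a suggestion, not part of the library:

#include <type_traits>

template< class Unit, class... Values >
auto make_vect( Values... values )
   -> typename std::conditional<
      sizeof...( Values ) == 2,
      vect2< Unit >,
      vect3< Unit > >::type
{
   static_assert(
      sizeof...( Values ) == 2 || sizeof...( Values ) == 3,
      "make_vect needs 2 or 3 coordinates" );
   typedef typename std::conditional<
      sizeof...( Values ) == 2,
      vect2< Unit >,
      vect3< Unit > >::type result_type;
   return result_type( values... );
}

// make_point would be analogous, returning point2/point3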
Wouter van Ooijen
2014-10-01 06:25:52 UTC
Permalink
Post by c***@gmail.com
Post by Wouter van Ooijen
I hope you are aware that something like this exists in boost?
Sure, but I was not satisfied by that, and so I designed and
implemented my own library.
Post by Wouter van Ooijen
In what sense is your work different
or even better than the boost solution?
I don't really know Boost.Units,
but here are some apparent differences.
Eh, you really don't know B.U yet at the same time you are not satisfied
by it??
Post by c***@gmail.com
Boost.Units supports 12-year-old compilers, while Cpp-Measures
requires, and takes advantage of, the parts of C++11 available
in GCC and VC++ 2012.
Does that mean anything for me as user of either package?
Post by c***@gmail.com
Boost.Units includes many definitions of magnitudes and units
in the library, while Cpp-Measures requires that the application
programmer defines the needed magnitudes and the units,
although many examples will be available in documentation.
Boost, when expanded, is 500 megabytes large,
while Cpp-Measures is 200 KB of library code
for the application programmer,
and less than 1 MB with all tests and documentation.
It is not clear to me how to install only the Boost.Units library
and its dependencies instead of all Boost.
IMO that is totally irrelevant.
Post by c***@gmail.com
Application code using Cpp-Measures is less verbose.
For example, the following Boost.Units expression
quantity<absolute<fahrenheit::temperature> >
T1p(32.0*absolute<fahrenheit::temperature>());
corresponds to the following Cpp-Measures expression
point1<fahrenheit> T1p(32);
Now that IS relevant.
Post by c***@gmail.com
Application code using Cpp-Measures is compiled faster
and produces less machine code.
Faster compilation is not that interesting, but small code certainly is
(from my point of view: working with very small microcontrollers).
Post by c***@gmail.com
For example, the example provided by Boost.Units Quick Start page,
when compiled using GCC for Windows, with stripping and optimization,
takes 3 times the time to compile the equivalent code
using Cpp-Measures, and generates an executable 7 times as large.
But it makes me wonder what makes B.U generate more code: just
clumsiness, or does it do other (more) useful things?
Post by c***@gmail.com
Cpp-Measures supports 2-dimensional and 3-dimensional measures,
with algebraic operations, dot product and cross product,
while I couldn't find such features in Boost.Units.
Cpp-Measures supports signed and unsigned angles modulo one turn,
while I couldn't find such features in Boost.Units.
Post by Wouter van Ooijen
- do you differentiate between absolute and relative values (for
instance for time, but also for location/distance)
Yes, for example, a variable representing an absolute length measured
in inches is declared so:
point1<inches> variable_name;
while a variable representing a relative length measured in inches is
declared so:
vect1<inches> variable_name;
Nice. I assume that adding points is not possible, subtracting points
yields a vect, etc.?
Post by c***@gmail.com
Post by Wouter van Ooijen
- can you work with non-floating-point base types (especially
fixed-point types implemented on top of integers)?
float, double, long double, int, long, long long, complex<double>.
Not tested yet, and probably not working properly yet:
fixed-point, rational, multiple-precision,
and arbitrary-precision types.
That last sentence is interesting. After testing that many types
already, why would one of those yet-some-other-types not work out of the
box? That means that a numeric type that I write will likely suffer the
same fate.
Post by c***@gmail.com
Post by Wouter van Ooijen
- can you work with mixed base types (for instance fixed-point types
based on integers of various size and scaling)?
Automatic conversion between fixed-point types is not supported yet,
If you mean conversion by implicit conversion operators: keep that
unsupported!
Post by c***@gmail.com
auto a = vect1<inches,float>(1.2f) + vect1<inches,double>(2.3);
And you get that "a" is of type "vect1<inches,double>",
and it has value 3.5.
--
Carlo Milanesi
If I can find the time I will have a closer look at your work.

Wouter van Ooijen
Jorgen Grahn
2014-10-01 11:30:43 UTC
Permalink
Post by Wouter van Ooijen
Post by c***@gmail.com
Post by Wouter van Ooijen
I hope you are aware that something like this exists in boost?
Sure, but I was not satisfied by that, and so I designed and
implemented my own library.
Post by Wouter van Ooijen
In what sense is your work different
or even better than the boost solution?
I don't really know Boost.Units,
but here are some apparent differences.
Eh, you really don't know B.U yet at the same time you are not satisfied
by it??
There is no rule which says you have to know Foo in and out before
you're allowed to write Bar which does roughly the same thing.

(Not that I have any opinion on either Boost.Unit or this one.)

[snip more detailed criticism]

/Jorgen
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
Wouter van Ooijen
2014-10-01 16:40:50 UTC
Permalink
Post by Jorgen Grahn
Post by Wouter van Ooijen
Post by c***@gmail.com
Post by Wouter van Ooijen
I hope you are aware that something like this exists in boost?
Sure, but I was not satisfied by that, and so I designed and
implemented my own library.
Post by Wouter van Ooijen
In what sense is your work different
or even better than the boost solution?
I don't really know Boost.Units,
but here are some apparent differences.
Eh, you really don't know B.U yet at the same time you are not satisfied
by it??
There is no rule which says you have to know Foo in and out before
you're allowed to write Bar which does roughly the same thing.
But it would be peculiar if you state both
- that you don't know Bar
- that the reason for writing Foo is that you are dissatisfied with Bar

Wouter
Victor Bazarov
2014-10-01 17:23:18 UTC
Permalink
Post by Wouter van Ooijen
Post by Jorgen Grahn
Post by Wouter van Ooijen
Post by c***@gmail.com
Post by Wouter van Ooijen
I hope you are aware that something like this exists in boost?
Sure, but I was not satisfied by that, and so I designed and
implemented my own library.
Post by Wouter van Ooijen
In what sense is your work different
or even better than the boost solution?
I don't really know Boost.Units,
but here are some apparent differences.
Eh, you really don't know B.U yet at the same time you are not satisfied
by it??
There is no rule which says you have to know Foo in and out before
you're allowed to write Bar which does roughly the same thing.
But it would be peculiar if you state both
- that you don't know Bar
- that the reason for writing Foo is that you are dissatisfied with Bar
Not really. One of the reasons to write Foo is so you don't have to
spend time figuring out Bar. Simple explanation is that Bar is too
complex to easily figure out. Besides, "I don't know Bar" and "I don't
*really* know Bar" are two different statements, don't you think?

V
--
I do not respond to top-posted replies, please don't ask
Wouter van Ooijen
2014-10-01 17:44:13 UTC
Permalink
Post by Victor Bazarov
Post by Wouter van Ooijen
Post by Jorgen Grahn
Post by Wouter van Ooijen
Post by c***@gmail.com
Post by Wouter van Ooijen
I hope you are aware that something like this exists in boost?
Sure, but I was not satisfied by that, and so I designed and
implemented my own library.
Post by Wouter van Ooijen
In what sense is your work different
or even better than the boost solution?
I don't really know Boost.Units,
but here are some apparent differences.
Eh, you really don't know B.U yet at the same time you are not satisfied
by it??
There is no rule which says you have to know Foo in and out before
you're allowed to write Bar which does roughly the same thing.
But it would be peculiar if you state both
- that you don't know Bar
- that the reason for writing Foo is that you are dissatisfied with Bar
Not really. One of the reasons to write Foo is so you don't have to
spend time figuring out Bar. Simple explanation is that Bar is too
complex to easily figure out. Besides, "I don't know Bar" and "I don't
*really* know Bar" are two different statements, don't you think?
Dunno, I think that's beyond my grasp of English. But knowing (not just
guessing without looking) that Bar is too complex to use counts as
knowledge about Bar for me.

And I still think that it is peculiar to start designing something as
non-trivial as a units library without first looking at what is already
there (which btw is more than just boost.units).

Wouter
Post by Victor Bazarov
V
Victor Bazarov
2014-10-01 18:27:08 UTC
Permalink
Post by Wouter van Ooijen
Post by Victor Bazarov
Post by Wouter van Ooijen
Post by Jorgen Grahn
Post by Wouter van Ooijen
Post by c***@gmail.com
Post by Wouter van Ooijen
I hope you are aware that something like this exists in boost?
Sure, but I was not satisfied by that, and so I designed and
implemented my own library.
Post by Wouter van Ooijen
In what sense is your work different
or even better than the boost solution?
I don't really know Boost.Units,
but here are some apparent differences.
Eh, you really don't know B.U yet at the same time you are not satisfied
by it??
There is no rule which says you have to know Foo in and out before
you're allowed to write Bar which does roughly the same thing.
But it would be peculiar if you state both
- that you don't know Bar
- that the reason for writing Foo is that you are dissatisfied with Bar
Not really. One of the reasons to write Foo is so you don't have to
spend time figuring out Bar. Simple explanation is that Bar is too
complex to easily figure out. Besides, "I don't know Bar" and "I don't
*really* know Bar" are two different statements, don't you think?
Dunno, I think that's beyond my grasp of English. But knowing (not just
guessing without looking) that Bar is too complex to use counts as
knowledge about Bar for me.
Knowing about something and knowing something are two different things,
trust me. Also, knowledge that something ("Bar") is too complex does
not necessarily mean knowing it ("Bar") in depth (or, like some might
put it, "really knowing" it).
Post by Wouter van Ooijen
And I still think that it is peculiar to start designing something as
non-trivial as a units library without first looking at what is already
there (which btw is more than just boost.units).
Wouter
I agree with you that it is peculiar to start designing something
without looking at what is already there. I do not see how it can be
said about Carlo's work, however.

Seeing that it's not the first time you disclaim knowing enough English to
grasp some obvious and elementary stuff, I think it is peculiar that you
*judge* other posters' statements written in English without putting in
the effort to understand them well enough. Your alleged weak grasp of
English does not preclude you from denying the benefit of the doubt to others.

Carlo *evidently* looked at what was "already there" if you care to see
what he stated in his second post. He knows how to use at least some
mechanisms of Boost.Units. He gives measurements of the size of the
resulting executable and the time it took to compile... Does this not
present itself like "looking at what is already there"?

V
--
I do not respond to top-posted replies, please don't ask
Wouter van Ooijen
2014-10-01 19:11:40 UTC
Permalink
Post by Victor Bazarov
Post by Wouter van Ooijen
Post by Victor Bazarov
Post by Wouter van Ooijen
Post by Jorgen Grahn
Post by Wouter van Ooijen
Post by c***@gmail.com
Post by Wouter van Ooijen
I hope you are aware that something like this exists in boost?
Sure, but I was not satisfied by that, and so I designed and
implemented my own library.
Post by Wouter van Ooijen
In what sense is your work different
or even better than the boost solution?
I don't really know Boost.Units,
but here are some apparent differences.
Eh, you really don't know B.U yet at the same time you are not satisfied
by it??
There is no rule which says you have to know Foo in and out before
you're allowed to write Bar which does roughly the same thing.
But it would be peculiar if you state both
- that you don't know Bar
- that the reason for writing Foo is that you are dissatisfied with Bar
Not really. One of the reasons to write Foo is so you don't have to
spend time figuring out Bar. Simple explanation is that Bar is too
complex to easily figure out. Besides, "I don't know Bar" and "I don't
*really* know Bar" are two different statements, don't you think?
Dunno, I think that's beyond my grasp of English. But knowing (not just
guessing without looking) that Bar is too complex to use counts as
knowledge about Bar for me.
Knowing about something and knowing something are two different things,
trust me. Also, knowledge that something ("Bar") is too complex does
not necessarily mean knowing it ("Bar") in depth (or, like some might
put it, "really knowing" it).
Post by Wouter van Ooijen
And I still think that it is peculiar to start designing something
non-trivial as a units library without first looking at what is laready
there (which btw is more than just boost.units).
Wouter
I agree with you that it is peculiar to start designing something
without looking at what is already there. I do not see how it can be
said about Carlo's work, however.
Seeing that it's not the first time you disclaim knowing enough English to
I don't recall many instances of me doing that here, but you might have
a better memory.
Post by Victor Bazarov
grasp some obvious and elementary stuff, I think it is peculiar that you
*judge* other posters' statements written in English without putting in
I did not mean to judge, I meant to express my surprise at the sentence
I responded to. I think the fragment is
Post by Victor Bazarov
Sure, but I was not satisfied by that, and so I designed and
implemented my own library.
(snip)
I don't really know Boost.Units,
but here are some apparent differences.
Eh, you really don't know B.U yet at the same time you are not
satisfied by it??
For me (but note again that I am not a native English speaker) "don't
really know" and "really don't know" are close, but I might be totally
wrong here.

Anyway, what I wanted to express was that I thought the combination of
"not satisfied" and "don't really know" strange. It was not meant as
judgement, except maybe about the combination of those two sentences.
For me, being "not satisfied" enough to embark on "implementing my own"
means that I have looked at the alternative(s) more than just passingly,
even if only to know what not to do. I suspect that Carlo has done so,
and for me that does not match "I don't really know Boost.Units".
Post by Victor Bazarov
Carlo *evidently* looked at what was "already there" if you care to see
what he stated in his second post. He knows how to use at least some
mechanisms of Boost.Units. He gives measurements of the size of the
resulting executable and the time it took to compile... Does this not
present itself like "looking at what is already there"?
Indeed he does later on, but that does not change my surprised reaction
to the two sentences I responded to. I do not claim that he has no
knowledge of B.U., or that he went on his work unprepared, or any other
judgement about his work or his preparation, just that I thought that
combination of two sentences strange.

Wouter van Ooijen
Christian Gollwitzer
2014-10-05 20:21:30 UTC
Permalink
Post by Wouter van Ooijen
For me (but note again that I am not a native English speaker) "don't
really know" and "really don't know" are close, but I might be totally
wrong here.
To me (also non-native), there is a big difference: "I really don't
know" means "I have no clue", "never heard of", whereas "I don't really
know Boost.Units" is "I am no expert in Boost.Units (but know it and may
have used it)". The latter might also be modesty.

Christian
c***@gmail.com
2014-10-01 21:26:06 UTC
Permalink
Post by Wouter van Ooijen
Post by c***@gmail.com
Post by Wouter van Ooijen
I hope you are aware that something like this exists in boost?
Sure, but I was not satisfied by that, and so I designed and
implemented my own library.
Post by Wouter van Ooijen
In what sense is your work different
or even better than the boost solution?
I don't really know Boost.Units,
but here are some apparent differences.
Eh, you really don't know B.U yet at the same time you are not satisfied
by it??
I'm sorry for the misunderstanding, but I meant that I don't know
it *well*.
Four years ago, I just installed it, read four or five screens
of documentation, and wrote a program of a few tens of statements
to test it. Now I have just redone that.

I meant that perhaps I have overlooked some of its features,
but I am not satisfied with what I have seen.
Post by Wouter van Ooijen
Post by c***@gmail.com
Boost.Units supports 12-year-old compilers, while Cpp-Measures
requires, and takes advantage of, the parts of C++11 available
in GCC and VC++ 2012.
Does that mean anything for me as user of either package?
Sure. If you want to use a C++ compiler released before 2012,
you cannot use Cpp-Measures. That's a plus for Boost.Units.
Post by Wouter van Ooijen
Post by c***@gmail.com
Post by Wouter van Ooijen
- do you differentiate between absolute and relative values (for
instance for time, but also for location/distance)
Yes, ..
Nice. I assume that adding points is not possible, subtracting points
yields a vect, etc?
Exactly.
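To make this concrete, here is a minimal sketch of what compiles and
what doesn't, using the point1/vect1 templates, the DEFINE_MAGNITUDE
macro, and the header name that appear elsewhere in this thread:

#include "measures_io.hpp"
using namespace measures;

DEFINE_MAGNITUDE(Space, metres, " m")

int main()
{
    point1<metres> a(3), b(10);   // absolute positions
    vect1<metres> d = b - a;      // point - point yields a vect: 7 m
    point1<metres> c = a + d;     // point + vect yields a point: 10 m
    // point1<metres> e = a + b;  // point + point: does not compile
}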
Post by Wouter van Ooijen
Post by c***@gmail.com
Post by Wouter van Ooijen
- can you work with non-floating-point base types (especially
fixed-point types implemented on top of integers)?
float, double, long double, int, long, long long, complex<double>.
Fixed-point, rational, multiple-precision, and arbitrary-precision
types are not tested yet, and probably do not work properly yet.
That last sentence is interesting. After testing that many types
already, why would one of those other types not work out of the
box? That means that a numeric type that I write will likely suffer the
same fate.
Cpp-Measures needs to apply the "%" operator to integer types,
and the "fmod" function to floating-point types,
selected using type traits. A user-defined numeric type
should specialize the appropriate type trait.
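
For illustration, here is a C++11-style sketch of that kind of
trait-based selection (this shows only the shape of the mechanism,
not the library's actual code; the "modulo" helper name is made up):

#include <cmath>
#include <type_traits>

// Integral types use the built-in "%" operator.
template <typename Num>
typename std::enable_if<std::is_integral<Num>::value, Num>::type
modulo(Num a, Num b) { return a % b; }

// Floating-point types use std::fmod.
template <typename Num>
typename std::enable_if<std::is_floating_point<Num>::value, Num>::type
modulo(Num a, Num b) { return std::fmod(a, b); }

// A user-defined numeric type would specialize the appropriate trait
// (or provide its own overload) so that the right operation is chosen.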

In addition, Cpp-Measures currently does not work with unsigned types,
nor with "signed char" or "signed short".
I will go on to add support for some user-defined types.
Do you have any suggestions?

--
Carlo Milanesi
Luca Risolia
2014-10-04 00:05:07 UTC
Permalink
Post by c***@gmail.com
Actually I need someone programming engineering software,
who would like to read the tutorial and tell me
what is missing for his/her software application needs.
It would be useful if the library supported all the base units of SI
via literal operators, for example:

using namespace measures::si;
auto length = 10_m; // 10 meters (relative measure)
c***@gmail.com
2014-10-05 10:10:35 UTC
Permalink
Post by Luca Risolia
Post by c***@gmail.com
Actually I need someone programming engineering software,
who would like to read the tutorial and tell me
what is missing for his/her software application needs.
It would be useful if the library supported all the base units of SI
via literal operators, for example:
using namespace measures::si;
auto length = 10_m; // 10 meters (relative measure)
I don't think it would be good, for the following reasons:
* User-defined literal operators are still not supported by the widely used Visual C++ compiler, and perhaps by other compilers.
* There are actually 4 kinds of one-dimension measures (without considering angles): static relative, dynamic relative, static absolute, dynamic absolute.
* A measure can have a programmer-defined underlying numeric type (float, double, ...).
* There are many SI units, and some of them may use a symbol the programmer wants to use for another purpose.
* SI is used a lot, but many non-SI units are still in use, and that would open the door to requests for every unit ever used on Earth.

What may be reasonable is including in the library the following macros:
#define DEFINE_VECT_STATIC_UNIT_OPERATOR(Unit, Num, Operator) \
    vect1<Unit,Num> operator "" _##Operator(long double n) \
    { return vect1<Unit,Num>(n); }
#define DEFINE_POINT_STATIC_UNIT_OPERATOR(Unit, Num, Operator) \
    point1<Unit,Num> operator "" _##Operator(long double n) \
    { return point1<Unit,Num>(n); }
#define DEFINE_VECT_DYNAMIC_UNIT_OPERATOR(Unit, Num, Operator) \
    dyn_vect1<Unit,Num> operator "" _##Operator(long double n) \
    { return dyn_vect1<Unit,Num>(n); }
#define DEFINE_POINT_DYNAMIC_UNIT_OPERATOR(Unit, Num, Operator) \
    dyn_point1<Unit,Num> operator "" _##Operator(long double n) \
    { return dyn_point1<Unit,Num>(n); }

after which the application programmer may write the following statements:
DEFINE_MAGNITUDE(Space, meters, " m")
DEFINE_VECT_STATIC_UNIT_OPERATOR(meters, double, m)

and then the following statement:
auto len = 10.0_m;

--
Carlo Milanesi
Luca Risolia
2014-10-05 14:15:27 UTC
Permalink
Post by c***@gmail.com
* User-defined literal operators are still not supported by the widely used Visual C++ compiler, and perhaps by other compilers.
I don't know what other compilers you are considering, but both GCC and
Clang have been supporting literal operators for quite a while now, and
both are certainly widely used and ready for the C++11 era.
Post by c***@gmail.com
* There are actually 4 kinds of one-dimension measures (without considering angles): static relative, dynamic relative, static absolute, dynamic absolute.
Yes; with regard to the SI units I was talking about, just choose the
appropriate type. According to the definitions you wrote in your
tutorial, I would expect "auto len = 10_m" to unambiguously refer to a
"static relative" measure, while for a temperature "auto t = 10_K", I'd
expect a "static absolute" measure.
Post by c***@gmail.com
* A measure can have a programmer-defined underlying numeric type (float, double, ...).
I don't see where the problem is. I'd expect the underlying numeric type
to be either integral or floating-point, according to what I specify as
the operator argument. unsigned long long and long double are the two
numeric parameter types allowed by literal operators, so you can use
them if you cannot find reasonable defaults:

vect1<..., unsigned long long> len = 10_m; // integer literal (ULL)
vect1<..., long double> len = 10.0_m; // floating-point literal (long double)

(whether your library supports operations between measures having
different underlying numeric types is another matter).
Post by c***@gmail.com
* There are many SI units, and some of them may use a symbol the programmer wants to use for another purpose.
There are 7 *base* units in the SI (with well-defined names and
symbols). I would not say they are so "many". As I deliberately wrote in
my example, you can probably make the SI symbols available under a
specific namespace if you are worried about possible collisions.
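
For what it's worth, here is a sketch of that namespace idea, reusing
the DEFINE_VECT_STATIC_UNIT_OPERATOR macro proposed earlier in this
thread (the namespace name is made up, and metres is assumed to be
already defined with DEFINE_MAGNITUDE):

namespace si_literals
{
    DEFINE_VECT_STATIC_UNIT_OPERATOR(metres, double, m)
}

// Client code opts in explicitly, so _m cannot collide with
// anything defined outside the namespace:
using namespace si_literals;
auto length = 10.0_m;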
Post by c***@gmail.com
* SI is used a lot, but many non-SI units are still in use, and that would open the door to requests for every unit ever used on Earth.
True, but IMHO SI is a special case and should be included in the
library from the beginning.
c***@gmail.com
2014-10-06 20:34:38 UTC
Permalink
Post by Luca Risolia
Post by c***@gmail.com
* User-defined literal operators are still not supported by the widely used Visual C++ compiler, and perhaps by other compilers.
I don't know what other compilers you are considering, but both GCC and
Clang have been supporting literal operators for quite a while now, and
both are certainly widely used and ready for the C++11 era.
I wrote *perhaps*. Besides, as of one year ago the Intel C++ compiler didn't support user-defined literal operators either.
Post by Luca Risolia
Post by c***@gmail.com
* There are actually 4 kinds of one-dimension measures (without considering angles): static relative, dynamic relative, static absolute, dynamic absolute.
Yes; with regard to the SI units I was talking about, just choose the
appropriate type. According to the definitions you wrote in your
tutorial, I would expect "auto len = 10_m" to unambiguously refer to a
"static relative" measure, while for a temperature "auto t = 10_K", I'd
expect a "static absolute" measure.
I don't know why. 10_m can be both a position (i.e. absolute) and a displacement (i.e. relative); and 10_K can be both a temperature point and a temperature variation.
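
In other words, the same number can legitimately initialize either
kind, and nothing in the literal itself says which one is meant. A
sketch (assuming metres and kelvin units defined with DEFINE_MAGNITUDE,
as in the examples elsewhere in this thread):

vect1<metres> displacement(10);   // 10 m as a relative measure
point1<metres> position(10);      // 10 m as an absolute position
vect1<kelvin> variation(10);      // 10 K as a temperature variation
point1<kelvin> temperature(10);   // 10 K as a temperature point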
Post by Luca Risolia
Post by c***@gmail.com
* A measure can have a programmer-defined underlying numeric type (float, double, ...).
I don't see where the problem is. I'd expect the underlying numeric type
to be either integral or floating-point, according to what I specify as
the operator argument. unsigned long long and long double are the two
numeric parameter types allowed by literal operators, so you can use
them if you cannot find reasonable defaults:
vect1<..., unsigned long long> len = 10_m; // integer literal (ULL)
vect1<..., long double> len = 10.0_m; // floating-point literal (long double)
The measure's numeric parameter may also be int, long, float, double, complex, and perhaps some other numeric type.
Post by Luca Risolia
Post by c***@gmail.com
* There are many SI units, and some of them may use a symbol the programmer wants to use for another purpose.
There are 7 *base* units in the SI (with well-defined names and
symbols). I would not say they are so "many". As I deliberately wrote in
my example, you can probably make the SI symbols available under a
specific namespace if you are worried about possible collisions.
For every base unit there are 10 multiples and 10 submultiples,
so the 7 base units alone would require 7 × (1 + 10 + 10) = 147
literal suffixes. And there are many derived units.
Post by Luca Risolia
Post by c***@gmail.com
* SI is used a lot, but many non-SI units are still in use, and that would open the door to requests for every unit ever used on Earth.
True, but IMHO SI is a special case and should be included in the
library from the beginning.
I prefer to keep the library physics-convention-agnostic, i.e. independent of conventions established by engineers or physicists.
Anyway, such definitions may be added to the library later.

For now, you may define your own literals using the macros defined in the following program:

#include "measures_io.hpp"
using namespace measures;
using namespace std;

#define DEFINE_VECT_STATIC_UNIT_OPERATOR(Unit, Num, Operator) \
    vect1<Unit,Num> operator "" _##Operator(long double n) \
    { return vect1<Unit,Num>(n); } \
    vect1<Unit,Num> operator "" _##Operator(unsigned long long n) \
    { return vect1<Unit,Num>(n); }
#define DEFINE_POINT_STATIC_UNIT_OPERATOR(Unit, Num, Operator) \
    point1<Unit,Num> operator "" _##Operator(long double n) \
    { return point1<Unit,Num>(n); } \
    point1<Unit,Num> operator "" _##Operator(unsigned long long n) \
    { return point1<Unit,Num>(n); }
#define DEFINE_VECT_DYNAMIC_UNIT_OPERATOR(Unit, Num, Operator) \
    dyn_vect1<Unit,Num> operator "" _##Operator(long double n) \
    { return dyn_vect1<Unit,Num>(n); } \
    dyn_vect1<Unit,Num> operator "" _##Operator(unsigned long long n) \
    { return dyn_vect1<Unit,Num>(n); }
#define DEFINE_POINT_DYNAMIC_UNIT_OPERATOR(Unit, Num, Operator) \
    dyn_point1<Unit,Num> operator "" _##Operator(long double n) \
    { return dyn_point1<Unit,Num>(n); } \
    dyn_point1<Unit,Num> operator "" _##Operator(unsigned long long n) \
    { return dyn_point1<Unit,Num>(n); }

// Example magnitude definition.
DEFINE_MAGNITUDE(Space, metres, " m")

// Example user-defined literal definition.
DEFINE_VECT_STATIC_UNIT_OPERATOR(metres, double, m)
DEFINE_POINT_STATIC_UNIT_OPERATOR(metres, float, mp)

int main()
{
    cout << 12.3_m << "; " << 23_m << endl;
    cout << 34.5_mp << "; " << 45_mp << endl;
}

It should output:
12.3 m; 23 m
[34.5] m; [45] m

--
Carlo Milanesi