Start of IEEE 754 support (Discussion)

Not all built-ins are equally cheap, but for built-ins that do practically nothing themselves this is close to the truth. The only way to reduce that overhead is to migrate them to VM instructions. Flags are too complicated for this, though, and a dedicated instruction just to handle float rounding is a bit much to me. is/2 is compiled to the VM if -O is used, and in all modes for a few simple operations such as X is Y+1 where X is a fresh variable.

In the long run I’m not against one or more variants of is/2. I’d first like to understand what these should be, though. Having control over the floats, as we have now, allows doing all that stuff in Prolog, so we can first experiment and then come up with the proper primitives.

Luckily the C99 semantics and the SWI-Prolog flag semantics with respect to threads are the same; this is indeed how it works. As @ridgeworks correctly states, the performance impact of changing the rounding mode using flags is quite large though. I fear the only serious way to improve on that is to somehow merge the rounding mode into the expression evaluation. At the application level it could be an option to compute the upper and lower bounds separately for a larger part of the application logic, so that the impact of setup_call_cleanup/3 and the flag changes becomes small.

I have the view that this really isn’t an arithmetic function in the mathematical sense. For example, if the internal representation were 32 rather than 64 bits, the same expression would yield a different result. It’s like epsilon in that respect. And the fact that 1.7976931348623157e+308 is nexttoward(1.0Inf,0.0) just reinforces this view IMO.

I can’t think of another existing function that doesn’t return an arithmetically equivalent value given arithmetically equivalent arguments, e.g., sqrt(2)=:=sqrt(2.0), log(5)=:=log(5.0), floor(1)=:=floor(1.0).

I agree that polymorphic behaviour is desirable, but only where it’s consistent with the function’s semantics. In this case, it isn’t IMO. So I don’t see why we shouldn’t make it equivalent to the C definition. In fact, maybe it shouldn’t be an arithmetic function at all; it should be a predefined predicate nexttoward/3 which maps a floating point value to a nearest neighbour.

On the practical side, it probably doesn’t matter a lot. I’m probably in a small minority of users who would actually use it (like epsilon), and I would never use it with integers (if necessary, use R is X+sign(Y-X) instead).

The “conventional wisdom” seems to be that the IEEE rounding modes are only useful for arcane purposes like interval arithmetic (my primary interest for CLP over reals) and assessing error sensitivity in fp calculations (run the same calculation under a bunch of rounding modes; if the answers vary widely you’ve got an issue). Neither of these is a mainstream Prolog application, but I would like to know if there are other applications for it.

For my purposes, I would like two primitives (renamed to avoid any interaction with is): lo_bound(Exp,LB) and hi_bound(Exp,UB); that’s it. And I can use the proposed flags to easily do this:

lo_bound(Exp,LB) :-
	current_prolog_flag(rounding_mode, Save),
	setup_call_cleanup(
	    set_prolog_flag(rounding_mode, to_negative),
	    LB is Exp,
	    set_prolog_flag(rounding_mode, Save)).  % restore even if Exp throws

Similarly for hi_bound.

So not difficult, but it would be significantly faster if these predicates were primitives, however that’s done. (I don’t think the compiler helps for this, but I may be wrong.)

If anybody else has a use case for this, please join the discussion.

I also don’t think this is an either/or situation. The flags can be retained for those use-cases we haven’t thought of. And I’m fine with a less efficient implementation as long as the door is open for future enhancements.

As you probably also want the other bound, what about bounds(Exp, Low, High)? And, as @j4n_bur53 claims, there is typically more to do when evaluating entire expressions that contain negation, division and other operations that swap the upper and lower bound. Where is that handled? Is that left to the application, so that we only support the bounds alternatives for single arithmetic computations (a single addition, subtraction, sqrt, etc.)? If we go for that we might actually be able to implement something more efficient than the general is/2.

All good questions.

This doesn’t help. A new lower bound is calculated from the interval argument’s lower bounds, and similarly for the upper bound. So the two expressions aren’t the same.

This is all handled at the application level which is doing the equivalent of breal arithmetic. Of course, I’m open to the suggestion of adding breals to SWIP :wink: I am sympathetic to your view that:

But I think that, with the support of attributed variables, you can build effective constraint libraries on top of SWIP, so you can take advantage of the rest of SWIP’s considerable ecosystem. I believe I can demonstrate that with a CLP over reals implemented entirely in Prolog, but it’s spending upwards of 25% of the time just calculating interval bounds, so I’m somewhat motivated to see if anything can be done about that without upsetting the apple cart too much.

Not to my limited knowledge. I looked at this at one point, but couldn’t find anything. Perhaps you’ll be more successful.

For my purposes this is in the noise - i.e., last digits 7922 … 7940 vs. 7927 … 7936 leaving 15 digits of precision.

In a constraint problem intervals can start at -inf … inf. The default precision for branch and bound purposes is 6-7 digits of precision. So anything more is usually a luxury and paid for dearly in computation time.

That said, I certainly don’t want to throw away precision unnecessarily. In this regard rounding modes help because results don’t have to be blindly rounded outwards. And if the bounds are precise (rational numbers), they don’t have to be rounded at all.

Interval midpoints usually don’t mean much. They can be handy for splitting intervals in search algorithms (analogous to enumerate or label in finite domains), but anything close to the middle is usually adequate for this purpose. And for very wide intervals, the geometric mean is better.

No. Expression evaluation is done by a bunch of C functions; N+1 predicates typically do not exist, and where they do exist (e.g., in the Quintus compatibility library) they are not used by is/2.

ECLiPSe uses N+1 predicates to evaluate expressions. Keeping it in C implies that arithmetic can be compiled to a stack machine driven by VM instructions (done if -O is in effect) and that there is no reason to use the stack representation for intermediate values, saving conversion steps and stack space.


Let me suggest a logical, polymorphic, and definitely non-C alternative to nexttoward. For numeric arguments:

bounded(Num, Lo, Hi) :- number(Num), Lo < Num, Num < Hi.

Note that Num is not bounded by itself.

The question then is what if Lo and/or Hi are logic variables? They will be unified with the closest possible values such that this relationship still holds.

If Num is a float, they are unified respectively with the next lowest/highest floating point values. This is equivalent to using the C nexttoward function twice with second argument values of -inf and inf respectively. So a specification in Prolog using nexttoward would be:

bounded(Num,Lo,Hi) :- float(Num), 
    (var(Lo) -> Lo is nexttoward(Num,-inf);true),
    (var(Hi) -> Hi is nexttoward(Num,inf);true),
    Lo < Num, Num < Hi.

If Num is an integer:

bounded(Num,Lo,Hi) :- integer(Num), 
    (var(Lo) -> Lo is Num-1;true),
    (var(Hi) -> Hi is Num+1;true),
    Lo < Num, Num < Hi.

If Num is a rational:

bounded(Num,Lo,Hi) :- rational(Num), 
    FNum is float(Num),
    (var(Lo) -> Lo is nexttoward(FNum,-inf);true),
    (var(Hi) -> Hi is nexttoward(FNum,inf);true),
    Lo < Num, Num < Hi.

So a rational (of arbitrary precision) will be bounded by the next lowest and next highest floating point values.

For IEEE specials, the following will hold:


Note that bounded/3 can be used to both generate (the tightest) bounds as well as a bounds test (in a single primitive as opposed to two arithmetic comparisons).

(One could imagine extending this to arbitrary terms, perhaps using the standard term ordering, but for this topic, let’s just leave it as a numeric relation.)

If this were available as a primitive, I don’t see a need for nexttoward as a relation or function (which has the issues previously discussed). The only missing functionality is bounding towards an arbitrary value, and that can easily be done in Prolog:

nexttoward(X,To,Xh) :- To > X, bounded(X,_,Xh).
nexttoward(X,To,Xl) :- To < X, bounded(X,Xl,_).

Since this functionality depends on the bit level representation of floating point numbers, we need primitive (C level) support. I would rather see a Prolog friendly bounded primitive that nexttoward could be built on rather than the other way around. And I would again argue that it isn’t really a sound arithmetic function since nexttoward(1.0, 0) =\= nexttoward(1,0) given Eclipse semantics.

OK, but let’s try to keep this discussion on topic, which is “Announce: Start of IEEE 754 support”. We’re looking for a minimalist, useful approach to moving SWIP towards IEEE 754 compliance. That shouldn’t entail adding anything radically different from what’s already there.

If you would like to discuss new numeric types (unums, SoftFloats, breals, etc.) please open a new topic.


This is also a nice discrepancy. BigNum to float, i.e. the float/1
evaluable function. Should it use HALF_EVEN?


SICStus 4.5.1 (x86_64-win32-nt-4)
?- X is truncate(float(3*10^23)).
X = 300000000000000008388608 

Jekejeke Prolog 4, Runtime Library 1.4.2
?- X is integer(float(3*10^23)).
X = 300000000000000008388608

Without HALF_EVEN:

YAP 6.3.3 (i686-mingw32)
 ?- X is truncate(float(3*10^23)).
X = 299999999999999974834176

SWI-Prolog (threaded, 64 bits, version 8.1.21)
?- X is truncate(float(3*10^23)).
X = 299999999999999974834176.

Still another possibility for efficient rounding:

Extend the eval/1 arithmetic function to accept a rounding mode, which takes one of the rounding mode flag values (to_nearest, to_positive, etc.), so you can write things like:

Z is 42+eval(X+Y,to_positive).

Max_error is eval(SomeExp,to_positive)-eval(SomeExp,to_negative).

Rounding mode is scoped to the evaluation of the expression and there’s no need to go near the default rounding mode defined by the environment flag (scoped to the thread).

The implementation changes should be simple and quite localized (something even I might undertake). The only issue might be whether to add an arity-2 version of the current function or give it a different name.

Defer, further investigation required.

Status of unit testing of IEEE support on 8.1.22.

I’ve generated over 300 individual tests covering the new float_XXX flags and arithmetic functions. In general things look pretty good. Basic advertised functionality all works and no backwards compatibility issues detected to this point, i.e., things don’t change if you don’t modify the default settings.

54 assertion level tests fail, usually when the arguments are special IEEE values. A significant percentage of the failures are due to signed NaNs which are discussed further below, with other discussion items.

I have issued a pull request to add to the swipl Tests/ directory.

  1. Single NaN value from Prolog arithmetic.

AFAIK, the IEEE 754 (and IEC 60559) standard says (from a stackoverflow discussion):

“Conversion of a quiet NaN in a supported format to an external character sequence shall produce a language-defined one of “nan” or a sequence that is equivalent except for case (e.g., “NaN”), with an optional preceding sign. (This standard does not interpret the sign of a NaN.)”

So your platform’s conversion functions may print the “sign” of NaN values, but it has no meaning, and you shouldn’t consider it for the purposes of testing.

Edited to be a bit stronger: it is almost always a bug to attach meaning to the “sign bit” of a NaN datum.

The C math libraries are a bit sloppy about this and return either NaN or -NaN with no obvious meaning to the sign.

IEEE NaN’s also contain a 52-bit mantissa field whose payload has no standard-defined meaning. I’ve seen reports of various uses of this (additional diagnostic info, whole type value systems, …) but these are things that are not supported by the C math libraries and I see no reason why they should be accommodated in Prolog arithmetic either. (Prolog has better, more transparent ways of doing this kind of thing, rather than burying them in binary data structures intended for other uses.)

So I propose that the result of any arithmetic evaluation that contains a function evaluation which produces a NaN is the single value of the nan function. In particular it will never produce -nan. This simplifies testing, e.g., 1.5NaN is -nan is true, as is 1.5NaN is 0/0.0. (Another way of doing this test is X is Exp, X=\=X which is only true if X is a NaN value.)

  2. Standard term order of NaN’s.

I would hope that NaN’s don’t live long enough in applications that this becomes an issue, but we need to decide where NaN’s fit in the standard term order. Currently numbers are sorted numerically, but that’s a little tough to do with NaN’s where all arithmetic comparisons other than inequality are false. For term comparisons (only) I propose that 1.5NaN is strictly less than any other number, so they fit between logic variables and -1.0Inf.

  3. Continuation values for integer and rational expressions.

IMO integer, rational, and float arithmetic should, as much as possible, have the same semantic behaviour. So anything that takes a float value should accept an integer or rational value and produce approximately the same result. This should also apply to continuation values -inf, inf, and nan. Examples:

1/0.0 =:= inf and 1/0 =:= inf
ceil(inf) =:= inf
nan is 0.0*inf and nan is 0*inf

The whole idea behind special IEEE continuation values is that arithmetic evaluation over any numeric type never causes exceptions.

  4. ceiling/floor/round/truncate

Currently these return a somewhat nonsensical integer for the IEEE specials, except for round, which traps. For proper semantics an IEEE special should be the result, e.g., 1.0Inf is ceiling(inf), which means that the result of these functions can be a float. I don’t think this should be very disruptive; it’s just the price you pay for polymorphic arithmetic.

  5. Summary of individual tests (other than FAILs due to signed NaN):

- Eclipse compatibility, float_rounding value, nearest or to_nearest?

- FAIL not(nan =< nan)
- FAIL not(nan >= nan)
- FAIL not(nan =< 0.0)
- FAIL not(nan >= 0.0)

- FAIL (compare(O,1.5NaN,0.0),compare(O,1.5NaN,0 ))

IEEE_div (these all produce infinities)
- FAIL 1.5NaN is -0.0/ 0.0
- FAIL 1.5NaN is -0.0/ -0.0
- FAIL 1.5NaN is 0.0/ 0.0
- FAIL 1.5NaN is 0.0/ -0.0

- FAIL 1.5NaN is sign(nan)

- FAIL -1.0Inf is log( 0.0)
- FAIL -1.0Inf is log(-0.0)

- FAIL -1.0Inf is log10( 0.0)
- FAIL -1.0Inf is log10(-0.0)

- FAIL 1.0 is -inf** 0.0
- FAIL 1.0 is -inf** -0.0
- FAIL 0.0 is -inf** -2
- FAIL 1.0Inf is -inf** 2

- FAIL 1.0Inf is lgamma(-0.0)

- FAIL 1.0Inf is ceiling( inf)
- FAIL -1.0Inf is ceiling(-inf)
- FAIL 1.5NaN is ceiling( nan)

- FAIL 1.0Inf is floor( inf)
- FAIL -1.0Inf is floor(-inf)
- FAIL 1.5NaN is floor( nan)

- FAIL 1.0Inf is round( inf)
- FAIL -1.0Inf is round(-inf)
- FAIL 1.5NaN is round( nan)

- FAIL 1.0Inf is truncate( inf)
- FAIL -1.0Inf is truncate(-inf)
- FAIL 1.5NaN is truncate( nan)

- FAIL 0.0 is max(-0.0, 0.0)
- FAIL -1.0Inf is max( nan, -inf)

- FAIL -0.0 is min(-0.0, 0.0)
- FAIL 1.0Inf is min( inf, nan)