“Steadfastness” of a predicate is a common part of the (ISO) specification. It refers to the fact that a predicate, when called with an “output argument” that is already instantiated, behaves as if that argument were uninstantiated: the result is computed separately and only unified with the passed argument at the very end.
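For illustration, here is the textbook `max/3` example (the predicate names are mine, not from the specification):

```prolog
% Not steadfast: an instantiated third argument steers clause
% selection, so max_bad(1, 2, 1) wrongly succeeds via the second
% clause (the first clause cannot match, as Y would have to be
% both 2 and 1).
max_bad(X, Y, Y) :- Y >= X, !.
max_bad(X, _, X).

% Steadfast: compute into a fresh variable, unify at the very end.
% max_good(1, 2, 1) correctly fails: the cut has already committed
% to the first clause when Z = Y fails.
max_good(X, Y, Z) :- Y >= X, !, Z = Y.
max_good(X, _, X).
```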
However, it seems that predicates could profit from taking a peek at an instantiated output argument in order to “fail fast”, instead of pressing on to the end only to find that, when all is said and done, the final unification fails. Pressing on just drives up resource usage.
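A sketch of what such a peek could look like, using list reversal (`rev_ff/2` and `rev_/3` are hypothetical names; `is_list/1` and `length/2` are the widely available library predicates):

```prolog
% Fail-fast list reversal: if the caller has already instantiated
% the output to a proper list, its length must match the input's,
% and that can be checked cheaply before reversing anything.
rev_ff(Xs, Ys) :-
    (   is_list(Xs), is_list(Ys)
    ->  length(Xs, N), length(Ys, N)   % the peek: lengths must agree
    ;   true                           % output still open: no peek
    ),
    rev_(Xs, [], Ys).

rev_([], Acc, Acc).
rev_([X|Xs], Acc, Ys) :- rev_(Xs, [X|Acc], Ys).
```

A call such as `rev_ff([1,2,3], [a])` fails at the length check; a purely steadfast reverse would first build the entire reversed list and only then fail to unify it with `[a]`.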
Still, I can think of these reasons why the specification favors steadfastness:
- Easy to standardize on. It is very clear what “steadfastness” implies, but “failfastness” is much more arbitrary and may lead to unwelcome discussions.
- It’s easy to implement.
- The tradeoff between performing additional entry checks on every call and being able to fail fast only occasionally is not clearly favorable (and was presumably worse on mid-90s machines with 16 MiB of RAM and a 50 MHz CPU). In any case, it depends on the problem.
But then again, steadfastness is easily added on top of a fail-fast predicate: call it with fresh variables in the “output argument” positions, cut, then unify with whatever was expected, as sketched below. Going from steadfast predicates to fail-fast ones, however, is impossible.
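A minimal sketch of that wrapping, assuming a hypothetical fail-fast predicate `ff_pred/2`:

```prolog
% Steadfast front end for a fail-fast predicate: the fresh Tmp means
% ff_pred/2 never sees an instantiated output, the cut commits to the
% first solution, and unification with the caller's argument happens
% only at the very end.
steadfast_pred(In, Out) :-
    ff_pred(In, Tmp),
    !,
    Out = Tmp.
```

Note that the cut, as in the recipe above, also restricts the wrapper to the first solution, so this transformation is faithful only for deterministic predicates.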