Granular code coverage tool?

I guess color(true) / color(false) is the most sensible. As for other formats, I do not know. If there is a format that IDEs use, supporting it surely makes sense. Note that disabling coloring is as easy as either temporarily switching off the color flag or not pretending the output is a tty.
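As a sketch of the first approach: `color_term` is the actual SWI-Prolog flag that controls ANSI coloring, though whether a given tool consults it is an assumption; the wrapper name `without_color/1` is invented here.

```prolog
% Temporarily disable colored output, run a goal, then restore the
% previous setting.  Assumes the tool honours the standard SWI-Prolog
% `color_term` flag.
without_color(Goal) :-
    current_prolog_flag(color_term, Old),
    setup_call_cleanup(
        set_prolog_flag(color_term, false),
        Goal,
        set_prolog_flag(color_term, Old)).
```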

If your code contains :- use_module(library(apply_macros)), then simply remove it (or comment it out). Note that apply_macros contains a definition for user:goal_expansion/2, so every directive that loads apply_macros needs to be removed (see the description of goal_expansion/2 for details).

I guess emitting HTML is a different beast. That quite naturally would imply using color, as why else would you want HTML? I doubt emitting HTML makes much sense unless you combine it with the HTML pretty printer in the pldoc package. That means more dependencies :frowning: My motto is that simple is better …

ahh, I thought you were talking about a generic way, not just about library(apply_macros).

There is a remark in the code to make the transformation depend on the optimise flag. Is that something we should do?

I don’t think it is worth it to spend time on this.

Better would be to provide an option listing predicates to ignore in the coverage test.

This would allow better accuracy in the numbers in case term_expansion can’t be handled.


I am thinking about how to capture the relation (think Prolog fact database) between a test case and a clause used during the test.

Currently I run show_coverage/2 with the goal run_tests/0, which is working great, but I don’t know:

  1. How, inside show_coverage, to identify that tests are running. I know I could check whether the show_coverage goal is run_tests, but I don’t know all of the ways to kick off a test, and I also could not find a hook in the code.
  2. How to capture the values used for the specific test. In other words, I want the relationship to record not only the test name but also the bindings of variables used.

Your thoughts are welcome. :slightly_smiling_face:


Peter noted below to use message_hook/3 which is working as advertised.

When I ran the coverage tests, I did this, and it displayed the usual messages about which tests were run (my tests are in two separate files; they all ran with run_tests/0).

show_coverage(run_tests, [dir(cov), annotate(true), ext('.cov'), modules([protobufs])]).

I did this from the command line: contrib-protobufs/Makefile at 137a9d87ccc990d29e7ae337bb4a97ad8af05982 · SWI-Prolog/contrib-protobufs · GitHub

Are you referring to the messages displayed back to the top level in green when tests are run? I.e.

% PL-Unit: examples ....... done
% All 7 tests passed


That is what I normally see.
If so, those don’t have the details I need, and I can’t access them easily in code. :slightly_frowning_face:

Code for example:


:- module(examples,

% -----------------------------------------------------------------------------

:- load_test_files([]).

% ----------------------------------------------------------------------------

:- use_module(library(http/html_write)).

% -----------------

% For use with test_case 07

:- multifile

html_write:expand(my_term(Term)) -->

% -----------------

example_001(Input,TokenizedHtml,HTML) :-

File: examples.plt

:- begin_tests(examples).

:- use_module(examples).



   p('This ~a use of ~s/~d'-[makes,"format",3]),
   [nl(2),<,p,>,nl(1),"This makes use of format/3",</,p,>],
   "\n\n<p>\nThis makes use of format/3</p>"




% -------------------------------------

test(example_001,[true,forall(test_case(_,success,Input,TokenizedHtml,HTML))]) :-

test(example_001,[forall(test_case(_,error,Input)),error(instantiation_error,_)]) :-

:- end_tests(examples).

Top level query:

show_coverage(run_tests, [dir('./annotated files'), modules([html_write])]).

You should be able to get the information programmatically by adding a message hook.
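A minimal sketch of such a hook. The exact plunit message terms are internal and version-dependent, so rather than assuming their shape, this first prints whatever arrives:

```prolog
:- multifile user:message_hook/3.

% Inspect every message coming from plunit.  Failing at the end lets
% the default message printing proceed unchanged.
user:message_hook(plunit(Msg), Kind, _Lines) :-
    format(user_error, "plunit ~w: ~q~n", [Kind, Msg]),
    fail.
```

Once the term shapes are known, the hook can match on the specific messages of interest instead of printing them all.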


Ahh, so you are saying the trick is to hook the messages and not the code. Thanks will take a look. :slightly_smiling_face:


Works as advertised. Now to see if I can interweave the incoming data and assertz/1 as facts.
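One way to interweave the two could look like this (a sketch; `test_event/2` is a name invented here): assert each plunit message as a timestamped fact.

```prolog
:- dynamic test_event/2.
:- multifile user:message_hook/3.

% Record each plunit message as a fact, then fail so the normal
% console output is still produced.
user:message_hook(plunit(Msg), _Kind, _Lines) :-
    get_time(Stamp),
    assertz(test_event(Stamp, Msg)),
    fail.
```

Afterwards `test_event/2` can be queried like any other fact database.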


Using both message_hook/3 for messages from plunit and prolog_trace_interception/4 for the trace callbacks will not work. The problem, as I see it, is that prolog_trace_interception/4 is already inside a test case by the time the information from message_hook/3 is needed, but message_hook/3 has not yet sent that message, because prolog_trace_interception/4 traps on every goal.

Since it looks like the information needed (whether a test is running, the predicate indicator for the test, and the arguments of the goal) is in the stack frames, the code will have to check for that information in the stack frames on every prolog_trace_interception/4 call. It will be slow, but it should be correct. And as I often note: get the code working correctly first, then go after optimizations.
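Walking the stack frames could look roughly like this. prolog_frame_attribute/3 with the `goal` and `parent` keys is real SWI-Prolog API, but matching on module `plunit` is an assumption about plunit's internals that would need to be verified:

```prolog
% Enumerate the goal of Frame and of each of its ancestor frames.
frame_ancestor_goal(Frame, Goal) :-
    prolog_frame_attribute(Frame, goal, Goal).
frame_ancestor_goal(Frame, Goal) :-
    prolog_frame_attribute(Frame, parent, Parent),
    frame_ancestor_goal(Parent, Goal).

% Succeeds if some ancestor frame runs a goal inside module plunit,
% i.e. we are (probably) inside a running test.
inside_test(Frame, Goal) :-
    frame_ancestor_goal(Frame, Goal),
    Goal = plunit:_,
    !.
```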

If we want programmatic access I guess we should provide a proper API for that. I’m not against that if there is a good use case. I don’t really see it though. You typically use this tool to figure out how good your test suite is or to get some insight in what is actually used in a big code base. If you want the coverage of a specific test, simply run show_coverage/2 for only that test. What is wrong with that?
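For example, run_tests/1 accepts a Unit:Test specification, so (reusing the unit and test names from the example above) coverage for a single test is just:

```prolog
?- show_coverage(run_tests(examples:example_001), [dir(cov)]).
```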

Keep it simple …

One concept that I am currently exploring with proof-of-concept code is to wrap a Prolog-level flight recorder around prolog_trace_interception/4 (think something like Java Flight Recorder) that just dumps raw facts.
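A minimal sketch of such a recorder (`recorded_event/3` is a name invented here; note the hook only fires while the debugger is active, e.g. after trace/0 or debug/0):

```prolog
:- dynamic recorded_event/3.

% Dump one raw fact per port crossing and always continue execution.
user:prolog_trace_interception(Port, Frame, _Choice, continue) :-
    prolog_frame_attribute(Frame, goal, Goal),
    get_time(Stamp),
    assertz(recorded_event(Stamp, Port, Goal)).
```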


A user can then run queries to generate

Granted, as it stands now it is not something one would want to run with code in production. Also, I don’t plan to create PRs for this, or even publish the code at present; it is a proof of concept.