I’d rather the reasoner just gave me data and let me decide whether and how to format it. That increases the reusability of the output: the same knowledge representation can be used for a variety of user-facing tools.
Agreed. It strikes me that there are likely situations in which the “human” description is useful for some things and problematic for others, because it can’t be parsed as reliably. But I also understand there is a performance hit for the human versions. The best-case scenario might be to let you choose either or both, instead of the current method, which allows you to choose only one.
I’m sort of out of my depth here, but what I will say is that in Prolog you get bindings; in ASP you get a model; and in goal-directed ASP you get bindings, a minimal model, and a justification. So I would say that the “answer” in s(CASP) is the combination of the three, not primarily the model. But again, I don’t know if grouping by bindings makes sense outside of my domain.
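As a rough illustration of that “combination of the three”, an s(CASP) answer could be modeled as a record holding all three parts. The shapes and field names below are my own invention for illustration, not s(CASP)’s actual output format:

```python
from dataclasses import dataclass, field

# Hypothetical shapes; atoms and terms are plain strings for simplicity.
@dataclass
class Justification:
    """One node of the proof tree: a conclusion and its supporting premises."""
    conclusion: str
    premises: list = field(default_factory=list)  # child Justification nodes

@dataclass
class Answer:
    """One s(CASP) answer: bindings + (partial) minimal model + justification."""
    bindings: dict           # variable name -> term, e.g. {"D": "monday"}
    model: set               # set of atoms in the minimal model
    justification: Justification

ans = Answer(
    bindings={"D": "monday"},
    model={"opera(monday)", "not baroque(monday)"},
    justification=Justification("opera(monday)",
                                [Justification("not baroque(monday)")]),
)
```

Grouping answers by bindings would then mean keying on the `bindings` field while keeping each answer’s model and justification attached.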
We can, but I don’t see why we would want to. I estimate the chances that the actual models, or even the human versions, are going to be displayed to users without post-processing to be approximately zero. The model being too big, and including things that you don’t need, is not an actual pain point for anyone, I don’t think. It is not “having weaker claims included” that is the problem; it is “not being able to clearly identify the weaker claims.”

Also, there is a semantics associated with what is and is not included, because the model is “minimal”. If you start removing things from the model, you are returning a sub-minimal model, which means interpreting the model now depends on knowing what flags were included in the query. That seems sub-optimal. I would say give the user options to add information about the elements in the model, not remove them.
I don’t like the way that abduction is included in the justifications. I have in the past needed to trim out the redundant parts to make the justifications more usable. So I would advocate for removing things from the justifications and adding them to the models.
Here’s a use case for why I would like the abduced elements, and the elements provided as facts, included in the model:
Let’s say I have an expert system application, and I’m using s(CASP) to determine what inputs might be relevant to the user’s query. In order to tell the expert system what inputs might be relevant, I set the list of all inputs to abducible, run the query, and get a set of models. The abducible elements included in the resulting models are the questions that are relevant to the query. The ones that aren’t are clearly never relevant. I can send the list of “ever relevant” inputs to the app, and the app can decide what order to ask them in, ask one, add that fact, remove the abducibility over that input (if the user had an answer), and repeat the query.
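The loop above can be sketched as follows, under assumed shapes rather than a real s(CASP) API: `run_query()` stands in for invoking s(CASP) and returns models as Python sets in which abduced atoms are tagged `("abduced", atom)`; `ask_user()` stands in for the app putting a question to the user; and the program is assumed to be a set of facts and rules that new facts can be added to.

```python
def abduced_atoms(models):
    """Union the models and collect every atom tagged as abduced."""
    return {e[1] for m in models for e in m
            if isinstance(e, tuple) and e[0] == "abduced"}

def consult(program, query, inputs, run_query, ask_user):
    facts = set()
    abducibles = set(inputs)
    asked = set()  # inputs already put to the user, even if they didn't know
    while True:
        models = run_query(program | facts, query, abducibles)
        pending = abduced_atoms(models) - asked  # relevant, not yet asked
        if not pending:
            return models
        question = min(pending)        # the app decides the order; here: alphabetical
        asked.add(question)
        value = ask_user(question)
        if value is not None:
            facts.add(value)              # assert the user's answer as a fact
            abducibles.discard(question)  # and remove that input's abducibility
        # if the user didn't know, the input stays abducible but isn't re-asked
```

The `asked` set is one possible way to track the distinction raised below between “abducible because not yet asked” and “abducible because the user didn’t know”, kept above the level of the reasoner itself.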
Having to search the justification trees of all of the returned models in order to get the list of ever-relevant inputs is needlessly painful. Just let me take the union of all of the models and then pick out the “abduced” elements. Whether the information is in the model, or annotated somehow, is neither here nor there. The point is that having to dig through multiple justification trees to find them all is silly.
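The “union, then pick out the abduced elements” step is a one-liner if the abduced atoms are identifiable in the model. Here I assume they are tagged as `("abduced", atom)`; the tagging scheme is illustrative, and the point is one flat pass over the models instead of walking proof trees:

```python
def ever_relevant(models):
    """Union all models, then keep only the atoms marked as abduced."""
    union = set().union(*models)   # one union over all returned models
    return {e[1] for e in union
            if isinstance(e, tuple) and e[0] == "abduced"}

models = [
    {("abduced", "income(low)"), "eligible"},
    {("abduced", "age(senior)"), "eligible"},
]
print(sorted(ever_relevant(models)))   # ['age(senior)', 'income(low)']
```

Any input that never appears abduced in any model (say, `resident` here) is never relevant to the query.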
The application is going to need to take into account the difference between things that are abducible because they haven’t been put to the user yet, and things that are abducible because they were put to the user and the user answered “I don’t know.” But I think that’s a problem to solve above the level of s(CASP). What constitutes an “input” is also something that would be decided at a higher level, I think. But that’s an argument for adding abduced info to the responses somewhere, not in the justifications.