Neurosymbolic AI

I found this video about the integration of symbolic AI and neural networks really interesting (David Cox from IBM giving a lecture at MIT). He discusses the drawbacks of deep learning and the advantages of adding symbolic systems for tasks such as reasoning about images, gameplay and planning.

What do you think is the future of Prolog in this context? There already seem to be some interesting research developments here, e.g. DeepProbLog and maybe also other projects. I have also started to think about inductive logic programming, e.g. Aleph and Cplint, and how this approach relates to the neuro-symbolic one. From watching the video I get the feeling that the neural part is mostly needed to process messy input (images and natural language). But what about when the input is crisp(er)? Probably a case of “the right tool for the job”…

Just interested in your thoughts :slight_smile:


I have not followed up on the links you gave, but I do agree that the combination of Open World AI like neural networks with Closed World AI like Prolog is greater than using either alone for certain problems.

For a current problem I am working on, I am considering using a Jetson Nano with a trained neural network for interpreting data from the real world, like video and natural language, and SWI-Prolog for reasoning after the data is converted/classified.
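The division of labour described above can be sketched in a few lines. This is only an illustration, not the actual project: the classifier is a stub standing in for a trained network on the Jetson Nano, and the `to_fact` conversion stands in for handing classified data over to SWI-Prolog.

```python
# Sketch of the proposed pipeline: a (stubbed) neural classifier turns raw
# input into a label plus confidence, which is then converted into a
# symbolic fact suitable for a reasoning stage.

def classify(frame):
    """Stand-in for a trained neural network; returns (label, confidence)."""
    return ("barcode", 0.93)

def to_fact(label, confidence, threshold=0.8):
    """Keep only confident classifications as symbolic facts."""
    if confidence >= threshold:
        return ("observed", label)
    return None

fact = to_fact(*classify(frame=None))
print(fact)  # ('observed', 'barcode')
```

The confidence threshold is one place where the messy, sub-symbolic side is cut off from the crisp, symbolic side: below it, nothing is asserted at all.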


While I am not a user of Macintosh computers, I know Apple recently came out with the M1 processor.

If you look at the details of the chip image you will notice that it has a built-in Neural Engine. (ref)

Interesting! Do you anticipate any problems when it comes to getting Prolog to talk to the neural network part?

Just to add my 2 cents.

Usually, the interface is rather crisp – a NN output is classified data – which is primitive data for further symbolic processing.

There are additional issues which go under the grounding problem – i.e. how to map and track identity from the real world into a symbolic system.



I was thinking more along the lines that the Jetson Nano would trigger events when it has new information to pass along; the Prolog code would then receive that information and act accordingly, be it updating state or triggering actions for downstream processing by something else that does not even have to be on the same machine, or even in Prolog.

So the Prolog code would not be in two-way communication with the neural networks; it would just receive an input stream from them as events.
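A minimal sketch of that one-way event flow, in Python for brevity. The event names and the state dictionary are invented for illustration; in the real setup the producer would be the Jetson Nano and the consumer the Prolog side.

```python
# One-way event flow: the perception side pushes events onto a queue;
# the reasoning side only consumes them and updates its own state.
from queue import Queue

events = Queue()

# Perception side (think Jetson Nano) emits events.
events.put(("new_image", "page_from_book"))
events.put(("new_audio", "doorbell"))

# Reasoning side (think SWI-Prolog) only receives; it never calls back.
state = {}
while not events.empty():
    kind, payload = events.get()
    state.setdefault(kind, []).append(payload)

print(state)  # {'new_image': ['page_from_book'], 'new_audio': ['doorbell']}
```

The point of the queue is decoupling: either side can be swapped out, or moved to another machine, without the other noticing.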

In other words, the real world is processed with a neural network (think Jetson Nano), reasoning is done symbolically (think SWI-Prolog), and these are just components (think Lego bricks) that can be assembled as needed for the task at hand.

For the particular problem I have in mind, I am thinking I will need a few neural networks. The first level would classify images, and then a second set of different neural networks would be optimized to process the classified images. The classifications would be things like a page from a book, an image of a bar code on a product, an email page, etc. Likewise for audio.
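The two-level scheme is essentially a dispatch table keyed on the first classifier's output. All functions below are stubs standing in for trained networks; the coarse classes are taken from the examples in the post.

```python
# First a coarse classifier assigns a class, then a specialist model
# for that class processes the image further.

def coarse_classify(image):
    """Stub for the first-level network; e.g. 'book_page', 'barcode', 'email_page'."""
    return "barcode"

def read_barcode(image):
    """Stub for a barcode-reading specialist network."""
    return {"type": "barcode", "value": "123456789012"}

def ocr_page(image):
    """Stub for a page-OCR specialist network."""
    return {"type": "book_page", "text": "..."}

SPECIALISTS = {"barcode": read_barcode, "book_page": ocr_page}

def process(image):
    return SPECIALISTS[coarse_classify(image)](image)

print(process(image=None)["type"])  # barcode
```

The same shape would repeat for audio, with a different dispatch table.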


Btw, there is also neuro-symbolic work that trains neural networks to perform symbolic reasoning – I wonder how the symbolic community relates to that …


“Usually, the interface is rather crisp – a NN output is classified data – which is primitive data for further symbolic processing.”

But I am thinking that if the input is data such as board positions in some computer game (crisp), an NN would be unnecessary?

I think it's a question of fundamentals, technique and inference technology:

Fundamentals – i.e. can the problem be stated as a function-fitting problem …

Technique – which technique can be applied given available data and other needs – e.g. “explainable” AI

Inference technology – which inference technology should be deployed: a symbolic system, e.g. Prolog, or a NN inference model. One can envision a Prolog system that generates inputs and outputs to train a NN … so that the NN can be deployed.
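The last idea, a symbolic system generating training data for a network, can be shown in miniature. Here the "symbolic system" is just a Python predicate and the "network" a single perceptron, both stand-ins for Prolog and a real NN; the learned predicate is logical AND, which a perceptron can represent.

```python
# A symbolic definition is used to generate labelled examples,
# which then train a small network for deployment.
import itertools

def symbolic_and(x, y):          # the crisp, symbolic ground truth
    return int(x and y)

# Generate the training set by querying the symbolic definition.
data = [((x, y), symbolic_and(x, y))
        for x, y in itertools.product([0, 1], repeat=2)]

# Train a perceptron on the generated data.
w = [0.0, 0.0]
b = 0.0
for _ in range(20):
    for (x, y), target in data:
        pred = int(w[0] * x + w[1] * y + b > 0)
        err = target - pred
        w[0] += err * x
        w[1] += err * y
        b += err

# The trained "network" now reproduces the symbolic relation.
assert all(int(w[0] * x + w[1] * y + b > 0) == t for (x, y), t in data)
```

In a realistic version the symbolic side would enumerate far more interesting input/output pairs than a truth table, but the deploy-the-distilled-model shape is the same.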


Cool; reminds me of this Playing with Prolog video.

Using NNs for image classification seems very natural given that the main task is often to identify (singular) objects. But, to me at least, a symbolic approach seems most suitable when the problem involves complex relational patterns between objects and sets of objects.
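To make "relational patterns" concrete: once objects have been identified, a relation over them can be reasoned about directly, e.g. by computing a transitive closure. The object names and `left_of` facts below are made up for illustration.

```python
# A tiny relation over detected objects and its transitive closure,
# computed as a fixpoint: the kind of thing symbolic systems do cheaply.

left_of = {("cup", "plate"), ("plate", "lamp")}

def transitive_closure(rel):
    closure = set(rel)
    while True:
        new = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
        if new <= closure:
            return closure
        closure |= new

print(("cup", "lamp") in transitive_closure(left_of))  # True
```

A NN can of course approximate such relations, but the symbolic version is exact, needs no training data, and generalises to any number of objects.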

The promise (or hype?) of neural networks is that one can program without coding – just by using data to train. So, I guess, a NN would be the first choice of someone trained in NNs to solve problems, including when they involve complex relational patterns.

But then NNs have limits, just as symbolic approaches do.

The key, I think, is to use the right technique for the right problem and problem context.

If you would like to stay with symbolic AI without NNs, you may be interested in projects like Logic-Vision/LogicalVision2, with this paper explaining experiments where they are used for computer vision tasks.

They are based on the ILP system Metagol, and from there you could reach (I think greatly superior) Metagol’s successor Louise, created by @stassa.p, who uses it in the experimental vision project vision_thing/ARC. But I haven’t had enough time to take a closer look at ARC yet, and it still looks like a draft only.


Thanks for the shout out :slight_smile:

The ARC stuff in vision_thing will probably stay a draft. I was initially interested in ARC because it’s a set of logic puzzles with very few examples that seemed like a perfect match for ILP approaches, and Louise in particular. But it turns out it’s possible to apply some simple augmentations, like colour changes, rotations and translations, to make a big dataset that can then be used to solve about 25% of the training tasks without anything like reasoning (just with a deep neural net). So I mostly gave up on ARC.
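The augmentations mentioned (colour changes, rotations) are cheap to write for ARC-style tasks, since the grids are just small matrices of colour indices. This is a generic illustration, not the actual experiment code.

```python
# Simple ARC-style grid augmentations: rotation and colour permutation.

def rotate90(grid):
    """Rotate a grid clockwise by 90 degrees."""
    return [list(row) for row in zip(*grid[::-1])]

def recolour(grid, mapping):
    """Apply a colour permutation to every cell."""
    return [[mapping.get(c, c) for c in row] for row in grid]

grid = [[0, 1],
        [2, 0]]
print(rotate90(grid))                 # [[2, 0], [0, 1]]
print(recolour(grid, {1: 3, 2: 4}))   # [[0, 3], [4, 0]]
```

Applying a few such transforms to each training pair multiplies the dataset size enough for a deep net to get traction, which is exactly the effect described above.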

On the other hand, I’m working on a new project for vision_thing, where the idea is to learn L-systems, i.e. grammars that describe plants and some fractals. That is cheating a little, because ILP algorithms are very good at learning grammars, and fractals aren’t arbitrary images. For general image recognition NNs are fine, so there’s no very good reason to reinvent the wheel.
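For readers unfamiliar with L-systems: a deterministic L-system is just iterated string rewriting, which is why it is such a natural target for grammar-learning ILP systems. Below is Lindenmayer's original algae system (A → AB, B → A), not anything from the vision_thing project itself.

```python
# A deterministic, context-free L-system: rewrite every symbol in
# parallel at each step, using the rule table.

def lsystem(axiom, rules, steps):
    for _ in range(steps):
        axiom = "".join(rules.get(symbol, symbol) for symbol in axiom)
    return axiom

rules = {"A": "AB", "B": "A"}
print(lsystem("A", rules, 4))  # ABAABABA
```

The string lengths grow as Fibonacci numbers (1, 2, 3, 5, 8, …); learning the two rewrite rules from a few such strings is exactly the kind of small, structured induction problem ILP is good at.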

Regarding Metagol and Louise - well, there are tradeoffs. Metagol is more expressive, whereas Louise is more efficient. Metagol is better at learning recursive theories from fewer examples, but this expressivity comes at a high computational cost and so it can only learn short programs (up to about 5 clauses). Louise may need more examples to learn a recursive theory but it can learn really large programs (a few thousand clauses).

A few of my colleagues are actively working on Metagol. For neurosymbolic work using Metagol see Wang-Zhou Dai and Stephen Muggleton’s latest paper:

Oh, and if you’re interested in neurosymbolic AI, keep an eye on the first IJCLR, a conference that will bring together people from a few different communities in symbolic machine learning:


Thanks for the tips!

I will have a look! Thanks.


I am first and foremost not a programmer but a philosopher, interested in and concerned with AI. As such, I can’t help but assume that the significance of neural networks, or rather of the underlying paradigm of connectionism, is overestimated. Maybe not as a proper tool for certain tasks, but as the paradigm that today most adequately represents AI. Nowadays it seems to be common sense that this paradigm overcomes the disadvantages of symbolic AI. But what is really the main difference?

If both paradigms refer to a deterministic system (closed-world assumption), it is merely a pragmatic question which paradigm prevails, namely the question of which tool (if any) suits certain tasks best. And the outcome of more than one paradigm may even be the same, if we convert one paradigm into another. If we have a closer look, neural networks depend on two sorts of information before they can start their learning process: 1. a definition of the neurons and their connections, 2. a selection and categorization of the primary input data. Both derive from the programmer of the network. So where is the source of intelligence here, if not in a human brain?

If, on the other hand, the fundamental premise is that of an open, nondeterministic outcome, then for me neural networks suggest something that they cannot actually live up to: free intelligence. They may be extremely complex, when it comes to the number of neurons and their connections, or even more to the processed data, but the result is something that we more or less expect in advance. Creativity is just another algorithm here. Expert systems based on symbolic AI may not be any different in that respect. But they don’t pretend something that is not in their initial condition, and they much better reflect the core of intelligence as such, which always seems to be of a symbolic nature.

Moreover, I can imagine that if symbolic statements are complex, or fundamental, enough, even some sort of creativity, in the sense of turning implicit knowledge into something explicit, can be achieved with symbolic AI as well.

However, maybe all that just comes from my prejudices about intelligence. In philosophy, too, I have always preferred a top-down approach in matters of reasoning (regardless of its bearer, be it human or artificial) over a bottom-up one. Nevertheless, my impression is that I am fairly alone with my opinion.


I guess, as a programmer, and not a philosopher, I think the keyword here is “artificial” — although the term “intelligence” is also in need of reflection.

Apparently, in the history of math, there was a time when it was not obvious to us humans that a human finger can map onto a sheep – and that 10 fingers can count 10 sheep [1].

I think artificial intelligence isn’t much different – a tool to get things done mediated through some kind of abstraction – functional, or otherwise (relational), derived bottom up, top down, or in some mixed way.

I think the novelty of AI – when it started – was that symbols can mediate knowledge and the assumption that knowledge underpins intelligent behavior of humans.

The emergence of neural networks was initially termed non-representational – i.e. the mimicking of intelligent behavior without appeal to the symbolic representation of knowledge – and, I guess, was a critique of the then prevalent knowledge-based AI.

With the focus on non-representational AI over the past decade, perhaps the pendulum is now swinging back to a need for representation, as the limits of non-representational AI become apparent.


I think the term sub-symbolic is also significant, in the sense that at a low enough level of detail the resolution of data is such that symbols meaningful to us humans cannot be assigned to the data – yet the data aggregate (via linked (deep) layers of networks), map and transform coherently into the symbolic realm.



Thank you for pointing out the difference between representational data (representational knowledge) and non-representational data. If the first kind of AI is meant to supplement human reasoning, instead of being a copy of it (some AI engineers are trying exactly that, copying the human mind; but from a technological point of view the question is: why? Just out of a need for sympathy?), then AI certainly has some kind of relevance for me. If, on the other side, AI intends to be non-representational, things get complicated. For such an AI to be meaningful, first of all we would have to clarify what sub-symbolic data really is, and the relation of that data to symbolic data. (It all reminds me of the notorious theory about sense data, or qualia, in empiricist philosophy.) Maybe that’s not a problem within certain programs that simply bridge the gap, in practice, between sub-symbolic and symbolic. However, that doesn’t eradicate the difference fundamental to both forms of data. Always assuming that there actually are those two types. It already seems problematic to assign numeric entities to sub-symbolic data, to make it countable so that AI can calculate with it, when it isn’t clear what is even counted here. Putting aside how pure quantities shall ever add up to qualities.

On the other hand, that doesn’t mean that, if in the face of this we decide to stick to symbolic AI, I am not aware of the problematic nature of symbolic AI as a representation. What do we represent in symbolic AI, if it isn’t just a question of nomenclature? But likewise I would question whether every symbol has to be a representation, if we don’t want to entangle ourselves in a series of ontological and epistemic problems (as a classic text, see Richard Rorty, Philosophy and the Mirror of Nature). (What, for instance, does A = A symbolize? If equality/identity is still a representational symbolization, then it rather represents itself than anything else.)

This reminds me of the discussion surrounding ontology – the realism stance and the conceptualization stance – with the former referring to the material world and the latter to a shared conceptualization, I guess, of what is perceived.

What about neuro-iconic AI, something along the lines of William Bricken’s thinking :wink: Iconic Logic | Iconic Math
