> "4. Prolog is good at solving reasoning problems."
Plain Prolog's way of solving reasoning problems is effectively:
for person in [martha, brian, sarah, tyrone]:
    if timmy.parent == person:
        print("solved!")
You hard code some options, write a logical condition with placeholders, and Prolog brute-forces every option in every placeholder. It doesn't do reasoning.
Arguably it lets a human express reasoning problems better than other languages by letting you write high-level code in a declarative way, instead of allocating memory and choosing data types and initializing linked lists and so on, so you can focus on the reasoning. But that is no benefit to an LLM, which can output any language as easily as any other. And while that might have been nice compared to Pascal in 1975, it's not so different from modern garbage-collected high-level scripting languages. Arguably Python or JavaScript will benefit an LLM most, because there are so many training examples of them, compared to almost any other language.
>> You hard code some options, write a logical condition with placeholders, and Prolog brute-forces every option in every placeholder. It doesn't do reasoning.
SLD-Resolution with unification (Prolog's automated theorem proving algorithm) is the polar opposite of brute force: as the proof proceeds, the cardinality of the set of possible answers [1] decreases monotonically. Unification itself is nothing but a dirty hack to avoid having to ground the Herbrand base of a predicate before completing a proof, which is basically going from an NP-complete problem to a linear-time one (on average).
Besides which I find it very difficult to see how a language with an automated theorem prover for an interpreter "doesn't do reasoning". If automated theorem proving is not reasoning, what is?
___________________
[1] More precisely, the resolution closure.
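The narrowing effect described above can be sketched in a few lines of Python. This is a toy illustration, not real Prolog: the term representation, the helper names, and the little knowledge base are all invented for the example.

```python
# Toy sketch of unification-driven search: variables are capitalized
# strings, compound terms are tuples like ("parent", "martha", "timmy").

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    # Follow variable bindings in substitution s.
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    # Return an extended substitution if a and b unify, else None.
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

# A tiny knowledge base of facts.
facts = [
    ("parent", "martha", "timmy"),
    ("parent", "brian", "sarah"),
]

def solve(goal):
    # Unification rules out non-matching clauses immediately, so the set
    # of surviving candidates only ever shrinks -- no enumeration of all
    # possible ground instances is needed.
    for fact in facts:
        s = unify(goal, fact, {})
        if s is not None:
            yield {v: walk(v, s) for v in goal if is_var(v)}

print(list(solve(("parent", "Who", "timmy"))))  # [{'Who': 'martha'}]
```

Note how the second fact is discarded the moment `timmy` fails to unify with `sarah`: nothing downstream of that clause is ever explored, which is the opposite of brute-forcing every option.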
Prolog was introduced to capture natural language - in a logic/symbolic way that didn't prove as powerful as today's LLMs, for sure, but this still means there is a large corpus of direct English-to-Prolog mappings available for training, and the mapping rules are much more straightforward by design. You can pretty much translate simple sentences 1:1 into Prolog clauses, as in the classic boring example
% "the boy eats the apple"
eats(boy, apple).
This is being taken advantage of in Prolog code generation using LLMs. In the Quantum Prolog example, the LLM is also instructed not to generate search strategies/algorithms but just planning domain representation and action clauses for changing those domain state clauses which is natural enough in vanilla Prolog.
The results are quite a bit more powerful, close to end user problems, and upward in the food chain compared to the usual LLM coding tasks for Python and JavaScript such as boilerplate code generation and similarly idiosyncratic problems.
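The near-1:1 mapping for simple subject-verb-object sentences can be sketched with a toy translator. This is purely illustrative: the helper name and the stopword list are my own, and real English-to-Prolog translation needs an actual parser.

```python
# Toy sketch: map a simple SVO English sentence to a Prolog fact.
# Only handles the trivial "the boy eats the apple" shape.

STOPWORDS = {"the", "a", "an"}

def sentence_to_clause(sentence):
    # Keep only content words; for a simple SVO sentence this leaves
    # [subject, verb, object], which maps to verb(subject, object).
    words = [w for w in sentence.lower().rstrip(".").split()
             if w not in STOPWORDS]
    subject, verb, obj = words
    return f"{verb}({subject}, {obj})."

print(sentence_to_clause("The boy eats the apple"))  # eats(boy, apple).
```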
It's a Horn clause resolver... that's exactly the kind of reasoning that LLMs are bad at. I have no idea how to graft Prolog to an LLM, but if you can graft any programming language to it, you can graft Prolog more easily.
Also, that you push Python and JavaScript makes me think you don't know many languages. Those are terrible languages to try to graft to anything. Just because you only know those two languages doesn't make them good choices for something like this. Learn a real language, Physicist.
This sparked a really fascinating discussion, I don't know if anyone will see this but thanks everyone for sharing your thoughts :)
I understand your point - to an LLM there's no meaningful difference between one Turing-complete language and another. I'll concede that I don't have a counter-argument, and perhaps it doesn't need to be Prolog - though my hunch is that LLMs tend to give better results when using purpose-built tools for a given type of problem.
The only loose end I want to address is the idea of "doing reasoning."
This isn't an AGI proposal (I was careful to say "good at writing prolog") just an augmentation that (as a user) I haven't yet seen applied in practice. But neither have I seen it convincingly dismissed.
The idea is the LLM would act like an NLP parser that gradually populates a prolog ontology, like building a logic jail one brick at a time.
The result would be a living breathing knowledge base which constrains and informs the LLM's outputs.
The punchline is that I don't even know any prolog myself, I just think it's a neat idea.
Of course it does "reasoning" - what do you think reasoning is? From a quick google: "the action of thinking about something in a logical, sensible way". Prolog searches through a space of logical propositions (constraints) and finds conditions that lead to solutions (if any exist).
(a) Try adding another 100 or 1000 interlocking propositions to your problem. It will find solutions or tell you none exists.
(b) You can verify the solutions yourself. You don't get that with imperative descriptions of problems.
(c) Good luck sandboxing Python or JavaScript with the threat of prompt injection still unsolved.
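Point (b) - that a declarative problem statement doubles as its own checker - can be sketched in a few lines. The tiny constraint set here is invented for illustration; the point is that whoever (or whatever) produced the solution, anyone can independently re-run the constraints against it.

```python
# Sketch: the declarative constraints ARE the verifier. A solution found
# by any means (a solver, an LLM, a guess) can be checked against them.

constraints = [
    lambda a: a["x"] + a["y"] == 10,
    lambda a: a["x"] - a["y"] == 2,
    lambda a: a["x"] > 0,
]

def verify(assignment):
    # Independent of how the solution was found, re-check every constraint.
    return all(c(assignment) for c in constraints)

print(verify({"x": 6, "y": 4}))  # True
print(verify({"x": 5, "y": 5}))  # False
```

With an imperative solution you'd have to audit the algorithm itself; here the specification and the checker are the same object.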
Contrary to what everyone else is saying, I think you're completely correct. Using it for AI or "reasoning" is a hopeless dead end, even if people wish otherwise. However I've found that Prolog is an excellent language for expressing certain types of problems in a very concise way, like parsers, compilers, and assemblers (and many more). The whole concept of using a predicate in different modes is actually very useful in a pragmatic way for a lot of problems.
When you add in the constraint solving extensions (CLP(Z) and CLP(B) and so on) it becomes even more powerful, since you can essentially mix vanilla Prolog code with solver tools.
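As a rough illustration of the kind of domain pruning CLP(Z) performs: the real library is far more general (arbitrary arithmetic constraints, labeling strategies, etc.); this hand-rolled sketch only handles one fixed pair of equations, but it shows propagation shrinking domains to a fixpoint without search.

```python
# Hand-rolled sketch of CLP-style domain propagation (illustration only).
# Solve X + Y = 10, X - Y = 2 over 1..9 by pruning to a fixpoint.

def propagate(dx, dy):
    # Keep only values that still have a compatible partner in the
    # other variable's domain.
    dx2 = {x for x in dx if any(x + y == 10 and x - y == 2 for y in dy)}
    dy2 = {y for y in dy if any(x + y == 10 and x - y == 2 for x in dx)}
    return dx2, dy2

dx, dy = set(range(1, 10)), set(range(1, 10))
while True:
    ndx, ndy = propagate(dx, dy)
    if (ndx, ndy) == (dx, dy):  # fixpoint reached
        break
    dx, dy = ndx, ndy

print(dx, dy)  # {6} {4}
```

In real CLP(Z) this pruning interleaves with vanilla Prolog backtracking, which is exactly the "mix vanilla Prolog code with solver tools" point above.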
Even in your example (which is obviously not a correct representation of Prolog), that code will run orders of magnitude faster, and with 100% reliability, compared to the far weaker reasoning capabilities of an LLM.
Everything you've written here is an invalid over-reduction, I presume because you aren't terribly well versed in Prolog. Your simplification is not only outright erroneous in a few places, but essentially excludes every single facet of Prolog that makes it a Turing-complete logic language. What you are presenting Prolog as would be like presenting C as a language where all you can do is perform operations on constants, without even being able to define functions or preprocessor macros. To assert that's what C is would be completely and obviously ludicrous, but not so many people are familiar enough with Prolog or its underlying formalisms to call you out on this.
Firstly, we must set one thing straight: Prolog definitionally does reasoning. Formal reasoning. This isn't debatable, it's a simple fact. It implements resolution (a computationally friendly inference rule over computationally-friendly logical clauses) that's sound and refutation complete, and made practical through unification. Your example is not even remotely close to how Prolog actually works, and excludes much of the extra-logical aspects that Prolog implements. Stripping it of any of this effectively changes the language beyond recognition.
> Plain Prolog's way of solving reasoning problems is effectively:
No. There is no cognate to what you wrote anywhere in how Prolog works. What you have here doesn't even qualify as a forward chaining system, though that's what it's closest to given it's somewhat how top-down systems work with their ruleset. For it to even approach a weaker forward chaining system like CLIPS, that would have to be a list of rules which require arbitrary computation and may mutate the list of rules it's operating on. A simple iteration over a list testing for conditions doesn't even remotely cut it, and again that's still not Prolog even if we switch to a top-down approach by enabling tabling.
> You hard code some options
A Prolog knowledgebase is not hardcoded.
> write a logical condition with placeholders
A Horn clause is not a "logical condition", and those "placeholders" are just ordinary logic variables.
> and Prolog brute-forces every option in every placeholder.
Absolutely not. It traverses a graph proving things, and when it cannot prove something it backtracks and tries a different route, or otherwise fails. This is of course without getting into impure Prolog, or the extra-logical aspects it implements. It's a fundamentally different foundation of computation which is entirely geared towards formal reasoning.
> And that might have been nice compared to Pascal in 1975, it's not so different to modern garbage collected high level scripting languages.
It is extremely different, and the only reason you believe this is because you don't understand Prolog in the slightest, as indicated by the unsoundness of essentially everything you wrote. Prolog is as different from something like Javascript as a neural network with memory is.
IIRC IBM’s Watson (the one that played Jeopardy) used primitive NLP (imagine!) to form a tree of factual relations and then passed this tree to construct Prolog queries that would produce an answer to a question. One could imagine that by swapping out the NLP part with an LLM, the model would have 1. a more thorough factual basis against which to write Prolog queries and 2. a better understanding of the queries it should write to get at answers (for instance, it may exploit more tenuous relations between facts than primitive NLP).
Not so "primitive" NLP. Watson started with what its team called a "shallow parse" of a sentence using a dependency grammar and then matched the parse to an ontology consisting of good old-fashioned frames [1]. That's not as "advanced" as an LLM but far more reliable.
I believe the ontology was indeed implemented in Prolog but I forget the architecture details.
______________
[1] https://en.wikipedia.org/wiki/Frame_(artificial_intelligence...
We've done this, and it works. Our setup is to have some agents that synthesize Prolog and other types of symbolic and/or probabilistic models. We then use these models to increase our confidence in LLM reasoning and iterate if there is some mismatch. Making synthesis work reliably on a massive set of queries is tricky, though.
Imagine a medical doctor or a lawyer. At the end of the day, their entire reasoning process can be abstracted into some probabilistic logic program which they synthesize on-the-fly using prior knowledge, access to their domain-specific literature, and observed case evidence.
There is a growing body of publications exploring various aspects of synthesis; the references included in [1] are a good starting point.
[1] https://proceedings.neurips.cc/paper_files/paper/2024/file/8...
I am once again shilling the idea that someone should find a way to glue Prolog and LLMs together for better reasoning agents.
There are definitely people researching ideas here. For my own part, I've been doing a lot of work with Jason [1], a very Prolog-like logic language / agent environment, with an eye towards how to integrate that with LLMs (and "other").
Nothing specific / exciting to share yet, but just thought I'd point out that there are people out there who see potential value in this sort of thing and are investigating it.
[1]: https://github.com/jason-lang/jason
>>Prolog doesn't look like javascript or python so:
Think of it this way: in Python and JavaScript you write code, and to test if it's correct you write unit test cases.
A Prolog program is basically a bunch of test cases/unit test cases; you write it, and then tell the Prolog compiler, 'write code that passes these test cases'.
That is, you are writing the program specification, or tests that, if passed, would represent a solution to the problem. The job of the compiler is to write the code that passes these test cases.
It's been a while since I have done web dev, but web devs back then were certainly not scared of any language. Web devs are like the ultimate polyglots. Or at least they were. I was regularly bouncing around between a half dozen languages when I was doing pro web dev. It was web devs who popularized numerous different languages to begin with simply because delivering apps through a browser allowed us a wide variety of options.
The core idea of DeepClause is to use a custom Prolog-based DSL together with a metainterpreter implemented in Prolog that can keep track of execution state and implicitly manage conversational memory for an LLM. The DSL itself comes with special predicates that are interpreted by an LLM. "Vague" parts of the reasoning chain can thus be handed off to a (reasonably) advanced LLM.
Would love to collect some feedback and interesting ideas for possible applications.
As someone who did deep learning research 2017-2023, I agree. "Neurosymbolic AI" seems very obvious, but funding has just been getting tighter and more restrictive towards the direction of figuring out things that can be done with LLMs. It's like we collectively forgot that there's more than just txt2txt in the world.
YES! I've run a few experiments on classical logic problems, and an LLM can spit out Prolog programs to solve the puzzle. Try it yourself: ask an LLM to write some Prolog to solve some problem, then copy-paste it into https://swish.swi-prolog.org/ and see if it runs.
Can't find the links right now, but there were some papers on llm generating prolog facts and queries to ground the reasoning part. Somebody else might have them around.
There's a lot of work in this area. See e.g., the LoRP paper by Di et al. There's also a decent amount of work on the other side too, i.e., using LLMs to convert Prolog reasoning chains back into natural language.
I've been thinking a lot about this, and I want to build the following experiment, in case anyone is interested:
The experiment is about putting an LLM to play plman[0] with and without prolog help.
plman is a pacman-like game for learning Prolog. It was written by professor Francisco J. Gallego from Alicante University to teach the logic course in computer science.
Basically you write a solution in Prolog for a map, and plman executes it step by step so you can visually see the pacman (plman) moving around the maze, eating and avoiding ghosts and other traps.
There is an interesting dynamic about finding keys for doors and timing based traps.
There are different levels of complexity, and you can also easily write your own maps, since they are just ascii characters in a text file.
I thought this was the perfect project to visually explain to my coworkers the limits of LLM "reasoning" and what symbolic reasoning is.
So far I hooked up the ChatGPT API to try to solve scenarios, and it fails even with a substantial number of retries. That's what I was expecting.
The next thing would be to write an MCP tool so that the LLM can navigate the problem by using the tool, but here is where I need guidance.
I'm not sure about the best dynamic to prove the usefulness of prolog in a way that goes beyond what context retrieval or db query could do.
I'm not sure if the LLM should write the Prolog solution. I want to avoid building something trivial, like the LLM asking for the steps already solved, so my intuition is telling me that I need some sort of virtual-joystick MCP to hide Prolog from the LLM, so the LLM could have access to the current state of the screen and ask questions like: what would my position be if I move up?
What's the position of the ghost in the next move? Where is the door relative to my current position?
I don't have the academic background to design this experiment properly. It would be great if anyone is interested in working together on this, or giving me some advice.
Prior work pending on my reading list:
- LoRP: LLM-based Logical Reasoning via Prolog [1]
- A Pipeline of Neural-Symbolic Integration to Enhance Spatial Reasoning in Large Language Models [2]
- [0] https://github.com/Matematicas1UA/plman/blob/master/README.m...
- [1] https://www.sciencedirect.com/science/article/abs/pii/S09507...
- [2] https://arxiv.org/html/2411.18564v1
Prolog really is such a fantastic system; if I can justify its usage then I won't hesitate to do so. Most of the time I'll call a language that I find to be powerful a "power tool", but that doesn't apply here. Prolog is beyond a power tool. A one-off bit of experimental tech built by the greatest minds of a forgotten generation. You'd find it deep in the irradiated ruins of a dead city, buried far underground in an easily missed bunker. A supercomputer with the REPL's cursor flickering away in monochrome phosphor. It's sitting there, forgotten. Dutifully waiting for you to jack in.
When I entered university for my Bachelors, I was 28 years old and already worked for 5 or 6 years as a self-taught programmer in the industry. In the first semester, we had a Logic Programming class and it was solely taught in Prolog.
At first, I was mega overwhelmed. It was so different than anything I did before and I had to unlearn a lot of things that I was used to in "regular" programming. At the end of the class, I was a convert! It also opened up my mind to functional programming and mathematical/logical thinking in general.
I still think that Prolog should be mandatory for every programmer. It opens up the mind in such a logical way... Love it.
Unfortunately, I never found an opportunity in the 11 years since then to use it in my professional practice. Or maybe I just missed the opportunities?
Did they teach you how to use DCGs? A few months ago I used EDCGs as part of a de-spaghettification and bug fixing effort to trawl a really nasty 10k loc sepples compilation unit and generate tags for different parts of it. Think ending up with a couple thousand ground terms like:
tag(TypeOfTag, ParentFunction, Line).
Type of tag indicating things like an unnecessary function call, unidiomatic conditional, etc.
I then used the REPL to pull things apart, wrote some manual notes, and then consulted my complete knowledgebase to create an action plan. Pretty classical expert system stuff. Originally I was expecting the bug fixing effort to take a couple of months. Instead: 10 days of Prolog code + 2 days of Prolog interaction + 3 days of sepples weedwacking and adjusting what remained in the plugboard.
Prolog is a great language to learn, but I wouldn't want to use it for anything more than what it's directly good at. Especially the cut operator - that's pretty mind-bending. But once you get good at it, it all just flows. Still, I doubt more than 1% of devs could ever master it, even on an unlimited timeline. It's just much harder than any other type of non-research dev work.
But some parts, like e.g. the cut operator is something I've copied several times over for various things. A couple of prototype parser generators for example - allowing backtracking, but using a cut to indicate when backtracking is an error can be quite helpful.
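The "cut as commit point" idea from the comment above can be sketched in a toy recursive-descent parser. All names and the one-rule grammar are invented for illustration: the point is that an ordinary failure allows backtracking to the next alternative, but failure past the commit point is promoted to a hard error instead of being silently retried.

```python
# Toy sketch of Prolog's cut in a backtracking parser: ValueError is a
# soft failure (backtrackable), ParseError is a committed failure.

class ParseError(Exception):
    pass

def parse_stmt(tokens):
    for alt in (parse_let, parse_expr):
        try:
            return alt(tokens)
        except ParseError:
            raise          # past the cut: committed, propagate the error
        except ValueError:
            continue       # ordinary failure: backtrack to next alternative
    raise ValueError("no alternative matched")

def parse_let(tokens):
    if not tokens or tokens[0] != "let":
        raise ValueError("not a let")  # soft failure, allows backtracking
    # -- the "cut": from here on we are committed to a let-binding --
    if len(tokens) != 4 or tokens[2] != "=":
        raise ParseError(f"malformed let binding: {tokens}")
    return ("let", tokens[1], tokens[3])

def parse_expr(tokens):
    if len(tokens) == 1:
        return ("expr", tokens[0])
    raise ValueError("not an expr")

print(parse_stmt(["let", "x", "=", "1"]))  # ('let', 'x', '1')
print(parse_stmt(["y"]))                   # ('expr', 'y')
```

A malformed `["let", "x", "1"]` raises `ParseError` rather than falling through to `parse_expr`, which is exactly the "backtracking past this point is an error" behavior described above.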
There was a time when the thinking was you could load all the facts into a Prolog engine and it would replace experts like doctors and engineers - expert systems. It didn't work. Now it's a curiosity.
My prolog anecdote: ~2001 my brother and I writing an A* pathfinder in prolog to navigate a bot around the world of Asheron's Call (still the greatest MMORPG of all time!). A formative experience in what can be done with code. Others had written a plugin system (called Decal) in C for the game and a parser library for the game's terrain file format. We took that data and used prolog to write an A* pathfinder that could navigate the world, avoiding un-walkable terrain and even using the portals to shortcut between locations. Good times.
There seems to be an interesting difference between Prolog and conventional (predicate) logic.
In Prolog, anything that can't be inferred from the knowledge base is false. If nothing about "playsAirGuitar(mia)" is implied by the knowledge base, it's false. All the facts are assumed to be given; therefore, if something isn't given, it must be false.
Predicate logic is the opposite: if I can't infer anything about "playsAirGuitar(mia)" from my axioms, it might be true or false. Its truth value is unknown. It's true in some models of the axioms, and false in others. The statement is independent of the axioms.
Deductive logic assumes an open universe, Prolog a closed universe.
It's not really false I think. It's 'no', which is an answer to a question "Do I know this to be true?"
I think there should be room for three values there: true, unprovable, false - where false things are also unprovable. I wonder if Prolog has false, defined as "yes" of the opposite.
> It's not really false I think. It's 'no', which is an answer to a question "Do I know this to be true?"
I don't think so, because in that case both x and not-x could be "no", but in Prolog, if x is "no", then not-x is "yes", even if neither is known to be true. It's not a three-valued logic; it adheres to the law of the excluded middle.
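The closed-world vs open-world contrast discussed above can be sketched in a few lines. This is toy code, not a faithful model of Prolog's negation-as-failure semantics; the knowledge base contents are invented.

```python
# Sketch: closed-world "no" vs a three-valued open-world reading.

kb = {("playsAirGuitar", "jody")}   # everything known to be true

def prolog_query(fact):
    # Closed world: anything not derivable is reported false ("no").
    return fact in kb

def open_world_query(fact, known_false=frozenset()):
    # Open world: absence of a proof leaves the truth value unknown.
    if fact in kb:
        return True
    if fact in known_false:
        return False
    return "unknown"

q = ("playsAirGuitar", "mia")
print(prolog_query(q))       # False -- and by negation as failure...
print(not prolog_query(q))   # ...not-q is True, excluded middle holds
print(open_world_query(q))   # unknown
```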
I recently implemented an eagerly evaluated embedded Prolog dialect in Dart for my game applications. I used SWI documentation extensively to figure out what to implement.
But I think I had the most difficulty designing the interface between the logic code and Dart. I ended up with a way to add "Dart-defined relations", where you provide relations backed dynamically by your ECS or database. State stays in imperative land, rules stay in logic land.
Testing on Queens8, SWI is about 10,000 times faster than my implementation. It's a work of art! But it doesn't have the ease of use in my game dev context as a simple Dart library does.
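For reference, the Queens8 benchmark mentioned above is the classic 8-queens backtracking search. A plain Python sketch of roughly the job such an engine performs (the real engine is general-purpose; this sketch is queens-specific):

```python
# Backtracking search for n-queens: place one queen per row, pruning
# any column that conflicts with a queen already placed.

def queens(n):
    solutions = []

    def place(cols):
        row = len(cols)
        if row == n:
            solutions.append(tuple(cols))
            return
        for col in range(n):
            # A new queen conflicts if it shares a column or a diagonal
            # with any previously placed queen.
            if all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(cols)):
                place(cols + [col])

    place([])
    return solutions

print(len(queens(8)))  # 92
```

The prune-and-backtrack structure is the same shape as Prolog's proof search, which is why the problem embeds so naturally in a logic language.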
Thesis:
1. LLMs are bad at counting the number of r's in strawberry.
2. LLMs are good at writing code that counts letters in a string.
3. LLMs are bad at solving reasoning problems.
4. Prolog is good at solving reasoning problems.
5. ???
6. LLMs are good at writing prolog that solves reasoning problems.
Common replies:
1. The bitter lesson.
2. There are better solvers, ex. Z3.
3. Someone smart must have already tried and ruled it out.
Successful experiments:
1. https://quantumprolog.sgml.net/llm-demo/part1.html
https://en.wikipedia.org/wiki/Wicked_problem
https://arxiv.org/abs/2309.12288
1. web devs are scared of it.
2. not enough training data?
I do remember having to wrestle to get prolog to do what I wanted but I haven't written any in ~10 years.
https://news.ycombinator.com/item?id=45937480
This is a tokenization issue, not an LLM issue.
https://www.symbolica.ai/
Elmore Leonard, on writing. But he might as well have been talking about the cut operator.
At uni I had assignments where we were simply not allowed to use it.
I don't think I ever learned how it can be useful other than feeding the mind.
It's a mind-bending language, and if you want to experience the feeling of learning programming from the beginning again, this would be it.
Advanced Turbo prolog - https://archive.org/details/advancedturbopro0000schi/mode/2u...
Prolog programming for artificial intelligence - https://archive.org/details/prologprogrammin0000brat_l1m9/mo...
Prolog’s Closed-World Assumption: A Journey Through Time - https://medium.com/@kenichisasagawa/prologs-closed-world-ass...
Is Prolog really based on the closed-world assumption? - https://stackoverflow.com/questions/65014705/is-prolog-reall...
Do you have a different use case? I would be open to sharing it on a project- or time-limited basis in exchange for bug reports and feature requests.