To be clear, my amusement isn't that I find this technique useless for the purpose it was created for, but that 40 years later, in pursuit of the advancement of AI, we find ourselves somewhat back where we already were; albeit in a more semi-automated fashion, as someone still has to create the underlying rule set.
I do feel that the introduction of generative neural network models in both natural language and multi-media creation has been a tremendous boon for the advancement of AI; it just amuses me to see that which was old is new again.
How does automated reasoning actually check a response against the set of rules without using ML? Wouldn't it still need a language model to compare the response to the rule?
As I understand it, a natural-language question, e.g. "What is the refund policy?", gets matched against the formalized contract, and the relevant bit of the contract gets translated back into natural language deterministically; rough sketch of what I mean below. At least this is the way I'd do it, but not sure how it actually works.
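A minimal sketch of the kind of deterministic lookup I have in mind, assuming the policy has already been formalized by hand into structured rules; the rule set, keywords, and templates here are all hypothetical, not how any actual product does it:

```python
# Hypothetical sketch: deterministic rule lookup over a hand-formalized policy.
# No ML involved: matching is plain keyword overlap, and the answer is rendered
# from a fixed template attached to each rule.

RULES = [
    {
        "id": "refund-30-days",
        "keywords": {"refund", "return", "returns"},
        "template": "Refunds are available within {days} days of purchase.",
        "params": {"days": 30},
    },
    {
        "id": "shipping-standard",
        "keywords": {"shipping", "delivery"},
        "template": "Standard shipping takes {days} business days.",
        "params": {"days": 5},
    },
]


def answer(question: str) -> str:
    """Pick the rule whose keywords overlap the question the most,
    then fill in its template. Falls back if nothing matches."""
    words = set(question.lower().split())
    best = max(RULES, key=lambda r: len(r["keywords"] & words))
    if not best["keywords"] & words:
        return "No matching policy rule found."
    return best["template"].format(**best["params"])


print(answer("What is the refund policy?"))
# -> Refunds are available within 30 days of purchase.
```

Obviously a real system would do something far more principled than keyword overlap (e.g. checking a candidate answer against a formal specification), but the point is that once the rules are formalized, the checking step itself can be deterministic.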
I couldn't agree more with this take:
> Our demand for stimulating content is being overtaken by supply. Analogously, with AI, we might be in a world where scientific progress is accelerated beyond our wildest dreams, where we have more answers than questions, and where we cannot even process the set of answers available to us.
Curious to learn more about the prompt engineering takeaways here. Was feeding more context (chapters of textbooks, bits of papers, documentation) helpful? It does seem like layering information and being very precise helps a lot. Eerily like working with interns.