During Web 2.0, we saw similar enthusiasm. Instead of AI agents or blockchain, the thing every modern company exposed was an API. For instance, Gmail and Facebook chat were usable with third-party client apps.
What killed this was not tech, but business. The product wasn't, say, social media; it was ad delivery. And using APIs was seen as bypassing the funnels those companies wanted to control. Today, if you go to a consumer service website, you will generally be met with a login/app wall. Even companies that charge money directly (23andMe is an egregious example) are also data hoarders. Apple is probably a better example. There's no escape.
The point is, protocols are the easy part. If the economics and incentives are the same as yesterday, we will see similar outcomes. Today, the consumer web is adversarial between the provider "platforms", ad delivery, content creators, and the products themselves (i.e., the people who use them).
I = E / K
where I is the intelligence of the system, E is the effectiveness of the system, and K is the prior knowledge.
For example, suppose a math problem is given to two students, and each solves it with the same effectiveness (both get the correct answer in the same amount of time). However, student A happens to have more prior knowledge of math than student B. In this case, the intelligence of B is greater than the intelligence of A, even though they have the same effectiveness. B was able to "figure out" the math without relying on the "tricks" that A already knew.
Now back to the question of whether or not prior knowledge is required. As K approaches 0 (holding E fixed), intelligence approaches infinity. But at K = 0 exactly, intelligence is undefined. Tada! I think that answers the question.
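
To make that concrete, here is a toy calculation in Python. The numbers for E, K_A, and K_B are invented purely for illustration; only the ratios matter:

    # Both students reach the same effectiveness E, but A has more prior
    # knowledge K than B. All values here are arbitrary assumptions.
    E = 10.0           # shared effectiveness: same answer, same time
    K_A = 5.0          # student A's prior knowledge (assumed)
    K_B = 1.0          # student B's prior knowledge (assumed)

    I_A = E / K_A      # 2.0
    I_B = E / K_B      # 10.0 -> B comes out "more intelligent"

    # As K shrinks toward 0 with E fixed, I = E / K grows without bound;
    # at K == 0 the ratio is undefined (division by zero).
    for K in (1.0, 0.1, 0.01):
        print(K, E / K)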
Most LLM benchmarks simply measure effectiveness, not intelligence. I conceptualize LLMs as a person with a photographic memory and a low IQ of 85, who was given 100 billion years to learn everything humans have ever created.
Rearranging the same formula:

IK = E
low intelligence * vast knowledge = reasonable effectiveness
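
Sketched the same way, with two hypothetical profiles whose I and K values are invented only so the products land in the same ballpark:

    # Hypothetical profiles; the numbers are arbitrary assumptions chosen
    # so that the products I * K come out comparable.
    profiles = {
        "llm-like":   {"I": 0.85,  "K": 1000.0},  # low intelligence, vast knowledge
        "human-like": {"I": 100.0, "K": 8.5},     # more intelligence, far less knowledge
    }

    for name, p in profiles.items():
        E = p["I"] * p["K"]                       # effectiveness as the product
        print(name, E)                            # both print 850.0

A benchmark that only observes E cannot tell these two profiles apart, which is the sense in which it measures effectiveness rather than intelligence.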