Readit News
lorepieri commented on A Survey of AI Agent Protocols   arxiv.org/abs/2504.16736... · Posted by u/distalx
klabb3 · 8 months ago
It’s good to see a new fresh push for protocols and interop. However, I don’t think it will do the thing we hope for, for non-technical reasons.

During Web 2.0, we saw similar enthusiasm. Instead of AI agents or blockchain, the hype object was APIs: every modern company exposed one. For instance, Gmail and Facebook chat were usable with third-party client apps.

What killed this was not tech, but business. The product wasn’t, say, social media; it was ad delivery. Using the APIs was considered a bypass of the funnels they wanted to control. Today, if you go to a consumer service website, you will generally be met with a login/app wall. Even companies that charge money directly (23andMe is an egregious example) are also data hoarders. Apple is probably a better example. There’s no escape.

The point is, protocols are the easy part. If the economics and incentives are the same as yesterday, we will see similar outcomes. Today, the consumer web is adversarial between provider “platforms”, ad delivery, content creators, and the products themselves (i.e. the people who use them).

lorepieri · 8 months ago
I really like this analysis. But what about companies that allow agents to interact natively (via API or similar) capturing more of the agent-driven inbound traffic, since agents are optimised to go where interaction is easiest? If people want to use agents, refusing native agent access could mean a lot of lost revenue for those companies.
lorepieri commented on Accelerating scientific breakthroughs with an AI co-scientist   research.google/blog/acce... · Posted by u/Jimmc414
confused_boner · 10 months ago
AI prompting us sounds interesting
lorepieri · 10 months ago
Check Manna.
lorepieri commented on OpenAI O3 breakthrough high score on ARC-AGI-PUB   arcprize.org/blog/oai-o3-... · Posted by u/maurycy
owenpalmer · a year ago
Someone asked if true intelligence requires a foundation of prior knowledge. This is the way I think about it.

I = E / K

where I is the intelligence of the system, E is the effectiveness of the system, and K is the prior knowledge.

For example, a math problem is given to two students, each solving the problem with the same effectiveness (both get the correct answer in the same amount of time). However, student A happens to have more prior knowledge of math than student B. In this case, the intelligence of B is greater than the intelligence of A, even though they have the same effectiveness. B was able to "figure out" the math, without using any of the "tricks" that A already knew.

Now back to the question of whether or not prior knowledge is required. As K approaches 0, intelligence approaches infinity. But when K=0, intelligence is undefined. Tada! I think that answers the question.

Most LLM benchmarks simply measure effectiveness, not intelligence. I conceptualize LLMs as a person with a photographic memory and a low IQ of 85, who was given 100 billion years to learn everything humans have ever created.

IK = E

low intelligence * vast knowledge = reasonable effectiveness
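The two-student example above can be sketched numerically. This is a toy illustration of the I = E/K model with made-up scores; the function name and numbers are hypothetical, not from the original comment:

```python
def intelligence(effectiveness, prior_knowledge):
    """I = E / K: effectiveness normalized by prior knowledge.

    Undefined when K = 0, matching the observation that intelligence
    blows up as K approaches 0 and is undefined at K = 0.
    """
    if prior_knowledge == 0:
        raise ValueError("I is undefined when K = 0")
    return effectiveness / prior_knowledge

# Two students solve the same problem with equal effectiveness,
# but student A had more prior knowledge of the tricks.
student_a = intelligence(effectiveness=10, prior_knowledge=5)
student_b = intelligence(effectiveness=10, prior_knowledge=2)
assert student_b > student_a  # B "figured it out" with less, so higher I

# The LLM analogy, rearranged as I * K = E:
# low intelligence times vast knowledge still yields high effectiveness.
llm_effectiveness = 0.1 * 1000
```

Under this framing, a benchmark that only reports E cannot distinguish student A from student B without also accounting for K.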

lorepieri · a year ago
There should also be a factor for resource consumption. See here: https://lorenzopieri.com/pgii/

u/lorepieri

Karma: 241
Cake day: March 20, 2018
About
https://lorenzopieri.com/