WingedBadger commented on Show HN: An A2A-compatible, open-source framework for multi-agent networks   github.com/openagents-org... · Posted by u/snasan
silves89 · a month ago
In the late 90s and early 2000s there was a bunch of academic research into collaborative multi-agent systems. This included things like communication protocols, capability discovery, platforms, and some AI. The classic and over-used example was travel booking -- a hotel booking agent, a flight booking agent, a train booking agent, etc., all collaborating to align time, cost, and location. Cooperative agents could add themselves and their capabilities to the agent community, the potential of the system as a whole would increase, and perhaps there would be cool emergent behaviours that no one had thought of.
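
A toy sketch of that capability-discovery pattern, with invented agent and registry names (not any particular platform's API):

    # Toy capability registry: agents register what they can do, a coordinator
    # discovers them and combines their offers. All names are invented.
    from dataclasses import dataclass

    @dataclass
    class Offer:
        agent: str
        item: str
        cost: float

    class Registry:
        def __init__(self):
            self.providers = {}  # capability name -> list of agent callables

        def register(self, capability, agent_fn):
            self.providers.setdefault(capability, []).append(agent_fn)

        def discover(self, capability):
            return self.providers.get(capability, [])

    registry = Registry()
    registry.register("book_hotel",  lambda city: Offer("hotel_agent",  f"hotel in {city}",  120.0))
    registry.register("book_flight", lambda city: Offer("flight_agent", f"flight to {city}", 300.0))
    registry.register("book_train",  lambda city: Offer("train_agent",  f"train to {city}",   80.0))

    def plan_trip(city):
        # Coordinator: ask every transport-capable agent for an offer, keep the
        # cheapest, and add a hotel offer on top.
        transport = [fn(city) for cap in ("book_flight", "book_train")
                     for fn in registry.discover(cap)]
        hotels = [fn(city) for fn in registry.discover("book_hotel")]
        return [min(transport, key=lambda o: o.cost)] + hotels

    for offer in plan_trip("Paris"):
        print(offer)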

This appears, to me, like an LLM-agent descendant of these earlier multi-agent systems.

I lost track of the research after I left academia -- perhaps someone here can fill in the (considerable) blanks from my overview?

WingedBadger · a month ago
As someone who got into ongoing multi-agent systems (MAS) research relatively recently (~4 years, mostly in distributed optimization), I see two major strands of it, both of which are certainly still in search of the magical "emergence":

There is the formal view of MAS that is a direct extension of older work on cooperative and competitive agents. This tries to model and then rigorously prove emergent properties. I also count "classic" distributed optimization methods with convergence and correctness properties in this area. Maybe the best-known applications of this are coordination algorithms for robot/drone swarms.
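
For a concrete flavour of that formal strand, consider the textbook average-consensus iteration: each agent nudges its value toward its neighbours' values and, for a small enough step size, the whole network provably converges to the mean. A minimal sketch with a made-up graph:

    # Average consensus on a fixed undirected graph:
    #   x_i <- x_i + eps * sum over neighbours j of (x_j - x_i)
    # For eps < 1 / max_degree this converges to the network-wide average.
    neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a small path graph
    x = {0: 10.0, 1: 2.0, 2: 6.0, 3: -4.0}                # each agent's local value
    eps = 0.25                                            # < 1/max_degree (= 1/2 here)

    for step in range(200):
        # synchronous update: every agent uses its neighbours' previous values
        x = {i: xi + eps * sum(x[j] - xi for j in neighbours[i])
             for i, xi in x.items()}

    print(x)  # every agent ends up near the average, 3.5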

Then, as a sibling comment points out, there is the influx of machine learning into the field. A large part of this so far has been multi-agent reinforcement learning (MARL). I see it mostly applied to any "too hard" or "too slow" optimization problem, and in some cases it seems to give impressive results.
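
At its smallest, MARL can be as simple as two independent Q-learners sharing a reward in a repeated coordination game -- a toy sketch with invented payoffs, nothing like a production MARL setup:

    # Two independent Q-learners in a repeated cooperative game: both get reward 2
    # if both pick action 1, reward 1 if both pick action 0, and 0 otherwise.
    import random

    payoff = {(0, 0): 1.0, (1, 1): 2.0, (0, 1): 0.0, (1, 0): 0.0}
    q = [[0.0, 0.0], [0.0, 0.0]]        # q[agent][action], stateless ("bandit") Q-values
    alpha, epsilon = 0.1, 0.1

    def act(agent):
        if random.random() < epsilon:                   # explore
            return random.randrange(2)
        return max((0, 1), key=lambda a: q[agent][a])   # exploit

    for episode in range(5000):
        a0, a1 = act(0), act(1)
        r = payoff[(a0, a1)]                            # shared team reward
        q[0][a0] += alpha * (r - q[0][a0])
        q[1][a1] += alpha * (r - q[1][a1])

    print(q)  # both agents typically end up preferring action 1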

Techniques from both areas are frequently mixed and matched for specific applications -- think of agents running a classic optimization but with some ML-based classification and a local knowledge base. What I see actually being used in the wild at the moment are relatively limited agents, applied to a single optimization task and with frequent human supervision.
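
A rough sketch of that hybrid pattern, with a stubbed-out classifier standing in for the ML part and a greedy heuristic as the "classic" optimization (all names and numbers invented):

    # Hybrid agent sketch: an ML classifier (stubbed here) labels the situation,
    # then a classic greedy optimization over the agent's local knowledge base
    # decides which resources to dispatch.

    def classify_demand(sensor_readings):
        # Stand-in for a trained classifier; here just a threshold on the mean.
        mean = sum(sensor_readings) / len(sensor_readings)
        return "high" if mean > 50 else "low"

    knowledge_base = [                      # the agent's local view of resources
        {"name": "unit_a", "capacity": 30, "cost": 1.0},
        {"name": "unit_b", "capacity": 50, "cost": 2.5},
        {"name": "unit_c", "capacity": 20, "cost": 0.8},
    ]

    def dispatch(demand_label):
        target = 80 if demand_label == "high" else 40
        plan, covered = [], 0
        # Classic greedy: lowest cost per unit of capacity first, until covered.
        for unit in sorted(knowledge_base, key=lambda u: u["cost"] / u["capacity"]):
            if covered >= target:
                break
            plan.append(unit["name"])
            covered += unit["capacity"]
        return plan, covered

    label = classify_demand([62, 55, 71])
    print(label, dispatch(label))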

More recently, LLMs have certainly taken over the MAS term and the corresponding SEO. What this means for the future of the field, I have no idea. It will certainly influence where research funding is allocated. Personally, I find it hard to believe LLMs would solve the classic engineering problems (speed, reliability, correctness) that seem to hold back MAS in more "real world" environments. I assume this will instead push research focus into different applications with higher tolerance for weird outputs. But maybe I just lack imagination.

WingedBadger commented on LLMs are steroids for your Dunning-Kruger   bytesauna.com/post/dunnin... · Posted by u/gridentio
malshe · a month ago
I suspect a majority of my students this semester used LLMs to complete homework assignments. It is really depressing. I spent hours making these assignments, and all they probably did was copy and paste them into ChatGPT. The worst part is when they write to me asking for help, sharing their code, and I can see it was written by LLMs. The errors are mostly there because the assignments occasionally refer to something we did in class. Without that context, LLMs make assumptions and the code fails to generate the exact output. So now I am fixing the parts of the code that some of my students didn't bother to write themselves.

Edit: Added "I suspect" in the beginning as I can't prove it.

WingedBadger · a month ago
Just going by the last two years of university teaching (energy-focused computer science in Germany), I feel like LLMs have already had a devastating effect. There has been a large influx of students who seemingly got through their entire Bachelor's degree with nothing but ChatGPT. The university is slow to adapt and ill-equipped to deal with this.

This is absolutely killing my enjoyment of teaching. There is nothing more disheartening than carefully preparing materials for people to grasp concepts I find extremely interesting, only for them to hand in ChatGPT-generated slop without understanding anything at all. In stark contrast, just a couple of years prior I had quite rewarding projects and discussions with students. I also refuse to give detailed feedback on such "solutions" anymore, because the asymmetry between student effort and my effort is just completely unreasonable.

This development is something very different from the often-quipped comparison to the graphing calculator in maths education. With a graphing calculator you still need to know the mathematical foundations to input the correct things and get the correct results. LLMs are mostly used by just pasting in the exercise of the day.

This is not to say LLMs can't be a useful tool for learning. They absolutely can. But that is not how the majority of students use them... to their own detriment and the detriment of those trying to teach them.

If universities don't adapt to this quickly, then the already weak signal of "university degree implies some amount of competence" will be entirely lost.
