Readit News
LakshyAAAgrawal commented on A Comprehensive Survey of Self-Evolving AI Agents [pdf]   arxiv.org/abs/2508.07407... · Posted by u/SerCe
tlarkworthy · 18 days ago
Recently tried out the new GEPA algorithm for prompt evolution, with great results. I think using LLMs to write their own prompts and analyze their own trajectories is pretty neat once appropriate guardrails are in place.

https://arxiv.org/abs/2507.19457

https://observablehq.com/@tomlarkworthy/gepa

I guess GEPA is still a preprint and predates this survey, but I recommend taking a look due to its simplicity.

LakshyAAAgrawal · 17 days ago
Dear Tom, thanks a lot for trying out GEPA and writing about your experience on your blog!
LakshyAAAgrawal commented on GEPA: Reflective Prompt Evolution Can Outperform Reinforcement Learning   twitter.com/LakshyAAAgraw... · Posted by u/LakshyAAAgrawal
LakshyAAAgrawal · a month ago
Large language models (LLMs) are increasingly adapted to downstream tasks via reinforcement learning (RL) methods like Group Relative Policy Optimization (GRPO), which often require thousands of rollouts to learn new tasks. We argue that the interpretable nature of language can often provide a much richer learning medium for LLMs, compared with policy gradients derived from sparse, scalar rewards. To test this, we introduce GEPA (Genetic-Pareto), a prompt optimizer that thoroughly incorporates natural language reflection to learn high-level rules from trial and error. Given any AI system containing one or more LLM prompts, GEPA samples system-level trajectories (e.g., reasoning, tool calls, and tool outputs) and reflects on them in natural language to diagnose problems, propose and test prompt updates, and combine complementary lessons from the Pareto frontier of its own attempts. As a result of GEPA's design, it can often turn even just a few rollouts into a large quality gain. Across four tasks, GEPA outperforms GRPO by 10% on average and by up to 20%, while using up to 35x fewer rollouts. GEPA also outperforms the leading prompt optimizer, MIPROv2, by over 10% across two LLMs, and demonstrates promising results as an inference-time search strategy for code optimization.
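The reflect-and-select loop the abstract describes can be illustrated with a toy sketch. This is not the authors' implementation: the `mutate` function here is a stand-in for the LLM reflection step (which in GEPA proposes prompt edits in natural language), and the per-task score lists are hypothetical. It only shows the Pareto-frontier bookkeeping over candidate prompts.

```python
import random

def is_dominated(scores, others):
    """True if some other candidate scores >= on every task
    and strictly > on at least one."""
    return any(
        all(o[t] >= scores[t] for t in range(len(scores)))
        and any(o[t] > scores[t] for t in range(len(scores)))
        for o in others if o is not scores
    )

def pareto_frontier(candidates):
    """Keep candidates whose per-task score vectors are not dominated."""
    score_lists = [c["scores"] for c in candidates]
    return [c for c in candidates if not is_dominated(c["scores"], score_lists)]

def mutate(prompt, feedback):
    # stand-in for LLM reflection: fold a "lesson" from feedback into the prompt
    return prompt + f" [lesson: {feedback}]"

def gepa_step(pool, evaluate, feedback_fn, rng):
    """One step: sample a frontier candidate, reflect, propose, keep if useful."""
    parent = rng.choice(pareto_frontier(pool))
    child_prompt = mutate(parent["prompt"], feedback_fn(parent))
    child = {"prompt": child_prompt, "scores": evaluate(child_prompt)}
    # keep the child only if the parent does not dominate it
    if not is_dominated(child["scores"], [parent["scores"]]):
        pool.append(child)
    return pool

# Toy usage: one "task" rewards incorporated lessons, the other brevity.
rng = random.Random(0)

def evaluate(prompt):
    return [prompt.count("lesson"), max(0, 3 - prompt.count(" ") // 10)]

def feedback_fn(candidate):
    return "be more explicit about the steps"

pool = [{"prompt": "Answer the question.",
         "scores": evaluate("Answer the question.")}]
for _ in range(3):
    pool = gepa_step(pool, evaluate, feedback_fn, rng)
```

The key design point mirrored here is that candidates complementary on different tasks both survive on the frontier, so lessons learned in different regions of the task distribution can later be combined.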
LakshyAAAgrawal commented on LOTUS makes LLM-powered data processing fast and easy (as easy as Pandas)   github.com/guestrin-lab/l... · Posted by u/lianapat
LakshyAAAgrawal · 8 months ago
I have always wanted a framework like this. It's really amazing how a few systems insights can have such a massive impact both in terms of cost and runtime.
LakshyAAAgrawal commented on Multilspy: Building a common LSP client handtuned for all Language servers   github.com/microsoft/mult... · Posted by u/LakshyAAAgrawal
keeda · 8 months ago
Very cool! I wish this existed a couple years ago. I was hacking with LSP and a Java language server for a code migration prototype, and I could not even get past the initialization stage. The server (must have been the Eclipse one) didn't seem to respond to the initialization params as expected, and I could not find the right documentation or code snippets to figure it out. I eventually implemented the prototype using OpenRewrite. With this, I'm tempted to revive that idea again.
LakshyAAAgrawal · 8 months ago
This was precisely the situation I was in! Luckily, the Eclipse JDT.LS contributors are super helpful, and provided me with a lot of their time answering all my questions, which I have now tried to document in as much detail as possible in the Eclipse part of multilspy. My sincere hope is that multilspy can serve as the repository for all other language servers.

I hope you revive "that idea", and I would be glad to help in any way possible with multilspy to get you through!
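For context on the initialization handshake discussed above: an LSP client's first message is a JSON-RPC `initialize` request wrapped in a `Content-Length` header. The sketch below shows only that base protocol, not multilspy's API; the workspace path is hypothetical, and real servers such as Eclipse JDT.LS typically also require server-specific `initializationOptions` on top of this minimal payload, which is exactly the per-server hand-tuning multilspy packages up.

```python
import json

def lsp_frame(payload: dict) -> bytes:
    """Wrap a JSON-RPC message in the LSP base-protocol header."""
    body = json.dumps(payload).encode("utf-8")
    header = f"Content-Length: {len(body)}\r\n\r\n".encode("ascii")
    return header + body

# A minimal `initialize` request (hypothetical rootUri).
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "processId": None,                     # or the client's PID
        "rootUri": "file:///path/to/project",  # workspace root
        "capabilities": {},                    # advertise no optional features
    },
}

frame = lsp_frame(initialize_request)
```

A server that expects more than this (extended capabilities, `initializationOptions`, specific client settings) will stall or reply unexpectedly at exactly this stage, which matches the failure mode described above.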

LakshyAAAgrawal commented on Multilspy: Building a common LSP client handtuned for all Language servers   github.com/microsoft/mult... · Posted by u/LakshyAAAgrawal
luisgs · 8 months ago
Great project!
LakshyAAAgrawal · 8 months ago
Thank you very much!
LakshyAAAgrawal commented on Multilspy: Building a common LSP client handtuned for all Language servers   github.com/microsoft/mult... · Posted by u/LakshyAAAgrawal
jgalt212 · 8 months ago
Oh, no. People learn English by watching American TV shows and speak in similar patterns. What if people learn to write by reading LLM output, and write like LLMs? No bueno, to use a vernacular.
LakshyAAAgrawal · 8 months ago
I would like to believe that I have not overfit myself to writing like LLMs. You can find some of my pre-LLM writing at https://medium.com/@LakshyAAAgrawal/tweak-your-lubuntu-appea...!
LakshyAAAgrawal commented on Multilspy: Building a common LSP client handtuned for all Language servers   github.com/microsoft/mult... · Posted by u/LakshyAAAgrawal
LakshyAAAgrawal · 8 months ago
To be very honest, this is one of my first posts for which I did not use any writing tool whatsoever (not even a spellcheck), since I was typing directly into the HN textbox and don't typically use extensions.

u/LakshyAAAgrawal

Karma: 87 · Cake day: July 15, 2018