nithril · 2 months ago
The same day, a post on reddit was about: "We built 3B and 8B models that rival GPT-5 at HTML extraction while costing 40-80x less - fully open source" [1].

Not fully equivalent to what Skyvern is doing, but still an interesting approach.

[1] https://www.reddit.com/r/LocalLLaMA/comments/1o8m0ti/we_buil...

suchintan · 2 months ago
This is really cool. We might integrate this into Skyvern actually - we've been looking for a faster HTML extraction engine

Thanks for sharing!

_pdp_ · 2 months ago
This is exactly the direction I am seeing agents go. They should be able to write their own tools, and we are soon launching something along those lines.

That being said...

LLMs are amazing at some coding tasks and fail miserably at others. My hypothesis is that there is a practical limit to how many concepts an LLM can take into account at once, no matter the context window, given current model architectures.

For a long time I wanted to find a litmus test to measure this, and I think I found one: an easy-to-understand programming problem that fits in a single file, yet is complex enough. I have not found a single LLM able to build a solution without careful guidance.

I wrote more about this here if you are interested: https://chatbotkit.com/reflections/where-ai-coding-agents-go...

JimDabell · 2 months ago
> For a long time I wanted to find a litmus test to measure this, and I think I found one: an easy-to-understand programming problem that fits in a single file, yet is complex enough. I have not found a single LLM able to build a solution without careful guidance.

Plan for solving this problem:

- Build a comprehensive design system with AI models

- Catalogue the components it fails on (like yours)

- These components are the perfect test cases for hiring challenges (immune to “cheating” with AI)

- The answers to these hiring challenges can be used as training data for models

- Newer models can now solve these problems

- You can vary this by framework (web component / React / Vue / Svelte / etc.) or by version (React v18 vs React v19, etc.)

What you’re doing with this is finding the exact contours of the edge of AI capability, then building a focused training dataset to push past those boundaries. Also a Rosetta Stone for translating between different frameworks.

I put a brain dump about the bigger picture this fits into here:

https://jim.dabell.name/articles/2025/08/08/autonomous-softw...

Groxx · 2 months ago
Also training data quality. They are horrifyingly bad at concurrent code in general, in my experience, and looking at most concurrent code in existence... yeah, I can see why.
disgruntledphd2 · 2 months ago
The really depressing part about LLMs (and the limitations of ML more generally) is that humans are really bad at formal logic, which is essentially what programming is. Instead of continuing down the path of building machines that make it harder for us to get things wrong, we decided to toss every open piece of code/text in existence into a big machine that reproduces those patterns non-deterministically, and to use that to build more programs.

One can see the results in fields where most code is terrible (data science is where I see this most, as it's what I mostly do), though most people don't realise it. I assume the same happens for stuff like frontend, where I don't see the badness because I'm not an expert.

Grimblewald · 2 months ago
Or when code is fully vectorizable, they default to using loops, even if explicitly told not to use loops. Code an LLM wrote for a fairly straightforward problem took 18 minutes to run.

My own solution? 1.56 seconds. I consider myself to be at an intermediate skill level, and while LLMs are useful, they likely won't replace any but the least talented programmers. Even then, I'd value a human with critical thinking paired with an LLM over an even more competent LLM.
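The loop-vs-vectorization gap described above is easy to demonstrate. A minimal sketch (the pairwise-distance task here is a hypothetical stand-in, not the commenter's actual problem): the same computation written as nested Python loops versus NumPy broadcasting.

```python
import numpy as np

def pairwise_l2_loop(a, b):
    # Naive double loop: the shape LLM output often takes.
    out = np.empty((len(a), len(b)))
    for i in range(len(a)):
        for j in range(len(b)):
            out[i, j] = np.sqrt(np.sum((a[i] - b[j]) ** 2))
    return out

def pairwise_l2_vec(a, b):
    # Fully vectorized via broadcasting: (n, 1, d) - (1, m, d) -> (n, m, d).
    diff = a[:, None, :] - b[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

rng = np.random.default_rng(0)
a, b = rng.normal(size=(200, 3)), rng.normal(size=(300, 3))
assert np.allclose(pairwise_l2_loop(a, b), pairwise_l2_vec(a, b))
```

Both produce identical results; on large inputs the vectorized version is typically orders of magnitude faster because the inner loops run in compiled code rather than the Python interpreter.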

CaptainOfCoit · 2 months ago
Codex (GPT-5) + Rust (with or without Tokio) seems to work out well for me, asking it to run the program and validate everything as it iterates on a solution. I've used the same workflow with Python programs too and seems to work OK, but not as well as with Rust.

Just for curiosity's sake, what language have you been trying to use?

erichocean · 2 months ago
In my experience, because the Clojure concurrency model is just incredibly sane and easy to get right, LLMs have no difficulty with it.
meowface · 2 months ago
With the upcoming release of Gemini 3.0 Pro, we might see a breakthrough for that particular issue. (Those are the rumors, at least.) I'm sure it's not fully solved, but possibly greatly improved.
whinvik · 2 months ago
I feel like this is how normal work is. When I have to figure out how to use a new app/API, etc., I go through an initial period where I am just clicking around, shouting into the ether, until I get the hang of it.

And then the third or fourth time it's automatic. It's weird, but sometimes I feel like the best way to make agents work is to meta-think about how I myself work.

suchintan · 2 months ago
I have a 2yo and it's been surreal watching her learn the world. It deeply resembles how LLMs learn and think. Crazy
Retric · 2 months ago
Odd, I've been struck by how differently LLMs and kids learn the world.

You don’t get that whole uncanny valley disconnect do you?

goatlover · 2 months ago
How so? Your kid has a body that interacts with the physical world. An LLM is trained on terabytes of text, then modified by human feedback and rules to be a useful chatbot for all sorts of tasks. I don't see the similarity.
haskellshill · 2 months ago
> It deeply resembles how LLMs learn and think

What? LLMs don't think nor learn in the sense humans do. They have absolutely no resemblance to a human being. This must be the most ridiculous statement I've read this year

melagonster · 2 months ago
I am sorry, but you are scoffing at the humanity of your kid; you know that, right?
pennaMan · 2 months ago
Yes, it is easy. LLMs have reduced my maintenance work on scraping tasks I manage (lots of specialized high-traffic, ad-filled sites) by 99%.

What used to be a constant, almost daily chore, with them breaking all the time at random intervals, is now a self-healing system that rarely ever fails.
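The "self-healing" pattern described here can be sketched roughly as: try the fast, deterministic extraction first; only when it breaks, ask an LLM to propose a fix and retry. This is a hypothetical illustration, not the commenter's actual system; `ask_llm_for_selector` stands in for a real LLM call and here just returns a hard-coded answer.

```python
from dataclasses import dataclass

@dataclass
class Extractor:
    selector: str  # marker the scraper relies on (toy stand-in for a CSS selector)

def extract(html: str, ex: Extractor):
    # Deterministic path: find the marker and read the value after it.
    start = html.find(ex.selector)
    if start == -1:
        return None
    start += len(ex.selector)
    return html[start:html.find("<", start)]

def ask_llm_for_selector(html: str) -> str:
    # Stand-in for an LLM call that inspects the changed markup and
    # proposes an updated selector. Hard-coded for this sketch.
    return '<span class="price-v2">'

def self_healing_extract(html: str, ex: Extractor):
    value = extract(html, ex)
    if value is None:  # the site changed; heal instead of failing
        ex.selector = ask_llm_for_selector(html)
        value = extract(html, ex)
    return value

old = '<span class="price">42</span>'
new = '<span class="price-v2">42</span>'
ex = Extractor(selector='<span class="price">')
assert self_healing_extract(old, ex) == "42"
assert self_healing_extract(new, ex) == "42"  # healed after the markup change
```

The key design choice is that the expensive LLM call happens only on the failure path, so routine runs stay fast and cheap.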

silver_sun · 2 months ago
Interesting. Could you elaborate? Is there a specific reason that it doesn't do 100% of the work already?
ACCount37 · 2 months ago
One of the uses for AI I'm excited about - maintaining systems, keeping up with the moving targets.
TheTaytay · 2 months ago
Could you elaborate on your setup please?
suchintan · 2 months ago
That's the dream
hamasho · 2 months ago
Off topic, but since the article mentioned improper usage of the DOM, I want to point to the UK government's design system and its accessibility work [1]. It's well documented, and I wish all governments had the same standard. I guess they paid a huge amount of money to consultants and vendors.

[1] https://design-system.service.gov.uk/components/radios/

philipbjorge · 2 months ago
We had a similar realization here at Thoughtful and pivoted towards code generation approaches as well.

I know the authors of Skyvern are around here sometimes -- how do you think about code generation compared with vision-based approaches to agentic browser use like OpenAI's Operator, Claude Computer Use, and Magnitude?

From my POV, the vision-based approaches are superior, but they are less amenable to codegen.

suchintan · 2 months ago
Unrelated, but Thoughtful gave us some very, very helpful feedback early in our journey. We are big fans!
suchintan · 2 months ago
I think they're complementary, and that's the direction we're headed.

We can ask the vision-based models to output why they are doing what they are doing, and fall back to code-based approaches for subsequent runs.
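One way to sketch that complementary setup (purely hypothetical; not Skyvern's implementation): record the vision agent's actions and rationales on the first run, replay them as cheap deterministic steps on later runs, and fall back to the vision agent only when replay breaks.

```python
import json

def vision_agent_step(page_state):
    # Stand-in for an expensive vision-model call; returns an action
    # plus the model's stated rationale ("why").
    return {"action": "click", "target": "#submit", "why": "submit button"}

def record_run(pages):
    # First run: drive everything with the vision agent, persist the trace.
    trace = [vision_agent_step(p) for p in pages]
    return json.dumps(trace)

def replay(trace_json, execute):
    # Later runs: replay the recorded actions deterministically;
    # only call the vision agent again if a step no longer applies.
    for step in json.loads(trace_json):
        if not execute(step):               # e.g. selector no longer matches
            step = vision_agent_step(None)  # fall back to vision
            execute(step)

trace = record_run(["page1"])
replayed = []
replay(trace, lambda step: replayed.append(step["target"]) or True)
assert replayed == ["#submit"]
```

The recorded "why" field is what makes the trace auditable: when replay fails, the rationale tells the fallback agent what the original step was trying to accomplish.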

Ldorigo · 2 months ago
I wonder why the focus on replaying UI interactions, rather than just skipping one step ahead to the underlying network/API calls? I've been playing around with similar ideas a lot recently, and I indeed started out in a similar approach as what is described in the article - but then I realized that you can get much more robust (and faster-executing) automation scripts by having the agents figure out the exact network calls to replay, rather than clicking around in a headless browser.
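The gain from dropping to the network layer is that the automation collapses to a plain HTTP request. A minimal sketch, with a hypothetical endpoint and parameters (the kind of XHR one might capture from browser devtools; nothing here is a real API):

```python
from urllib.request import Request

def build_search_request(query: str, page: int = 1) -> Request:
    # The request the page's search box actually fires, captured once.
    # Replaying it directly skips the headless browser entirely.
    url = f"https://example.com/api/search?q={query}&page={page}"
    return Request(url, headers={"Accept": "application/json"})

req = build_search_request("widgets", page=2)
assert req.full_url == "https://example.com/api/search?q=widgets&page=2"
assert req.get_header("Accept") == "application/json"
```

Compared with replaying clicks, this is faster and immune to cosmetic DOM changes, though it breaks if the API itself changes or requires auth tokens the browser session would have supplied.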
franze · 2 months ago
In AI-first workshops, I now tell them for the last exercise: "no scrapers". The learning is to separate reasoning (AI) from data (which you have to bring). AI-coded scrapers seem logical, but they always fail. Scraping is a scaling issue, not a reasoning challenge. Also, the most interesting websites are not keen on new scrapers.