Posted by u/cschiller 10 months ago
Launch HN: GPT Driver (YC S21) – End-to-end app testing in natural language
Hey HN, we are Chris and Chris from MobileBoost (https://mobileboost.io/). We’re building GPT Driver, an AI-native approach to create and execute end-to-end (E2E) tests on mobile applications. Our solution allows teams to define tests in natural language and prevents test flakiness by taking a visual approach paired with LLM (Large Language Model) reasoning. This helps achieve E2E test coverage with a fraction of the usual effort.

You can watch a brief product walkthrough here: https://www.youtube.com/watch?v=5-Ge2fqdlxc

In terms of trying the product out: since the service is resource-intensive (we provide hosted virtual/real phone instances), we don't currently have a playground available. However, you can see some examples here https://mobileboost.io/showcases and book a demo of GPT Driver testing your app through our website.

Why we built this: working at previous startups and scaleups, we saw how as app teams grew, QA teams would struggle to ensure everything was still working. This caused tension between teams and resulted in bugs making it into production.

You’d expect automated tests to help, but these were a huge effort because only engineers could create the tests, and the apps themselves kept changing—breaking the tests regularly and leading to high maintenance overhead. Functional tests often failed not because of actual app errors, but due to changes like copy updates or modifications to element IDs. This was already a challenge, even before considering the added complexities of multiple platforms, different environments, multilingual UIs, marketing popups, A/B tests, or minor UI changes from third-party authentication or payment providers.

We realized that combining computer vision with LLM reasoning could solve the common flakiness issues in E2E testing. So, we launched GPT Driver—a no-code editor paired with a hosted emulator/simulator service that allows teams to set up test automation efficiently. Our visual + LLM reasoning test execution reduces false alarms, enabling teams to integrate their E2E tests into their CI/CD pipelines without getting blocked.

Some interesting technical challenges we faced along the way:

(1) UI Object Detection from Vision Input: We had to train object detection models (YOLO and Faster R-CNN based) on a subset of the RICO dataset as well as our own dataset to be able to interact accurately with the UI.

(2) Reasoning with Current LLMs: We have to shorten instructions, action history, and screen content during runtime for better results, as handling large amounts of input tokens remains a challenge. We also work with reasoning templates to achieve robust decision-making.

(3) Performance Optimization: We optimized our agentic loop to make decisions in less than 4 seconds. To reduce this further, we implemented caching mechanisms and offer a command-first approach, where our AI agent only takes over when the command fails.
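To illustrate the command-first idea in (3), here is a minimal sketch with placeholder names like `agent` and `cache` rather than GPT Driver's actual API: a cached deterministic command runs first, and the AI agent only steps in when it fails, with its action cached for the next run.

  # Minimal sketch of a "command-first" step runner. Placeholder names,
  # not GPT Driver's real interface.
  from typing import Callable, Optional

  def run_step(instruction: str,
               command: Optional[Callable[[], None]],
               agent,
               cache: dict) -> str:
      if command is not None:
          try:
              command()                      # fast path: no LLM call
              return "passed (command)"
          except Exception:
              pass                           # command broke; let the agent take over
      action = agent.decide(instruction)     # vision + LLM reasoning on the current screen
      action.execute()
      cache[instruction] = action.to_command()  # becomes the fast path on the next run
      return "passed (agent)"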

Since launching GPT Driver, we’ve seen adoption by technical teams, both with and without dedicated QA roles. Compared to code-based tests, the core benefit is the reduction of both the manual work and the time required to maintain effective E2E tests. This approach is particularly powerful for apps with a lot of dynamic screens and content, such as Duolingo, which we have been working with for the past couple of months. Additionally, the tests can now also be managed by non-engineers.

We’d love to hear about your experiences with E2E test automation—what approaches have or haven’t worked for you? What features would you find valuable?

msoad · 10 months ago
I work in this space. We manage thousands of e2e tests. The pain has never been in writing the tests. Frameworks like Playwright are great at the UX. And having code editors like Cursor makes it even easier to write the tests. Now, if I could show Cursor the browser, it would be even better, but that doesn’t work today since most multimodal models are too slow to understand screenshots.

It used to be that the frontend was very fragile. XVFB, Selenium, ChromeDriver, etc., used to be the cause of pain, but recently the frontend frameworks and browser automation have been solid. Headless Chrome hardly lets us down.

The biggest pain in e2e testing is that tests fail for reasons that are hard to understand and debug. This is a very, very difficult thing to automate and requires AGI-level intelligence to really build a system that can go read the logs of some random service deep in our service mesh to understand why an e2e test fails. When an e2e test flakes, in a lot of cases we ignore it. I have been in other orgs where this is the case too. I wish there was a system that would follow up and generate a report that says, “This e2e test failed because service XYZ had a null pointer exception in this line,” but that doesn’t exist today. In most of the companies I’ve been at, we had complex enough infra that the error message never makes it to the frontend so we can see it in the logs. OpenTelemetry and other tools are promising, but again, I’ve never seen good enough infra that puts that all together.

Writing tests is not a pain point worth buying a solution for, in my case.

My 2c. Hopefully it’s helpful and not too cynical.

hn_throwaway_99 · 10 months ago
While I agree with your primary pain point, I would argue that that really isn't specific to tests at all. It sounds like what you're really saying is that when something goes wrong, it's really difficult to determine which component in a complex system is responsible. I mean, from what you've described (and from what I've experienced as well), you would have the same if not harder problem if a user experienced a bug on the front end and then you had to find the root cause.

That is, I don't think a framework focused on front end testing should really be where the solution for your problem is implemented. You say "This is a very, very difficult thing to automate and requires AGI-level intelligence to really build a system that can go read the logs of some random service deep in our service mesh to understand why an e2e test fails." - I would argue what you really need is better log aggregation and system tracing. And I'm not saying this to be snarky (at scale with a bunch of different teams managing different components I've seen that it can be difficult to get everyone on the same aggregation/tracing framework and practices), but that's where I'd focus, as you'll get the dividends not only in testing but in runtime observability as well.
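A minimal sketch of what that correlation can look like with OpenTelemetry (assuming an SDK/exporter is already configured and the backend services include trace IDs in their logs): the test opens a span and injects the traceparent header into its requests, so a failing run can be looked up across services by trace ID.

  # Sketch only: the e2e test propagates W3C trace context so backend logs
  # can be filtered by the failing test's trace_id. Endpoint/payload are made up.
  import requests
  from opentelemetry import trace
  from opentelemetry.propagate import inject

  tracer = trace.get_tracer("e2e-tests")

  def test_checkout():
      with tracer.start_as_current_span("e2e.checkout") as span:
          headers = {}
          inject(headers)                    # adds the traceparent header
          resp = requests.post("https://api.example.com/checkout",
                               json={"sku": "demo"}, headers=headers)
          trace_id = format(span.get_span_context().trace_id, "032x")
          assert resp.status_code == 200, f"search the logs for trace_id={trace_id}"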

Lienetic · 10 months ago
Agreed. Is there a good tool you'd recommend for this?
krainboltgreene · 10 months ago
"OpenTelemetry and other tools are promising, but again, I’ve never seen good enough infra that puts that all together."

It's a two paragraph comment and you somehow missed it.

ec109685 · 10 months ago
There are silly things that trip up e2e tests like a cookie pop up or network failures and whatnot. An AI can plow through these in a way that a purely coded test can’t.

Those types of transient issues aren’t something you would want to fail a test for, given that a human could still get the job done if it happened in the field.

This seems like the most useful part of adding AI to e2e tests. The world is not deterministic, which AI handles well.

Uber takes this approach here: https://www.uber.com/blog/generative-ai-for-high-quality-mob...

tomatohs · 10 months ago
I predict an all-out war over deterministic vs non-deterministic testing, or at least a new buzzword for fuzzy testing. Product people understand that a cookie banner "shouldn't" prevent the test from passing, but an engineer would entirely disagree (see the rest of the convos below).

Engineers struggle with non-deterministic output. It removes the control and "truth" that engineering is founded upon. It's going to take a lot of work (or again, a tongue-in-cheek buzzword like "chaos testing") to get engineers to accept the non-deterministic behavior.

cschiller · 10 months ago
Thanks for your thoughtful response! Agree that digging into the root cause of a failure, especially in complex microservice setups, can be incredibly time-consuming.

Regarding writing robust e2e tests, I think it really depends on the team's experience and the organization’s setup. We’ve found that in some organizations—particularly those with large, fast-moving engineering teams—test creation and maintenance can still be a bottleneck due to the flakiness of their e2e tests.

For example, we’ve seen an e-commerce team with 150+ mobile engineers struggle to keep their functional tests up-to-date while the company was running copy and marketing experiments. Another team in the food delivery space faced issues where unrelated changes in webviews caused their e2e tests to fail, making it impossible to run tests in a production-like system.

Our goal is to help free up that time so that teams can focus on solving bigger challenges, like the debugging problems you’ve mentioned.

fullstackchris · 10 months ago
To be fair, this is NOT the case with native mobile apps. There are some projects like detox that are trying to make e2e tests easier, but the tests themselves can be painful, run fairly slow on emulators, etc.

Maybe someday the tooling for mobile will be as good as headless chrome is for web :)

Agreed though that the followup debugging of a failed test could be hard to automate in some cases.

edelans · 10 months ago
I think we can claim that at Waldo.

Check for yourself: I've just recorded this [1] scripted test on the Wikipedia mobile app, and it yields this [2] Replay. In less than a minute we spin up a fresh virtual device, install your app on it, and execute the 8 steps of the script.

As a result, you get the Replay of the session: video synchronized with the interaction timeline plus device & network logs, so you can debug in full context.

[1]: https://github.com/waldoapp/waldo-programmatic-samples/blob/... [2]: https://share.waldo.com/7a45b5bd364edbf17c578070ce8bde220240...

rafaelmn · 10 months ago
I think either you're overselling the maturity of the ecosystem or I've been unfortunate enough to get stuck with the worst option out there - Cypress. I run into tooling limitations and issues regularly, only to eventually find an open GitHub issue with no solution or some such.
codedokode · 10 months ago
Sorry if it is a stupid idea, but can't you log all messages to a separate file for each test (or attach a test id to the messages)? Then if the test fails, you can see where the error occurred.
msoad · 10 months ago
Where I work there are 1,500 microservices. How do I get the logs from all of those services -- only the ones related to my test's requests -- into a file?

I know there are solutions for this, but in the real world I have not seen it properly implemented.

TechDebtDevin · 10 months ago
I doubt that screenshot methods are the bottleneck considering that's the method Microsoft and Anthropic are using.
tomatohs · 10 months ago
It's absolutely not the bottleneck. OpenAI can process a full resolution screenshot in about 4 seconds.
tomatohs · 10 months ago
You're totally right here, but "debugging failed tests" is a mature problem that assumes you have working tests and people to write them. Most companies don't have the resources to dedicate full engineer time to QA, and if they do, nobody maintains the tests.

Debugging failed tests is a "first world problem".

AdieuToLogic · 10 months ago
> ... "debugging failed tests" is a mature problem that assumes you have working tests and people to write them.

I am reminded of an old s/w engineering law:

  Developers can test their solution or Customers will.
  Either way, the system will be tested.

batikha · 10 months ago
Very cool! I can already see a lot of "this is already solved by playwright/cypress/selenium/deterministic stuff" in the comments.

Over nearly 10 years in startups (big and small), I've been consistently surprised by how much I hear that "testing has been solved", yet I see very little automation in place and PMs/QAs/devs and sometimes CEOs and VPs doing lots of manual QA. And not only on new features (which is a good thing), but also on happy path / core features (arguably a waste of time to test things over and over again).

More than once I worked for a company that was against having a manual QA team, on principle and for more or less valid reasons (we use a typed language so fewer bugs, engineers are empowered, etc.), but ended up hiring external consultants to handle QA after a big quality incident.

The amount of mismatch between theory and practice in this field is impressive.

epolanski · 10 months ago
> yet I see very little automation in place and PMs/QAs/devs and sometimes CEOs and VPs doing lots of manual QA

Because software is a clownish mimicking of engineering that lacks any real solid and widespread engineering practices.

It's cultural.

Crowds boast their engineering degrees, but have little to show beyond leetcode and system design black belts, even though their day-to-day job rarely requires them to architect systems or reimplement a Levenshtein distance, and would benefit a lot more from thoroughly investigating functional and non-functional requirements and encoding and maintaining those through automation.

There's very little engineering in software, people really care about the borderline fun parts and discard the rest.

cschiller · 10 months ago
Thanks for sharing your experience! Completely agree - there's often a huge gap between the perception that testing is "solved" and the reality of manual QA still being necessary, even for core features. We recently had a call with one of the largest US mobile teams and were surprised to learn they're still doing extensive manual testing because some use cases remain uncovered by traditional tools. It's definitely not as "solved" as many might think.
ec109685 · 10 months ago
> In terms of trying the product out: since the service is resource-intensive (we provide hosted virtual/real phone instances), we don't currently have a playground available. However, you can see some examples here https://mobileboost.io/showcases and book a demo of GPT Driver testing your app through our website.

Have you considered an approach like what Anthropic is doing for their computer control where an agent runs on your own computer and controls a device simulator?

ec109685 · 10 months ago
Or even the actual device on the latest Mac OS.
codepathfinder · 10 months ago
I've been a mobile developer for the past 10 years, and my overall belief is that mobile app development is growing more slowly and that companies with mobile teams are investing less in mobile dev, testing, tooling, and education. Do you think the market is still as hot as it once was for a product like yours?
cschiller · 10 months ago
I would say that mobile apps are still the primary format for launching new consumer services, incl. new apps like ChatGPT and many others. However we’ve observed that teams are expected to do more with less—delivering high-quality products while ensuring compliance, often with the same or even smaller team sizes. This is why we focus on minimizing the engineering burden, particularly when it comes to repetitive tasks like regression testing, which can be especially painful to maintain in the mobile ecosystem due to use of third-party integrations (authentication, payments, etc.).
codetrotter · 10 months ago
> mobile apps are still the primary format for launching new consumer services, incl. new apps like ChatGPT and many others

OpenAI launched ChatGPT to the public on the web first, and it took several months, I think, from when I used their public web version until they had an official app for it in the App Store. In the meantime, some third-party apps popped up in the App Store for using ChatGPT. I kept using the web version until the official app showed up. And probably having the mobile app in the App Store has helped them grow to the number of users they have now. But IMO, ChatGPT as a product was not itself “launched” on the App Store, and they seemed to do very well in terms of adoption even when they initially only had the web version. The main point, that mobile apps are still desired, I agree with though.

codepathfinder · 10 months ago
Is it possible to record the user's screen and just generate a test case? I believe that's the most efficient way, IMO.
cschiller · 10 months ago
Yes, great point! We have an 'Assistant' feature where you can perform the flow on the device, and we automatically generate the test case as you navigate the app. As you mentioned, it’s a great starting point to quickly automate the functional flow. Afterwards, you can add more detailed assertions as needed. Technically we do this by using both the UI hierarchy from the app as well as vision models to generate the test prompt.
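For illustration only (not our actual implementation), mapping a recorded tap to a natural-language step from the UI hierarchy could look roughly like this: find the smallest element whose bounds contain the tap and describe it.

  # Hypothetical sketch: turn a recorded tap into a step description using
  # the UI hierarchy. Element fields are assumed, not GPT Driver's data model.
  def element_at(x, y, elements):
      hits = [e for e in elements
              if e["left"] <= x <= e["right"] and e["top"] <= y <= e["bottom"]]
      # Prefer the smallest (most specific) element containing the tap.
      return min(hits, default=None,
                 key=lambda e: (e["right"] - e["left"]) * (e["bottom"] - e["top"]))

  def tap_to_step(x, y, elements):
      el = element_at(x, y, elements)
      if el is None:
          return f"tap at ({x}, {y})"
      label = el.get("text") or el.get("content_desc") or el.get("resource_id")
      return f'tap on the "{label}" element'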
tomatohs · 10 months ago
This comes up all the time. It seems like it would be possible, but imagine the case where you want to verify that a menu shows on hover. Was the hover on the menu intentional?

Another example, imagine an error box shows up. Was that correct or incorrect?

So you need to build a "meta" layer, which includes UI, to start marking up the video and end up in the same state.

Our approach has been to let the AI explore the app and come up with ideas. Less interaction from the user.

codepathfinder · 10 months ago
From my experience working on B2B enterprise apps, users sometimes run into weird scenarios, e.g. a feature with X turned on or off in a specific edition (country).

Maybe the GPT could surf the user activity logs or crash logs and reproduce those scenarios as test cases.

Remember Crashlytics?

rvz · 10 months ago
How does this compare to Robin by mobile.dev, the same guys that built Maestro? [0]

That has around 95% of what GPT Driver does and has the potential to do Web E2E testing.

[0] https://maestro.mobile.dev

cschiller · 10 months ago
One of our customers recently compared GPTD with Maestro’s Robin (formerly App Quality CoPilot). Their mobile platform engineering manager highlighted three key reasons for choosing us: lack of frustration, ease of implementation, and reliability.

To be more concrete, their words were:

- “What you define, you can tweak, touch the detail, and customize, saving you time.”

- “You don’t entirely rely on AI. You stay involved, avoiding misinterpretations by AI.”

- “Flexibility to refine, by using templates and triggering partial tests, features that come from real-world experience. This speeds up the process significantly.”

Our understanding is that because we launched the first version of GPT Driver in April 2023, we’ve built it in an “AI-native” way, while other tools are simply adding AI-based features on top. We worked closely with leading mobile teams, including Duolingo, to ensure we stay as aligned as possible with real-world challenges.

While our focus is on mobile, GPT Driver also works effectively on web platforms.

mmaunder · 10 months ago
Congrats! How has Anthropic's latest release supporting computer use affected your planning/thinking around this?

PS: If you had this for desktop, we'd immediately become a customer.

cschiller · 10 months ago
Thank you! Sonnet 3.5 is indeed a powerful model, and we're actually using it. However, even with the latest version, there are still some limitations affecting our specific use case. For instance, the model struggles to accurately recognize semi-overlaid areas, such as popups that block interactions, and it has trouble consistently detecting when UI elements are in a disabled state.

To address these issues, we enhance the models with our own custom logic and specialized models, which helps us achieve more reliable results.
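As a simplified illustration (not our production logic), one such check could work like this: with bounding boxes from an object-detection pass, treat a target element as blocked if an overlay element such as a popup intersects it.

  # Simplified, hypothetical overlay check on bounding boxes
  # (left, top, right, bottom) in screen pixels.
  def overlaps(a, b):
      return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

  def is_blocked(target_box, overlay_boxes):
      # The target counts as blocked if any overlay (e.g. a popup) intersects it.
      return any(overlaps(target_box, box) for box in overlay_boxes)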

Looking forward, we expect our QA Studio to become even more powerful as we integrate tools like test management, reporting, and infrastructure, especially as models improve. We're excited about the possibilities ahead!

edelans · 10 months ago
Hi cschiller, I think we can help you with those issues at Waldo. I guess you are using Appium under the hood to get the UI hierarchy. At Waldo we developed a competing (proprietary) engine that solves a lot of Appium problems.

We provide the most accurate view hierarchy for mobile apps (including React Native and Flutter apps), and we do it in under 500ms for each view.

I would love to get in touch: e.de-lansalut [at] tricentis.com

Here is an example of what we are able to do: https://share.waldo.com/7a45b5bd364edbf17c578070ce8bde220240...

tomatohs · 10 months ago
We do AI E2E desktop, sent you an email.
drothlis · 10 months ago
I noticed in your demo it generated the prompt "tap on the 'Log in' button located directly below the 'Facebook Password' field".

Does your model consistently get the positions right? (above, below, etc). Every time I play with ChatGPT, even GPT-4o, it can't do basic spatial reasoning. For example, here's a typical output (emphasis mine):

> If YouTube is to the upper *left* of ESPN, press "Up" once, then *"Right"* to move the focus.

(I test TV apps where the input is a remote control, rather than tapping directly on the UI elements.)