Posted by u/richardzhang a year ago
Launch HN: Integuru (YC W24) – Reverse-engineer internal APIs using LLMs (github.com/Integuru-AI/In...)
Hey HN! We’re Richard and Alan from Integuru (https://integuru.ai). We build low-latency integrations with platforms lacking official APIs. We take custom requests and manage creation, hosting, and authentication. To automate our work, we built an open-source AI agent that reverse-engineers internal APIs to generate integration code. Here’s a demo: https://www.youtube.com/watch?v=7OJ4w5BCpQ0.

Many products need integrations with third-party services, but platforms often lack official APIs. Examples include logistics software, financial services, electronic health records (EHRs), and government websites. To build low-latency integrations, developers must reverse-engineer internal APIs, which can get complicated. Integuru makes building these integrations much easier.

We started as recent college grads trying to make US income tax data accessible. We contacted banks, brokerages, payroll software, and more to request access to their APIs, but none took us seriously. We resorted to building integrations with these systems to extract documents like W-2s and 1099s. We initially used browser automation but ran into two big problems: our integrations (1) weren’t reliable due to changing UIs and (2) had slow execution speeds due to spinning up browsers and waiting for pages to load. We experimented with AI-based automation maintenance, but it didn’t solve slow speeds. So, we concluded that browser automation is useful when execution speed isn’t essential, but reverse engineering is often the only path for performant integrations.

Through reverse-engineering dozens of platforms, we noticed many internal API design patterns that LLMs could decipher. We built an agent to automate the creation of integrations. Today, Integuru can analyze a platform’s internal API designs and build an integration in minutes.

The agent mimics what a human does when reverse-engineering. Say you want to download utility bills from a utility website. You’d first use Integuru to generate a file of network requests and a file of cookies. You pair these two files with a prompt about your desired action—in this case, to download utility bills.
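To make that concrete, here is a rough sketch of what handing those inputs to the agent could look like. The file names and the run_agent entry point are hypothetical placeholders, not Integuru's actual interface:

    import json

    # Hypothetical input files: a capture of the browser's network requests and
    # the session cookies, recorded while performing the action once by hand.
    with open("network_requests.har") as f:
        network_requests = json.load(f)

    with open("cookies.json") as f:
        cookies = json.load(f)

    # Plain-English description of the desired action.
    prompt = "Download my utility bills as PDFs"

    # Hypothetical entry point: from these three inputs the agent builds a
    # dependency graph of requests and emits runnable integration code.
    # run_agent(network_requests, cookies, prompt)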

Integuru identifies the final request that downloads utility bills. The request URL might look like this: https://www.example.com/utility-bills?accountId=123&userId=4.... It then identifies the parts of that request that depend on other requests. The example URL contains dynamic parts (accountId and userId) that usually come from the responses of earlier requests. Integuru then finds the requests whose responses contain these values and adds them to the dependency graph. The newly found request URLs might look like https://www.example.com/getAccountId, https://www.example.com/getUserId, and so on.

This process repeats until the most recently found request doesn’t depend on any other request. Integuru then traverses up the graph, starting from the requests without dependencies while converting each request into a runnable function.
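As a minimal sketch of that loop, assuming the captured requests are reduced to dicts with a url and a response_body (a simplified illustration, not Integuru's actual implementation):

    from urllib.parse import urlparse, parse_qs

    def dynamic_parts(url):
        """Query-string values that likely came from an earlier response
        (e.g. accountId=123 and userId=4 in the example above)."""
        return [v for values in parse_qs(urlparse(url).query).values() for v in values]

    def build_dependency_graph(final_request, har_entries):
        """Walk backwards from the final request, linking each dynamic value
        to an earlier request whose response body contains it."""
        graph = {}  # url -> list of urls it depends on
        queue, seen = [final_request], set()
        while queue:
            req = queue.pop()
            if req["url"] in seen:
                continue
            seen.add(req["url"])
            graph[req["url"]] = []
            for value in dynamic_parts(req["url"]):
                for earlier in har_entries:
                    if earlier["url"] != req["url"] and value in earlier.get("response_body", ""):
                        graph[req["url"]].append(earlier["url"])
                        queue.append(earlier)
                        break
        return graph

    def execution_order(graph):
        """Requests with no dependencies run first; later requests can then
        fill in their dynamic parts from their dependencies' responses."""
        order, resolved = [], set()
        while len(order) < len(graph):
            progressed = False
            for url, deps in graph.items():
                if url not in resolved and all(d in resolved for d in deps):
                    order.append(url)
                    resolved.add(url)
                    progressed = True
            if not progressed:
                raise ValueError("circular dependency between requests")
        return order

Each URL in that order then becomes a small function that replays the request and hands the extracted values to the requests that depend on it.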

Integuru supports a surprising number of use cases like downloading documents, sending money, creating virtual cards… People already use the agent to build low-latency APIs for platforms like Robinhood, transportation management systems (TMS), and more. The agent still has limitations due to current LLM capabilities and long-tail edge cases, but we give each new platform to the agent for a first attempt. When the agent does struggle, we find the generated graphs and code still helpful as references for us to complete the work manually.

The agent and all integrations are open-source under AGPL-3.0. We charge for services to (1) build custom integrations when the agent struggles or for your convenience, (2) handle hosting, and (3) manage authentication using authentication cookies from authenticated browser sessions. We charge per API call with an implementation fee for new platforms.

We’re currently working to increase the agent’s coverage and improve code generation. We will continue to iterate and want to one day allow developers to integrate with all platforms instantly.

Integuru is still an early effort. We’re passionate about automating integrations and would love your feedback!

dewey · a year ago
If your landing page doesn't look like this, you've launched too late: https://integuru.ai
bryant · a year ago
Page source is amazing. I can't remember the last time I've seen a serious YC company launch page with absolutely zero JavaScript. Even the CSS is just a single selector.

I'm a fan.

ocean_moist · a year ago
I wish I could do this… best part of building for devs is being able to provide simple, good UX with minimal UI.
geoctl · a year ago
Still looks more interesting than that Next.js landing page template used by every startup these days.
silvanocerza · a year ago
Their website is this one though. :) https://www.taiki.ai/
swyx · a year ago
@richardzhang what is the relationship between taiki and integuru? is this a pivot?

ramenlover · a year ago
I don't know what my PM would say but to me this is "excellent and appealing design"
btbuildem · a year ago
This is what happens when your daily grind is cutting through all kinds of atrocious and excessive "web design" in order to get at information.
qsort · a year ago
Literally peak graphics.
shmatt · a year ago
I just noticed over the weekend that the new Claude agreed to reverse engineer a GraphQL server with introspection turned off, something I'm pretty sure it would have refused for ethical reasons before the new version

It kept writing scripts, I would paste the output, and it would keep going, until it was able to create its own working discount code on an actual retail website

The only issues with these kinds of things are breaking robots.txt rules and the possibility that things will break without notice, and often do

The use of unofficial APIs can be legally questionable [1]

[1] https://law.stackexchange.com/questions/93831/legality-of-us...

From the authors of what is essentially a hacking tool, I would expect at least some legal boilerplate language about not being liable

richardzhang · a year ago
We are working on a way to auto-patch internal APIs that change by having another agent trigger the requests.

Regarding the legality aspects — really appreciate you mentioning this — we’ve put a lot of thought into these issues, and it’s something we’re continually working on and refining.

Ultimately, our goal is to allow each developer to make their own informed decision regarding the policies of the platforms that they're working with. There are situations where unofficial APIs can be both legal and beneficial, such as when they're used to access data that the end user rightfully owns and controls.

For our hosted service, we aim to balance serving legitimate data needs with safeguarding against bad actors, and we're fully aware this can be a tricky line to navigate. In practice, this means prioritizing use cases where the end user truly owns the data. We know this is not always black-and-white, and we will come up with the right legal language as you recommended. It also helps our case that many companies build unofficial APIs for their own purposes, so there are legal precedents we can refer to.

shmatt · a year ago
I have to disagree, it is definitely not legal in the US to use unauthorized access points to access authorized data. That's like saying you're allowed to get into your apartment by breaking your neighbor's door and climbing between the windows

In the US this is pretty simply covered by the Computer Misuse Act and the Computer Fraud and Abuse Act, both federal laws

I'm not claiming you're liable, just surprised no lawyer pointed this out at YC

rozap · a year ago
You're right in principle, but I think in practice this is sort of a non-issue. Most sites now employ (for better or worse) anti-botting tools which have some sort of JavaScript challenge that will generate a unique token. Given that this tool is only capable of replacing the dynamic parts of the request graph with tokens found in the output from the previous steps, I don't see how it would get around these sorts of challenges. So effectively, if you're using methods to prevent "unauthorized" use of your APIs, I think this sort of tool will be defeated extremely easily. The reverse engineering/web scraping world has unfortunately evolved to be extremely adversarial, and this sort of tool does not have the sneakiness required to get around even the simplest anti-botting measures.

Until LLMs become smart enough to emulate a full JS stack, I think we're safe :)

_hl_ · a year ago
This is awesome, but I'm not sure what the long-term use case is for the intersection of low-latency and non-production-stable integrations. I'm saying this as someone with way more experience than I'd like in using reverse-engineered APIs as part of production products... You inevitably run into breakages, sometimes even actively hostile platforms, which will degrade user experience as users wait for your one-day window to fix their product again.

Though I suppose if you can auto-fix and retry issues within ~1 minute or so it could work?

lo0dot0 · a year ago
NewPipe breaks regularly. It's almost like YouTube changes the API on purpose to hurt 3rd party clients that don't show ads.
miki123211 · a year ago
Either that, or they just straight up don't care.

I think it's pretty likely that they just don't look at or test NewPipe when they change their APIs. If the change doesn't break any official clients, it goes through.

With how large YouTube is, I imagine API changes are not infrequent.

alanloo · a year ago
This is a very important question, thank you for bringing it up! Currently, fixing a broken integration requires human intervention, since someone needs to trigger the correct network request. We are planning to add another agent that triggers the network requests by interacting with the UI and then passes them to Integuru.
loktarogar · a year ago
In my experience reverse engineering is often the easy bit, or at least easy compared to what follows: maintenance. Knowing both when and how it fails (e.g. when the API stops returning any results but is otherwise still valid). Knowing when the response has changed in a way that is subtle to detect, like a changed format for a single field, which may still parse correctly but is now interpreted incorrectly.

How do you keep up with the maintenance?

alanloo · a year ago
We feel your pain with maintenance. We have plans to handle this by using LLMs to detect response anomalies.

From our experience, reverse engineering is still less prone to breakage compared to traditional browser automation. But we definitely want to make integrations even more reliable with maintenance features.

cphoover · a year ago
Wouldn't something like snapshot testing from a scheduled probe be more effective and reliable than using an LLM?

Every X hours test the endpoints and validate the types and field names are consistent... If they change then trigger some kind of alerting mechanism to the user.
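For illustration, a minimal sketch of such a probe, assuming the endpoint returns a JSON list of bill records; the endpoint, the expected field types, and the alert hook are all placeholders:

    import requests

    # Placeholders: a reverse-engineered endpoint and the field types observed
    # when the integration was first generated.
    ENDPOINT = "https://www.example.com/utility-bills?accountId=123&userId=4"
    EXPECTED_FIELDS = {"billId": str, "amountDue": (int, float), "dueDate": str}

    def alert(message):
        # Placeholder: wire this up to email, Slack, or a pager instead.
        print(message)

    def check_schema():
        """Fetch one record and flag missing fields or changed types."""
        record = requests.get(ENDPOINT, timeout=10).json()[0]
        drifted = [
            field for field, expected in EXPECTED_FIELDS.items()
            if field not in record or not isinstance(record[field], expected)
        ]
        if drifted:
            alert(f"Response schema drifted for fields: {drifted}")

    if __name__ == "__main__":
        check_schema()  # schedule every X hours via cron or a task runner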

sureglymop · a year ago
I would say: it depends. I must've wasted days of my life trying to reverse engineer Android apps with pinned certificates. It's crazy how hard it has become to just inspect the traffic on my own device that I bought and own.
sunbum · a year ago
Just set up httptoolkit [0], it just works.

[0] - https://httptoolkit.com/

loktarogar · a year ago
Yeah I feel you on that. I wonder if this can deal with those difficult cases? This would be killer if so
toomuchtodo · a year ago
Brilliant. Is the next part to monitor and autocorrect breakage when the API in scope changes unexpectedly underneath the system? This is a pain point of workflow automation systems that integrate with APIs in my experience, typically requiring a human to triage an alert (due to an unexpected external API change), pause worker queues, ship a fix, and then resume queue processing.

Love the landing page, please keep it.

alanloo · a year ago
Thanks and yes that's part of the roadmap!

Currently you need to trigger the UI actions manually to generate the network requests used by Integuru. But we're planning to automate the whole thing by having another agent auto-trigger the UI actions to generate the network requests first, and then have Integuru reverse-engineer the requests.

mdaniel · a year ago
Ah, by clicking on the Taiki logo to see what the ... parent company? ... builds, I now understand how this came about. And I'll be honest, as someone who hates all that tax paperwork gathering with all my heart, this launch may have gotten you a new customer for Taiki :-)

Also, just as a friendly suggestion, given what both(?) products seemingly do, this section could use some love other than "we use TLS": https://www.taiki.ai/faq#:~:text=How%20does%20Taiki%20handle... since TLS doesn't care about storing credentials in plain text in a DB, for example

---

p.s. the GitHub organization in your .gitmodules is still pointing to Unofficial-APIs which I actually think you should have kept o/

alanloo · a year ago
Thank you for your suggestions, and really glad to hear you're excited about Taiki! We will update the FAQ with your suggestions — honestly, this part of the website is a bit outdated, and we will make sure to change it.

Regarding the Unofficial-APIs name, it was a really tough decision. We liked the name a lot but just thought it was a bit long. A real pleasant surprise that you found it :)

imranq · a year ago
Wow this is great! I think this is kind of the future of automation and "computer use" once LLMs become powerful enough.

Every task on the web can be reduced down to a series of backend calls, and the key is extracting out the minimal graph that can replicate that task.

richardzhang · a year ago
Thank you!
blakeburch · a year ago
Really digging this idea.

I've spent plenty of time trying to dig into the network tab to automate requests to a website without an API. Cool to see the process streamlined with LLMs. Wishing you all the best of luck!

richardzhang · a year ago
Thank you!