For example, imagine a humanoid robot whose job is to bring in packages from your front door. It needs vision to locate and pick up each package. If someone leaves a package with an image taped to it containing a prompt injection, the robot could be tricked into gathering valuables from inside the house and throwing them out the window.
Good post. Securing these systems against prompt injections is something we urgently need to solve.
I like that there are new startups in the space, though. Things were getting pretty stale and uninspired.
Thank you for building Selenium.
Selenium offered a headless mode and integrated with third-party providers like BrowserStack, which ran acceptance tests in parallel in the cloud. What browser-use.com is doing seems like a modern-day version of that, with many more features and more adaptability.
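For anyone who hasn't touched it in a while, headless mode is just a driver option in Selenium. A minimal sketch using the Python bindings with Chrome (the exact flag name has varied across Chrome versions, so treat this as illustrative):

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# Configure Chrome to run without a visible window.
options = Options()
options.add_argument("--headless=new")  # older Chrome versions use plain "--headless"

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com")
    print(driver.title)  # the page still loads and renders, just off-screen
finally:
    driver.quit()
```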
This post is like writing about Grafana without mentioning Nagios.
That already exists; it's called CommonCrawl.
It’s smaller than Google’s index, and Google does not represent the entirety of the web either.
For LLM training purposes this may or may not matter, since Common Crawl does hold a large share of the web. It’s hard to prove scientifically whether the additional data would train a better model, because as far as I know no one has a copy of the entire currently accessible web (let alone dead links): not Google, not Common Crawl, not Facebook, not the Internet Archive. Even with decent Google-fu, I’m often surprised how many pages I know exist, including ones by famous authors, that just don’t appear in Google’s index, Common Crawl, or IA.
So every message generated by the first LLM is then passed to a second series of LLM requests along with a distilled version of the legislation, e.g. "Does this message imply likelihood of credit approval? (True/False)". We then score the original LLM response against that rubric.
All of the compliance checks are highly standardized and require very little reasoning, since they can mostly be distilled into a series of ~20 booleans.
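A minimal sketch of what that second pass could look like. Everything here is hypothetical: `call_llm` stands in for whatever completion API is actually used, and only the first check question comes from the description above; the second is an invented example of the same shape.

```python
from typing import Callable

# Boolean compliance checks distilled from the legislation (~20 in practice).
# The first question is quoted from the comment above; the second is a
# hypothetical placeholder showing the pattern.
COMPLIANCE_CHECKS = [
    "Does this message imply likelihood of credit approval? (True/False)",
    "Does this message promise specific loan terms? (True/False)",
]

def score_message(
    message: str,
    legislation_summary: str,
    call_llm: Callable[[str], str],
) -> dict[str, bool]:
    """Run each boolean check against a candidate message and return
    a rubric mapping each check to whether the model answered True."""
    results = {}
    for check in COMPLIANCE_CHECKS:
        prompt = (
            f"Legislation summary:\n{legislation_summary}\n\n"
            f"Candidate message:\n{message}\n\n"
            f"{check} Answer with exactly one word: True or False."
        )
        answer = call_llm(prompt).strip().lower()
        # Fail closed: anything other than a clear "true" counts as False.
        results[check] = answer.startswith("true")
    return results
```

Caller supplies the LLM function, so the same rubric logic works regardless of provider; parsing a single-word True/False answer also keeps the scoring deterministic.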