How do you see the product evolving as agents become better and better?
Any alternatives besides racking your own servers?
*EDIT* Did a little ChatGPT research, and it recommended a tiny t4g.micro instance with an EBS volume of type Cold HDD (sc1) [1]. Not gonna be fast, but for offsite backup it will probably do the trick.
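For anyone wanting to try that setup, a rough sketch with the AWS CLI (the size, availability zone, and all IDs below are made up for illustration; note that sc1 volumes have a 125 GiB minimum):

```shell
# Create a Cold HDD (sc1) EBS volume for cheap, slow offsite storage.
aws ec2 create-volume \
    --volume-type sc1 \
    --size 500 \
    --availability-zone us-east-1a

# Attach it to an existing t4g.micro instance (IDs are placeholders).
aws ec2 attach-volume \
    --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 \
    --device /dev/sdf
```

This requires AWS credentials and real resource IDs, so treat it as a starting point rather than a copy-paste recipe.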
[1] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-inst...
I'm not joking; I didn't ask this as a way to namedrop my experience and credentials (common 'round this neck o' the woods). I honestly don't know what the much more competent organizations are doing, and I would really like to find out.
I've been advocating across several projects in recent years for SQLite3 as an archive/export/interchange format for data. Need to archive 2019 data from the database? Dump it into a SQLite db with roughly the same schema. Need to pass around multiple CSVs' worth of data dumps? Use a single SQLite file instead.
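To illustrate the idea, a minimal sketch in Python using the stdlib `sqlite3` module. The table name, columns, and rows are invented for the example; the point is that the schema and types travel with the data, unlike a CSV:

```python
import sqlite3

# Hypothetical 2019 archive: a few "orders" rows pulled from a production DB.
rows = [
    (1, "2019-01-15", 49.99),
    (2, "2019-06-02", 120.00),
]

con = sqlite3.connect("orders_2019.sqlite")
con.execute(
    "CREATE TABLE IF NOT EXISTS orders "
    "(id INTEGER PRIMARY KEY, created_at TEXT, total REAL)"
)
con.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?, ?)", rows)
con.commit()

# The resulting single file is queryable anywhere SQLite runs.
print(con.execute("SELECT COUNT(*) FROM orders").fetchone()[0])
con.close()
```

The recipient opens the file with any SQLite client and gets indexes, types, and multiple tables in one artifact, instead of a pile of loosely related CSVs.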
As a secondary thought, I wonder whether it's possible to actively use a SQLite interface against a database file on S3, assuming a single server/instance holds the only active connection.
You could achieve this today using one of the many adapters that turn S3 into a file system, without needing to wait for any SQLite buy-in.
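A rough sketch of what that could look like with s3fs-fuse (bucket name and paths are made up; this assumes s3fs-fuse is installed and credentials are in `~/.passwd-s3fs`):

```shell
# Mount the bucket as a local directory via FUSE.
mkdir -p /mnt/s3
s3fs my-backup-bucket /mnt/s3 -o passwd_file=${HOME}/.passwd-s3fs

# With a single writer, SQLite can then operate on the mounted file directly.
sqlite3 /mnt/s3/archive.db "SELECT count(*) FROM sqlite_master;"
```

The single-connection caveat upthread matters here: SQLite's file locking is not reliable over FUSE-backed S3 mounts, so concurrent writers can corrupt the database.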
First, as models get better, our agent will get better at navigating a website and generating accurate automation scripts, letting us more confidently perform multi-step generations and one-shot automations.
We expect browser agents will improve as well, which I think is more along the lines of what you're asking. At scale, we still think scripts will win on cost, performance, and debuggability - but there are places where browser agents could potentially fit as an add-on to deterministic workflows (e.g., handling inconsistent elements like pop-ups or modals). That said, if we do end up introducing a browser agent in the execution runtime, we want to be very opinionated about how it can be used, since our product is primarily focused on deterministic scripting.
This actually makes a ton of sense to me in lots of LLM contexts (e.g., we are starting to prefer having LLMs write one-off scripts to make API calls rather than pointing them at problems and having them try directly).
Thanks!