And a question to the knowledgeable: does a simple/stupid question cost more in terms of resources than a complex problem, in terms of power consumption?
Until we give them access to all Jira tickets instead of just one so they know what's missing.
I was thinking of the first solar civilization, which lives entirely in space: near a star, but not on a planet, with no gravitational pull anywhere. They build tubes 10 km long; a dartboard is mounted at one end, and the players stand at the other. They shoot darts at the board, and each shot takes 5 hours to reach the target. That's their national sport.
Problem is, I have never played darts and I don't know anyone who plays it, so I will ask the LLM to fill in the blanks on how a story based on that game could be constructed. Then I will add my own story on top of that: I will fix anything that doesn't fit, add some stuff, remove some other stuff, and so on.
For me it saves time: instead of asking people about something, hearing them talk about it, or watching them do it, I do data mining on words. Maybe it's more shallow than experiencing it myself or asking people who know about it first-hand, but the time it takes to get information that's good enough collapses down to 5 minutes.
Depends on how you use it, it can enhance human capabilities, or indeed, mute them.
I would argue it needs editing, as it violates the HN guideline:
> use the original title, unless it is misleading or linkbait; don't editorialize.
GPT-4o pricing for comparison:
Input: $2.50 / 1M tokens
Cached input: $1.25 / 1M tokens
Output: $10.00 / 1M tokens
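For a rough sense of what those per-million-token rates mean in practice, here's a minimal sketch that converts token counts into dollars. The helper name and shape are my own invention; the rates are the GPT-4o list prices quoted above.

```typescript
// GPT-4o list prices in USD per 1M tokens, as quoted above.
const PRICE_PER_M = { input: 2.5, cachedInput: 1.25, output: 10.0 };

// Hypothetical helper: USD cost of a single request.
function gpt4oCostUSD(
  inputTokens: number,
  outputTokens: number,
  cachedInputTokens = 0,
): number {
  return (
    inputTokens * PRICE_PER_M.input +
    cachedInputTokens * PRICE_PER_M.cachedInput +
    outputTokens * PRICE_PER_M.output
  ) / 1_000_000;
}
```

So a typical call with 2,000 input tokens and 500 output tokens comes out to $0.01, which makes it easier to compare against whatever GPT-4.5 ends up costing per request.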
It sounds like it's so expensive, and the difference in usefulness apparently so small, that they're not even going to keep serving it in the API for long:
> GPT‑4.5 is a very large and compute-intensive model, making it more expensive than and not a replacement for GPT‑4o. Because of this, we’re evaluating whether to continue serving it in the API long-term as we balance supporting current capabilities with building future models. We look forward to learning more about its strengths, capabilities, and potential applications in real-world settings. If GPT‑4.5 delivers unique value for your use case, your feedback will play an important role in guiding our decision.
I'm still gonna give it a go, though.
testWorkflow
  .step(llm)
  .then(decider)
  .then(agentOne)
  .then(workflow)
  .after(decider)
  .then(agentTwo)
  .then(workflow)
  .commit();
At first glance, this looks like a very awkward way to represent the graph from the picture. And this is just a simple "workflow" (the structure of the graph does not depend on the results of the execution), not an agent.

I built my own TypeScript AI platform https://typedai.dev with an extensive feature list, where I've kept iterating on what I find the most ergonomic way to develop, using standard constructs as much as possible. I've written enough Java streams, RxJS chains, and JavaScript callbacks and Promise chains to know what kind of code I like to read and debug.
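To make the contrast concrete, here's roughly how the same branching graph reads as plain control flow. All the step implementations below are trivial stand-ins of my own (not any real framework's API); only the shape of the graph is taken from the chained example above.

```typescript
// Stub steps standing in for the graph's nodes; each is just (string) => string here.
type Step = (input: string) => string;

const llm: Step = (input) => `llm(${input})`;
const agentOne: Step = (input) => `agentOne(${input})`;
const agentTwo: Step = (input) => `agentTwo(${input})`;
const workflow: Step = (input) => `workflow(${input})`;

// The decider picks a branch; a trivial stand-in rule for illustration.
const decider = (input: string): "one" | "two" =>
  input.length % 2 === 0 ? "one" : "two";

// The whole graph as ordinary control flow: one step, one branch, shared tail.
function runGraph(input: string): string {
  const draft = llm(input);
  const branch = decider(draft) === "one" ? agentOne : agentTwo;
  return workflow(branch(draft));
}
```

The ternary on the decider's result replaces the `.then(...).after(decider).then(...)` chaining, and both branches funnel into the same trailing workflow step, which is exactly the structure the chained version spells out implicitly.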
I was having a peek at XState, but after I came across https://docs.dbos.dev/ here recently, I'm pretty sure that's the path I'll go down for durable execution, to keep building everything with a simple programming model.
I mean, this has been enough to get my feet wet and have some fun exploring agent-based development, no doubt, and I appreciate it, but I'm having a hard time squaring my experience with
> generous free-of-charge quotas
as they say. It's not that generous if it stops working after 5 minutes, is it? (This morning, literally a single sentence I typed into Crush resulted in some back and forth; I guess it called the API a few times, and it just hit the rate limit. Fine, there were probably a lot of requests going on, but I gave it a single small job to do and it couldn't finish it.)
Meanwhile I seem to be able to use the Gemini web app endlessly and haven't hit any limits yet.