> The workloads commonly executed on Async are those that do not require blocking an active user’s experience with a product and can be performed anywhere from a few seconds to several hours after a user’s action.
*) it's estimated up to 20% of the US's defence budget is spent protecting oil supplies for a start, which effectively acts as a subsidy of around 70c a gallon.
Undoing our mistake is always an option.
Perhaps at some point you can screenshot the low-code tool and paste the image into GPT for it to interpret, but will they build for that use case? The former is possible today.
They are hungry and willing, but often overlooked. I have seen many climb out and teach themselves code, build tools for IT and revolutionize the way teams and orgs work. It's a marvel to see someone with drive do what they desire.
Fast forward to today. I see IT forcing all members to use a low-code tool. The passion drains from their eyes. I can see the fear of the mounting weight of becoming unemployable. They've shared their experiences with me. The directions they want to go have nothing to do with low-code, and the roles and orgs they're interviewing with aren't interested in people who build with it. The question "what have you been working on?" is like a death knell. I'm pretty sure a lot of them don't see a future beyond helpdesk because of these tools.
My point is, think carefully about who is using this product. You can kill careers with this stuff. I think it's great for business teams who want to "do x in x app when y happens in y app."
Then you sit there like an idiot dragging blocks around when you could have just asked GPT to bust it out in code in seconds.
They're so bad for source control and documentation, too.
To build solutions I have to use the Amazon States Language, which has a learning curve and is, being as charitable as I can, a royal pain in the ass. Ultimately I end up with a JSON file that is a "giant, flexible config file" for their runtime.
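To make that concrete, here is a minimal sketch of what such an ASL "config file" looks like. The state names, the Lambda ARN, and the retry policy are purely illustrative placeholders, not taken from any real deployment:

```json
{
  "Comment": "Illustrative sketch only; names and ARN are placeholders",
  "StartAt": "ProcessOrder",
  "States": {
    "ProcessOrder": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ProcessOrder",
      "Retry": [
        { "ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 3 }
      ],
      "Next": "Done"
    },
    "Done": { "Type": "Succeed" }
  }
}
```

Even a two-state workflow like this shows the trade-off: all control flow (transitions, retries, error handling) lives in JSON fields rather than in code you could run anywhere else.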
On the plus side, solutions using it are very nearly zero maintenance. No runtime updates, no package updates, no manual scaling, etc.
Another plus, it's zero cost when not in use. No VMs I have to pay for hourly or monthly.
The downsides (for me) are obvious: it's difficult to learn; it's very restrictive, and solutions often end up needing some aspect of more flexible services like Lambda or Fargate (containers), which ends up adding cost and maintenance; and it's proprietary, so there is nothing I can reuse elsewhere (no other company supports ASL as far as I know).
Overall, though, I love it. Why? I despise having to choose between unpatched systems and the drudgery of constant patching. With Step Functions I don't have to choose.
The problem is the people around me thought it was too difficult and couldn't see the long term. So we implemented a low-code solution, and now everything is in there and it's a mess/nightmare. I hate my work now, and everything we build is tightly coupled to this spaghetti platform that will inevitably raise its prices on us, and we will have no recourse.
Job hunting has been tough too, because very few places have done this, so they ask "what have you been working on?" and I'm basically setting record times for ending interviews if I tell the truth.
This sounds like a huge waste of money for something that should just be completely on-device or self-hosted if you don't trust cloud-based AI models like ChatGPT Enterprise and want it all private and low-cost.
But either way, Meta already seems to be at the finish line in this race, and there is more to AI than the LLM hype.