(This is satire, for those of you who need to have it spelled out :-)
When will people acknowledge that LLMs are stochastic text generators?
This whole blog reads like trying to fit a square peg into a round hole. And frankly, most of the comments in this thread are jumping right on the bandwagon, “what water?”-style [1].
By all means use LLMs for what they can be useful for, but god dammit, when they are not useful, please acknowledge this and stop trying to make everything a nail for the LLM hammer.
LLMs are. not. intelligent. They don’t have a work ethic that says “oh, maybe skipping tests is bad”. If they generate output that skips tests, it’s because a large enough share of the training data contained that kind of text.
[1] fish joke
Does anyone here know of an API in the EU for getting payments directly to your bank account? I have started a discussion on this on OPH [1]; I welcome any information on direct banking APIs in Europe in that discussion.
[1] https://github.com/abishekmuthian/open-payment-host/discussi...
I don’t have any experience integrating with their API myself, but Lunar is a relatively new Danish (so EU) 100% digital bank. See https://www.lunar.app/en/personal/what-is-lunar
They have an Open API: https://developer.openbanking.prod.lunar.app/home
Edit: “new” in finance terms; they started in 2015.
1. LLMs are a new technology and it's hard to put the genie back in the bottle with that. It's difficult to imagine a future where they don't continue to exist in some form, with all the timesaving benefits and social issues that come with them.
2. Almost three years in, companies investing in LLMs have not yet discovered a business model that justifies the massive expenditure of training and hosting them, the majority of consumer usage is at the free tier, the industry is seeing the first signs of pulling back investments, and model capabilities are plateauing at a level where most people agree that the output is trite and unpleasant to consume.
There are many technologies that have seemed inevitable and seen retreats under the lack of commensurate business return (the supersonic jetliner), and several that seemed poised to displace both old tech and labor but have settled into specific use cases (the microwave oven). Given the lack of a sufficiently profitable business model, it feels as likely as not that LLMs settle somewhere a little less remarkable, and hopefully less annoying, than today's almost universally disliked attempts to cram it everywhere.
I agree with you, but I’m curious: do you have a link to one or two concrete examples of companies pulling back investments, or rolling back an AI push?
(Yes, it’s just to fuel my confirmation bias, but it still feels nice :-) )
Downsides are that you need to use the Proton client or the web UI.
The Proton suite now also features other useful (and secure) apps like Drive, a password manager, etc. I’m not using those myself, though.
I'm trying to share as much technical detail across this thread as I can; here it is for your two examples:
System upgrades:
Keep in mind that, as per the ISO specification, system upgrades should be applied, but in a controlled manner. This lends itself perfectly to the following process, which is manually triggered.
Since we take steps to make applications stateless, and Ansible scripts are immutable:
We spin up a new machine with the latest packages, and once it is ready it joins the Cloudflare load balancer. The old machines are drained and deprovisioned. We have a playbook that iterates through our machines and does this one machine at a time before proceeding. Since we have redundancy on components, this creates no downtime. Redundancy for the web application is easy to achieve using the load balancer in Cloudflare. For the Postgres database, it does require that we switch the read-only replica to become the main database.
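The serial, one-machine-at-a-time loop described above (what Ansible calls `serial: 1`) can be sketched in plain Python. This is a simulation, not the commenter's actual playbook: the `LoadBalancer` class, machine names, and `upgrade_packages` stand-in are all made up for illustration.

```python
class LoadBalancer:
    """Stand-in for the Cloudflare load balancer pool (hypothetical)."""
    def __init__(self, machines):
        self.active = set(machines)

    def drain(self, machine):
        self.active.discard(machine)

    def join(self, machine):
        self.active.add(machine)


def upgrade_packages(machine):
    # In reality this would be an Ansible task (e.g. apt upgrade);
    # here it is a no-op that just records what happened.
    return f"{machine}: packages upgraded"


def rolling_upgrade(machines, lb):
    """Upgrade one machine at a time so the pool is never empty."""
    log = []
    for machine in machines:
        lb.drain(machine)                      # take it out of rotation
        assert lb.active, "never drain the last healthy machine"
        log.append(upgrade_packages(machine))  # apply latest packages
        lb.join(machine)                       # put it back in rotation
    return log


machines = ["web-1", "web-2", "web-3"]
lb = LoadBalancer(machines)
print(rolling_upgrade(machines, lb))
```

The key property, made explicit by the `assert`, is that with two or more machines behind the load balancer, draining one at a time leaves the service up throughout.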
DB failover:
The database is only written to and read from by our web applications. We have a second VM on a different cloud that runs a streaming replica of the Postgres database. It is a hot standby that can be promoted. You can use something like PgBouncer or HAProxy to route traffic from your apps, but our web framework allows changing the database at runtime.
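The failover decision itself is simple enough to sketch. This is a hedged illustration, not the commenter's setup: the host names and the `is_reachable` probe are assumptions, and in production this role would be played by PgBouncer/HAProxy health checks or the framework's runtime database switch.

```python
def choose_database(primary, standby, is_reachable):
    """Return (host to write to, whether the standby was promoted)."""
    if is_reachable(primary):
        return primary, False  # normal operation, no promotion
    # Primary is down: promote the streaming-replication hot standby.
    # On real Postgres (12+) promotion is `SELECT pg_promote();`
    # run on the standby, or `pg_ctl promote`.
    return standby, True


# Simulate a healthy primary, then an outage.
host, promoted = choose_database("pg-main", "pg-standby",
                                 lambda h: h == "pg-main")
print(host, promoted)   # pg-main False

host, promoted = choose_database("pg-main", "pg-standby",
                                 lambda h: False)
print(host, promoted)   # pg-standby True
```

One design note: promotion is one-way (the old primary must be rebuilt as a replica afterwards), which is why it makes sense to trigger it deliberately rather than automatically on the first failed probe.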
> Business
Before migration (AWS): about 0.1 FTE on infra. Most of the time went into deployment pipelines and occasional fine-tuning (the usual AWS dance).

After migration (Hetzner + OVHCloud + DIY stack): after stabilizing, it is still 0.1 FTE (though I was at 0.5 FTE for 3-4 months), but now it rests with one person. We didn’t hire a dedicated ops person.

On scaling, if we grew 5-10×:

* For stateless services, we’re confident we’d stay DIY: Hetzner + OVHCloud + automation scales beautifully.

* For stateful services, especially the Postgres database, I think we'd investigate serving clients out of their own DBs in a multi-tenant setup, and if that proved too cumbersome (we would need tenant-specific disaster recovery playbooks), we'd go back to a managed solution quickly.
I can't speak to the cloud-vs-VPS FTE toll in the big boys' league ($ millions in monthly consumption) or in the tiny league, but in our league it turns out the FTE requirement is the same.
Anyone who wants to see my scripts, hit me up at jk@datapult.dk. I'm not sure it'd be a great security posture to hand them out on a public forum.
Have you considered running your own HA load balancer? If yes, what tech options did you consider?
> No one would implement a bunch of utility functions that we already have in a different module.
> No one would change a global configuration when there’s a mechanism to do it on a module level.
> No one would write a class when we’re using a functional approach everywhere.
Boy I'd like to work on whatever teams this guy's worked on. People absolutely do all those things.
Critical solutions, but small(er) projects with 2-4 devs: that’s where it’s at. I feel like that’s because it’s actually possible to build a dev-team culture and consensus with the desired balance of quality and delivery speed.