"we have taken latest AI subscription. We expect you to be able to increase productivity and complete 5/10/100 stories per sprint from now on instead of one per sprint that we planned previously".
"we have taken latest AI subscription. We expect you to be able to increase productivity and complete 5/10/100 stories per sprint from now on instead of one per sprint that we planned previously".
I just hope against hope that Google doesn't limit its functionality further and point us towards the new terminal app in the name of security.
Europe has no wafer production and no companies that produce GPUs.
That means it is dependent on Taiwan for wafers and the USA for GPU design.
Then there is the question of whether there is a will to invest. Gemini gives me this list of publicly traded companies in the US and what they invested in AI infrastructure in 2025:
Amazon: $100B
Alphabet: $90B
Microsoft: $80B
Meta: $70B
Tesla: $20B
For Europe, I get this list:
Deutsche Telekom: $1B

Are there really any customers who are demanding AI and threatening to leave if those AI features are missing in every tech-adjacent product?
I think the make-or-break pressure to integrate cutting-edge AI into any business is just hype and FOMO at the leadership level.
Google/Apple could in the future announce a "safety" feature that periodically broadcasts AirTag-like Bluetooth beacons even in Airplane mode or when powered off.
They lost the battle for office software; they can't even establish themselves in the chat space, despite trying to make a chat app that sticks for two decades now; and they squandered the video chat and office spaces too.
If Alphabet were actually efficient, it should own the office space, but 365 ate their office productivity business, and even the utter turd that is MS Teams is beating them on chat.
Even their search keeps getting worse, and the only place where they're actually making progress is AI.
I remain surprised at how long people in the enterprise can flog horses I figured would be dead decades earlier. Too scared to fix fundamental issues, and still running on the fumes of vendor lock-in with exasperated end users.
Even with all the best practices, patterns, and reviews in place, software products often turn out to be held together by hacks and patches.
Add AI and inexperienced developers into the mix, and doesn't the risk of fragile software only increase?
So we try to make every new feature that might be disruptive optional in systemd and opt-in. Of course we don't always succeed and there will always be differences in opinion.
Also, we're a team of people that started in open source and have done open source for most of our careers. We definitely don't intend to change that at all. Keeping systemd a healthy project will certainly always stay important for me.
If you were not a systemd maintainer and had started this project/company independently targeting systemd, you would have had to go through the same process as everyone else, and I would have expected the systemd maintainers to look at it objectively and review it with healthy skepticism before accepting it. But we cannot rely on those basic checks and balances anymore, and that's the most worrying part.
> that might be disruptive optional in systemd
> we don't always succeed and there will always be differences in opinion.
You (including the other maintainers) are still the final arbiters of what counts as disruptive. Differences of opinion in the past have mostly been settled as "deal with it", and that's the basis of the current skepticism.
Somebody will use it and eventually force it if it exists, and I don't think gaming, especially games requiring anti-cheat, is worth that risk.
If that means Linux will never overtake Windows' market share, that's OK. At least the "year of the Linux desktop" memes will still be funny.
Whatever it is, I hope it doesn't go down the usual path: minimal support, then optional support, then becoming virtually mandatory through tight coupling with other subsystems.
There’s a related issue that gives me deep concern: if LLMs are the new programming languages we don’t even own the compilers. They can be taken from us at any time.
New models come out constantly and over time companies will phase out older ones. These newer models will be better, sure, but their outputs will be different. And who knows what edge cases we’ll run into when being forced to upgrade models?
(and that’s putting aside what an enormous step back it would be to rent a compiler rather than own one for free)
IIUC, the same model with the same seed and the same parameters is not guaranteed to produce the same output.
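One low-level reason for this (an illustration of a contributing cause, not a full explanation): floating-point addition is not associative, so parallel reductions on a GPU that combine terms in different orders across runs can produce slightly different logits, which can flip a token choice even with a fixed seed. A minimal Python sketch of the underlying effect:

```python
# Floating-point addition is not associative: the result of a sum
# depends on the order in which terms are combined. Parallel
# reductions (as used inside GPU matrix multiplies) do not pin
# that order down, so two runs can sum in different orders.

big = 1e16  # at this magnitude, float64 spacing is 2.0, so +1.0 can be lost

# Left-to-right: 1e16 + 1.0 rounds back to 1e16, then the 1.0 is gone.
left_to_right = (big + 1.0) + (-big)

# Reordered: the large terms cancel first, so the 1.0 survives.
reordered = (big + (-big)) + 1.0

print(left_to_right)  # 0.0
print(reordered)      # 1.0
```

Because the final token probabilities sit downstream of millions of such sums, identical seeds do not guarantee identical outputs once any single reduction order differs between runs.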
If anyone is imagining a future where your "source" git repo is just a bunch of highly detailed prompt files and "compilation" just needs an extra LLM code generator, they are signing up for disappointment.