The more "sentimental" or "egotistical" a piece of software is in itself, the less I like it. Taken to the limit, the title of the article commands us to generate Skinner boxes to maximize user engagement etc.
This has been our experience in the Greater Boston Mesh.
I understand some hams run a Meshtastic repeater primarily to convince Meshtastic users to become hams.
But yes, it can't realistically be compared to something like a "real" MANET system with $10k radios that can do something like 100 Mbps data rates. It is dramatically more accessible and deployable, though.
I'm quite sympathetic to on-prem but there's a possible argument against.
1. We advocate automation because people like Brenda are error-prone and machines are perfect.
2. We disavow AI because people like Brenda are perfect and the machine is error-prone.
These aren't contradictions, because we only advocate for automation in limited contexts: when the task is understandable, the execution is reliable, the process is observable, and the endeavour is tedious. The complexity of the task isn't a factor - it's complex to generate correct machine code, but we trust compilers to do it all the time.
In a nutshell, we seem to be fine with automation if we can have a mental model of what it does and how it does it in a way that saves humans effort.
So, then - why don't people embrace AI with thinking mode as an acceptable form of automation? Can't the C-suite in this case follow its thought process and step in when it messes up?
I think people still find AI repugnant in that case. There's still a sense of "I don't know why you did this and it scares me", despite the debuggability, and it comes from the autonomy without guardrails. People want to be able to stop bad things before they happen, but with AI you often seem able to do so only after the fact.
Narrow AI, AI with guardrails, AI with multiple safety redundancies - these don't elicit the same reaction. They seem to be valid, acceptable forms of automation. Perhaps that's what the ecosystem will eventually tend toward.
I predict there will actually be a lot of work to be done on the "software engineering" side w.r.t. improving reliability and safety, as you allude to, for handing off to less-than-sentient bots. Improved snapshot, commit, undo, and quorum functionality - this sort of thing.
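To gesture at what that plumbing might look like, here's a minimal sketch in Python. It's purely illustrative - WorkspaceTransaction, run_agent_step, and checks_pass are hypothetical names, not any real API: snapshot an agent's workspace before it acts, then keep or undo the changes based on a check.

    import shutil
    import tempfile
    from pathlib import Path

    class WorkspaceTransaction:
        """Snapshot a workspace, let an agent mutate it, and either
        keep the changes or restore the snapshot (hypothetical sketch)."""

        def __init__(self, workspace: Path):
            self.workspace = workspace
            self.snapshot_dir = None

        def __enter__(self):
            # Snapshot: copy the workspace aside before the agent touches it.
            self.snapshot_dir = Path(tempfile.mkdtemp(prefix="snap-"))
            shutil.copytree(self.workspace, self.snapshot_dir / "state")
            return self

        def rollback(self):
            # Undo: discard the agent's changes and restore the snapshot.
            shutil.rmtree(self.workspace)
            shutil.copytree(self.snapshot_dir / "state", self.workspace)

        def __exit__(self, exc_type, exc, tb):
            if exc_type is not None:
                self.rollback()  # any exception aborts the "transaction"
            shutil.rmtree(self.snapshot_dir)  # drop the snapshot either way
            return False  # re-raise exceptions so failures stay visible

    # Usage: wrap an untrusted agent step; undo it unless the checks pass.
    # run_agent_step and checks_pass are stand-ins for whatever you trust.
    def guarded_step(workspace: Path, run_agent_step, checks_pass):
        with WorkspaceTransaction(workspace) as txn:
            run_agent_step(workspace)
            if not checks_pass(workspace):
                txn.rollback()

Quorum would be the natural next layer: run the step several times (or past several reviewers) and only keep the result when enough of them agree.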
The idea that the AI should step into our programs without changing the programs whatsoever around the AI is a horseless carriage.
It has also been responsible for predicting revolutions which then failed to materialize. 3D printing would make some kinds of manufacturing obsolete, computers would make about half the world's jobs obsolete, etc., etc.
Hand coding can be the knitting to the loom, or it can be industrialized plastic injection molding to 3D printing. How do you know? That distinction is not a detail--it's the whole point.
It's survivorship bias to only look at horses, cars, calculators, and whatever other real job market shifting technologies occurred in the past and assume that's how it always happens. You have to include all predictions which never panned out.
As human beings, we just tend not to do that.
[EDIT: this being Pedantry News let me get ahead of an inevitable reply: 3D printing is used industrially, and it does have tremendous value. It enabled new ways of working, it grew the economy, and in some cases yes it even replaced processes which used to depend on injection molding. But by and large, the original predictions of "out with the old, in with the new" did not pan out. It was not the automobile to the horse and buggy. It was mostly additive, complementary, and turned out to have different use cases. That's the distinction.]
One could have made a reasonable remark in the past about how injection molding is dramatically faster than 3D printing (it applies material everywhere, all at once), scales better for large parts, et cetera. This isn't really true for what I'm calling hand-coding.
Obviously nothing about the future can be known for certain... but there are clear trends that need not stop at software engineering.
I've written a ton of code in my life, and while I've been a successful startup CTO, outside of that I've always stayed in IC-level roles (I'm in one right now, in addition to hobby coding): data structures and pipelines, keeping it simple, all the stuff that makes a thing work and stay maintainable.
But here is the thing: writing code isn't my identity. Being a programmer, vim vs emacs, mechanical keyboards, RTFM noob, pure functions, serverless, leetcode, cargo culting, complexity merchants, resume-driven dev, early semantic CSS lunacy - these are things outside of me.
I have explored all of these things, had them be part of my life for better or worse, but they aren't who I am.
I am a guy born with a bunch of heart defects who is happy to be here and trying new stuff, I want to explore in space and abstraction through the short slice of time I've got.
I want to figure stuff out and make things and sometimes that's with a keyboard and sometimes that's with a hammer.
I think there are a lot of societal status issues (devs were mostly low social status until The Social Network came out) and personal identity issues.
I've seen that for 40 years: anything tied to a person's identity is basically a thing they can't be honest about, can't update their priors on, can't reason about.
And people who feel secure and appreciated don't give much grace to those who don't - there are a lot of callous people out there, in the dev community too.
I don't know why people are so fast to narrow the scope of who they are.
Humans emit meaning like stars emit photons.
The natural world would go on without us, but as far as we have empirically observed, we make the maximally complex, multimodally coherent meaning of the universe.
We are each like a unique write head in the random walk of giving the universe meaning.
There are a ton of issues, from network resilience to maximizing that random meaning-generation walk, where AI and consolidation are extremely dangerous. As far as new stuff in the pipeline goes, I think AI and artificial wombs carry the greatest risk of narrowing the scope of human discovery and unique meaning expansion to a catastrophic point.
But so many of these arguments are just post-hoc rationalizations for what at root is a loss of self-identity. We were always in the business of automating jobs out from under people; this is very weak tea and crocodile tears.
The simple fact is, all our tools should allow us to have materially more comfortable and free lives. The AI isn't the problem; the problem is that devs didn't understand that tech is best when it empowers people to think and connect better, and to have more freedom and self-determination with their time.
If that isn't happening, it's not the code's fault; it's the fault of the network architecture of our current human power structures.