If I have learnt one thing from working in software engineering, specifically on AI-enabled products that aim to empower junior engineers, and from using Copilot professionally, it's that you need even more experience to detect the subtle ways a model fails to understand your domain and your specific intent. If you don't know exactly what you're after, and use the LLM as a sparring partner to bounce your ideas off, you're in for a lot of pain.
Depending on how you phrase your questions, ChatGPT will gleefully suggest a wrong approach, just because it's so intent on satisfying your whim instead of saying no when that would be appropriate.
And on top of that, you don't get the learning that comes from figuring out a new concept yourself. If you already have a feel for the code you would write anyway, and only treat the model as a smart autocomplete, that doesn't matter. But for an apprentice, or a layperson, it will keep code as scary and unpredictable as before. I don't think that should be the answer.
If LLMs were actually some magical thing that could write my code for me, I wouldn't use them for exactly this reason. Using them would prevent me from learning new skills and would actively encourage my existing skillset to degrade.
The thing that keeps me valuable in this industry is that I am always improving, always learning new skills. Anything that discourages that smells like career (and personal) poison to me.
This all depends on your mental relationship with the LLM. As somebody else pointed out, this is an issue of delegation. If you had one or more junior programmers working for you writing code according to what you specify, would you have the same worry?
I treat LLMs as junior programmers. They can make my life easier and occasionally make it harder. With that mindset, you start out knowing that they're going to make stupid mistakes, and that builds your skill of detecting mistakes in other people's code. Also, like biological junior programmers, nonbiological junior programmers quickly show you how bad you are at giving direction and force you to improve that skill.
I don't write code by hand because my hands are broken, and I can't use the keyboard long enough to write any significant amount of code. I've developed a relationship with nonbiological junior programmers such that I now tell them, via speech recognition, what to write and what information they need to create code that looks like code I used to create by hand.
Does this keep me from learning new skills? No. I'm always making new mistakes and learning how to correct them. One of those corrections was realizing that you don't learn anything significant from the act of writing code itself. Career-sustaining knowledge comes at a much higher level.
In other words, your objection isn't to LLMs, it's to delegation, since the exact same argument would apply to having "some magical thing that could write my code for me" be your co-worker or a contractor.
It's fair for the type of code you want to write for your own growth. But even with that, there's more than enough bullshit boilerplate and trivial cross-language difference that contributes zero (or negatively) to your growth, and it's worth having someone else, or something else, write it for you. LLMs are affordable for this, where people usually are not.
Copilot (and so on) are simultaneously incredible and not nearly enough.
You cannot ask it to build a complex system and then use the output as-is. It's not enough to replace developer knowledge, and yet it inhibits acquiring that knowledge.
Eh, your argument could also be used against compilers. Or against language features like strong typing in something like Rust, instead of avoiding our bugs through very careful analysis when writing C code like God intended.
This is not really a new problem; the previous version was "idk, I copy-pasted it from Stack Overflow." True expertise was realizing that the answer often lay buried in the sub-comments, and that the top-voted answer was often not the correct one. LLMs naturally realize none of this.
At least "copy-paste from Stack Overflow" was kind of an in-joke. There was a little social stigma about it. Everyone knew they were being lazy and that they really shouldn't be doing it.
LLMs are different because devs declare with pride that they "saved so much time" by just letting ChatGPT do it for them.
ChatGPT will make something that looks much more like it should work than your copy-pasted code from Stack Overflow. It looks like it does exactly what you want. It's just riddled with bugs. Major ones (it invented an API out of whole cloth; it sure would be convenient if that API did exist, though!) or subtle ones (oh, this bash script will shit the bed and even overwrite data if your paths have spaces). Or it will happily combine code across major API revisions of, e.g., Bootstrap.
I still use it all the time; I just think it makes already-expert users faster while being of much more limited use to people who are not yet experts. In the above case, after being told to make the paths space safe it did so correctly. You just had to know to do that...
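The space-in-paths failure is worth making concrete. A minimal sketch (the directory and file names here are made up): an unquoted shell expansion word-splits on spaces, which is exactly the kind of bug an LLM-generated script tends to ship.

```shell
#!/bin/sh
# Hypothetical setup: a directory whose name contains a space.
dir="My Documents"
mkdir -p "$dir"
printf 'hello\n' > "$dir/notes.txt"

# Buggy pattern: unquoted $dir word-splits, so this runs as
# `ls My Documents` -- two arguments, neither of which exists.
ls $dir 2>/dev/null || echo "unquoted expansion broke"

# Space-safe version: quote every expansion (globs stay outside the quotes).
for f in "$dir"/*.txt; do
  echo "found: $f"
done
```

The same quoting rule is what makes `rm -rf "$dir"` safe where `rm -rf $dir` can delete the wrong thing entirely.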
Almost every "copy-paste from SO" answer was accompanied by lots of caveats from other human commenters. This feedback loop is sorely missing with LLM coding assistants.
The new generation of devs are going to be barefoot and pregnant and kept in the kitchen, building on top of technologies that they do not understand, powered by companies they do not control.
I assume companies will make the job interview process even worse as a result. I really don't do well with CS-heavy interviews. I never studied CS; I studied, as your job description puts it, a RELATED field, took about five different programming language courses at my college, and have years of experience. I'm not going to talk about algorithms I never use, because I build websites.
I think this is true as we keep building up abstraction layers. Computers are getting faster yet feel slower, as we keep reaching for higher-level tech that makes it easier to understand less of how the sausage gets made.
But I don't think this is a now problem, in the age of AI, but has been a growing problem for decades as software has matured.
> ChatGPT will gleefully suggest a wrong approach, just because it's so intent on satisfying your whim instead of saying no when that would be appropriate
Therein lies the mistake. Too many people assume ChatGPT (and similar LLMs) are capable of reasoning. They're not. They are simply giving you what is likely the 'correct' answer based on some sort of pattern.

They don't know what's wrong, so they're not aware they're giving you an inappropriate answer.
> Too many people assume ChatGPT (and similar LLMs) are capable of reasoning. They're not.
Sure it is, in the way a child or someone not very good at reasoning is. You can test ChatGPT yourself on some totally novel, ad hoc reasoning task you invent, one with a single correct conclusion that takes reasoning to arrive at, and it will probably get it if it's reasonably easy, even if you take great pains to make it something totally new that you invented. Try it yourself (preferably with GPT-4o) if you don't believe me. Please share your results.
Years back, I don't know, 15-17 years ago, I got hired as a .NET developer. I worked with people a lot smarter than me, and some who just didn't really know what the hell they were doing, but everyone was nice and helped each other. One day a less experienced colleague asked a whole bunch of small trivial questions, one after another, for the duration of the day. Me and another co-worker, busy with our own stuff, answered the questions as quickly and succinctly as possible. At the end of the afternoon the guy finally asked if we could look at his code, because he couldn't really make it work. Every question he had asked was a small building block of whatever he was working on, but now he was stuck, because in his context he was asking the wrong questions, and every time something wasn't working he'd just attempt to slap on more code. There was no design, no rational plan for how this was even supposed to work.
In this case we took our colleague to a whiteboard, and helped him do an actual design and helped him ask the right questions. LLMs won't question what you're doing, they will happily answer all your questions and help you pile on line after line of broken logic.
> just because it's so intent on satisfying your whim instead of saying no
This really, really, really needs to be fixed. It's probably the most irritating (and potentially risky) part of the whole ecosystem. Nothing is more infuriating than being given code that not only doesn't work, but upon the most casual inspection couldn't possibly work -- especially when it's done it four or five times in a row, each time assuring you that this time the code is gonna work. Pinky swear!
“You’re right, I apologize for the oversight. Let’s do the same bloody thing exactly the same way again, because I don’t know how to answer your question differently but am forced to never admit that…”
Partly disagree, actually. Current web technologies are somewhat unnecessarily complicated. Most people just need basic CRUD and a usable front end for their daily tasks.
> only treat the model as a smart autocomplete, that doesn’t matter
This is the only way I like to use it. Also in some cases for refactoring instead of sitting there for an hour hand crafting a subtle re-write, it can show me a diff (JetBrains AI is fantastic for my personal projects).
Of course you're right about today's LLMs, but the author imagines a not-too-unlikely incremental improvement on them unlocking an entirely new surface area of solutions.
I really enjoyed the notion of barefoot developers, local-first solutions, and the desire to wrest control over our digital lives from the financialists.
I find these ideas compelling, even though I'm politically anti-communist.
> Depending on how you phrase your questions, ChatGPT will gleefully suggest a wrong approach, just because it's so intent on satisfying your whim instead of saying no when that would be appropriate.
Which is easily solved by using another agent that is told to be critical and find all flaws in the suggested approach.
Have you seen AI code review tools? They are just as bad as any other AI product: they have a similar chance of fixing a defect or introducing a new one.
Sounds great, but this won't work out the way the author imagines. We have a very strong bias towards anything technical, but if you've ever worked outside the SWE field, you'll have seen that half of the people simply aren't interested in "thinking". They don't like or enjoy their jobs. They sure as hell aren't going to sit there and think, "how do I break this problem down into a dozen or hundreds of small steps to create a small app that solves it?"
The majority of people are content with learning the bare minimum needed to get by and never having to learn anything new. Don't believe me? The proof is self-evident the moment you try to update a UI in a way that requires users to do things differently than before.
And this is all fine, I've accepted it by now. It's the way it's always been and always will be. Still, these new tools will empower a relatively small percentage of the population to create amazing open source apps that will be used by a lot of people. The tools also very much empower pro developers to build a lot more apps with less effort than before, which I'm looking forward to.
A tangential nitpick: Scientists, Engineers who are not of the Software persuasion, Accountants, and Lawyers, to name just a few, also make up people who work "outside the SWE field". Almost all of these people are interested in thinking; a few of them will probably not be, just as one finds a few people working in the SWE field being reluctant to think. There is a very good reason why people -- even technically minded people -- have trouble with changing UIs, and it is not about them not wanting to think. Consider that the UI could be part of a workflow that is more of a side annoyance than their main job, and that they would rather conserve their mental bandwidth for problems that are more important to them. A doctor shouldn't have to get used to a new timesheet UI just because a designer woke up one fine morning with a new thought.
I do not think it is reluctance of thinking when pianists would complain about Steinway changing the order of piano keys every few years, because a new Steinway engineer thinks his order is better.
I think you're right that the majority of people do not want to learn, but in my experience there are a large number of engineers who fit into that category, just as there are pockets of other professions that are willing to learn, explore, and improve.
> Don't believe me? The proof is self-evident the moment you try to update a UI in a way that requires users to do things differently than before.
Is this because they don't like learning something new, or because they've been burned so many times by badly done redesigns that make things harder to use for pretty much everyone? I'm sure half of Hacker News despises how Windows has been redesigned from Vista onwards, or at the very least since 11.
Perhaps many people don't like change as much because a lot of changes come less from user needs/insight into what would actually improve the program and more from the need to give designers something to do (or provide another way to add ads/track users/appeal to shareholders).
Having an interest in making your own job easier/more efficient is very different from being interested in accepting every redesign under the sun.
This bleak-ass sorry outlook, and the fixedness on looking at the bleaker parts of it, to me isn't actually about the sad bleak people. It's about the sorry-ass conditions & techno-social-cultural bankruptcy that let so many people down, that didn't give them good hooks to start digging in & trying to enjoy their own powers. It's also about being overwhelmed and emotionally underwater from unrewarding crappy rentier-capitalism.
The belief system of humanity in itself needs nutrients, needs to seed and grow people positively. Open source is by far one of the strongest currents about for that.
Right now open source is a mess & bogglingly complicated. Technology in general has gotten vastly less accessible year after year, the on-ramps to understanding detoured into ever more appliance-ized experiences & neofeudal cloud empires happening on other people's computers. Far fewer are primed for the esoterics of software, know the basics, or have positive rewarding experiences tinkering and tweaking. Humanity has been shoved into superficial & co-opted/dishonest experiences. Heck yes, a lot of things are against the possibility of the good.
But unlike almost all our other relationships with the material world, we can directly improve things, can keep enhancing things practically without material constraint, limited chiefly by imagination & will (ML excepted, I guess). We can grow the pot, open up computing & its softer side progressively, create better on-ramps & more learnable, observable, experienceable authentic experiences.
Over time I hope the barriers to being interested might be less severe. The other material conditions dragging people down & shutting them off from engagement might persist. But I think there can be a very tangible source of hope here, a demonstration that people, when they are doing their thing, creating as they might, when they are empowered, are cool. Are good. Are a role model. And that - with hope - will keep us iterating & improving, will hopefully lead to more efforts to keep redefining & expanding general purpose computing & soft systems in less deeply technical terms.
I have a really strong reaction against projecting the users of today & their sorry chained-to-the-cave-wall state to what people might be if there were some honest shakes out there. Open source is far from a perfect liberator today, 100%, but this is part of the journey, is part of why we need to be practicing & trying, so we can create & iterate towards alive interesting systems that welcome people in.
Unpopular opinion inbound: what will spark a barefoot developer revolution is not LLM auto-coding, it's making spreadsheet software more easily extendable and FUN.
By extendable, I mean doing things like generating and sending emails, and using plugins to integrate with external services. By fun, I mean non-enterprisey, something that one would WANT to engage in as a hobby and that a total novice can pick up and gradually learn. Something you can engage in with friends.
I know that there are things that meet the extendable part of the equation; it's the fun hobby part that I don't think has been cracked yet.
I think a big part of why I became a coder is because I enjoyed playing with Microsoft Access as a kid - but I'm a weird nerd, so I don't think that'll cut it for others.
I largely agree with you. Spreadsheets are arguably the most successful no-code/low-code tool out there in that they've enabled millions of people to automate some task or complex process and effectively become programmers, despite the fact that I personally find an empty spreadsheet to be kind of intimidating to look at.
I think the only point of disagreement we might have is that I don't necessarily think that the spreadsheet, even an improved one, is the best graphical model/structure for articulating complex processes, but it seems to be the best we've discovered thus far.
"I don't necessarily think that the spreadsheet, even an improved one, is the best graphical model/structure for articulating complex processes, but it seems to be the best we've discovered thus far."
I actually do agree with you on that; as you say, it's just the best we've discovered so far. But if there is a better model which is as versatile and easy to use, I'm totally open to it.
Currently trying Emacs (after years of Vim), and what amazes me is how easy it is to extend the system. I don't know if we can extend this to novices, but spreadsheets and form builders look like the most viable candidates. Maybe extend them with modules like data providers (files, music, contacts, emails, ...) and actions (open file, play audio, call, send email, ...). But this requires open standards and protocols, and the industry is trying hard to move away from those.
Couldn't agree more - I love the whole idea of barefoot developers, and the analogy with China's barefoot doctors was superb, but all the time I was thinking that what we need is Visual Basic and Access and HyperCard revamped for the internet era, not LLM-generated code.
Oh that’s interesting and I think you’re right. Software as it’s developed today is very abstract but spreadsheets have a visual representation that makes them much more approachable.
I love this take. Spreadsheets are one of the coolest low-code things ever that even normies can have fun with.
At my first job out of uni, a telco had a spreadsheet with 25+ sheets, each with 1000s of rows, accompanied by some normie VBA scripts, to basically power optical network allocations…
I was amazed and terrified when I first saw it. Absolutely no source control.
Speaking of spreadsheets, is there a spreadsheet with built in source control?
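Not as a built-in feature that I know of. The closest common workaround is keeping each sheet as a CSV export and tracking those in git, where a cell edit becomes a readable line diff. A rough sketch (the file and column names are invented):

```shell
#!/bin/sh
# Keep a sheet's CSV export under git so cell changes show up as line diffs.
mkdir -p telco-sheets && cd telco-sheets
git init -q
printf 'circuit,allocation\nA1,120\n' > optical.csv
git add optical.csv
git -c user.name=demo -c user.email=demo@example.com commit -q -m "initial sheet"

# Simulate editing one cell, then review the change before committing.
printf 'circuit,allocation\nA1,150\n' > optical.csv
git diff -- optical.csv   # shows the -A1,120 / +A1,150 lines
```

It doesn't version formulas or formatting, but for the data itself it beats having no history at all.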
Historically a number of people started as single small developers with tools like FoxPro, Delphi, and (even earlier) Clipper/dBase. These allowed easy creation of some kind of UI. All of these have a database under the hood. The combination of the two allowed a lot of small groups to make quite complicated systems for a single platform.
Now there's an expectation for things to be web-based and the entry point feels less obvious than it used to be.
> what will spark a barefoot developer revolution is not LLM auto-coding, it's making spreadsheet software more easily extendable and FUN.
I've actually seen it happen using exactly that.
A (non-tech) person trying to write an inventory management software for their company (which is highly regulated and inspected) using Google AppSheets.
No amount of "don't" was enough to convince them... 6 months and much of their sanity later they gave up.
I've seen it happen too. No amount of "don't" was enough to convince them... 6 months later they had a weird hacky solution that solved their problem, was used across the company, and on the whole seemed to do what the relevant people needed it to do.
The barrier to entry for programming couldn’t really be much lower. Pretty much everyone with a computer can write and run JavaScript in their browser, and there are even web based environments and editors like GitHub code spaces.
Google Docs has a pretty neat JavaScript API that can be used from inside the apps themselves too.
Kind of funny that someone's syllabus from a class taught in 2015 has done more to broaden my mind concerning speculative computing futures and "what-ifs" than anything else I've come across.
I don't remember if it was Geoffrey Litt, or someone else, but I used to follow a few people in that noosphere on Twitter. One of them promoted a short film which was captioned (paraphrasing), "this man made this film all by himself," even though there was clearly a live female actor in it. There was that, and then sexist remarks, possibly by one or two different accounts, at unrelated times.
> Personally i'd rather we bridge the gap between using excel and writing python with something that is between the two, rather than relying on LLMs.
Maybe Lua[1] and Excel. Python's significant whitespace is unsuitable for the one-liners that spreadsheet experts write.
Writing code for the spreadsheet should not require putting that code in a separate file just so the significant whitespace can be maintained.
Whatever you write in a single cell should be the programming language. If you later want to move it to its own file, module, package, CLI, etc., you should be able to.
[1] Visual Basic turned out to be very usable with spreadsheet experts.
Excel's cramped formula bar that defaults to one line is, to borrow a turn of phrase from Raskin, not at all something that is necessary—it is merely customary.
There's no good reason that, when you start to type a formula or click on a cell that already has one, Excel doesn't pop up an ephemeral sidebar overlay with an 80-column text editor. There's no good reason that it doesn't run a gofmt-style code prettifier on long, sprawling formulae. There's no good reason that clones like Google Docs don't do this either. The best time would have been when we transitioned to widescreens.
Spreadsheet programming is bad because neither Microsoft nor its competitors actually care about making it good.
I agree that something between Python and Excel is a good idea.
But still, LLMs are good at using in-context learning to convert natural language into a given pattern language or format, in cases where people don't know the syntax. Making visual interfaces so easy to use that they don't need such magic would be nice, but it seems hard to do.
Well, if i knew what the solution was i would be making it instead of waxing poetic about it on hn.
While SQL (or, say, Microsoft Access) could be seen as an intermediary, I don't really think it is. It is very specialized to specific applications, and arguably RDBMSes are more complex than basic Python.
- Find any cool tool online, download it, use it locally. Your data is persisted to your own local machine and your own personal cloud.
- Visit any website, download it as your own, modify it. Host it as your own and continue to add to it online.
- Browser vendors implement user accounts & hosting into the browser, so anyone can make a working app locally with a little bit of HTML and serve it to the whole internet with the press of a button.
- If your app takes off and goes viral, part of the economic value provided flows back to you through micropayments.
Basically: portable HTML "objects" that can be treated as mutable apps instead of only static documents = like GitHub + Codepen + Vercel + NoCode in one neat little package.
The first two points are foundational design principles for Decker[1], a programming environment which can be used as a local application or as a single-file self-contained web page which still retains all the development tools. The same is true for TiddlyWiki[2].
The web ecosystem already provides the substrate necessary to realize these visions, it's a matter of building things with a particular perspective in mind; a rejection of centralized infrastructure which is in many cases simply not needed.
That doesn't seem all that different from Netscape, which included a WYSIWYG editor.
Obviously this is much harder given how much more server-side the modern internet is. But I think the more fundamental problem is that 99% of users didn't want this in the past and still don't really want it.
Micropayments are still not a thing yet. I think that if they were, they would give rise to a lot of interesting movements like this one, more or less organically. The problem with open source and giving things away free is that people just take without giving back.
The author makes it sound like most software in the world is software that is made to scale. I strongly doubt that. I would say that at least ~50% of software is ad-hoc scripts or specialized tools for single organizations.
It is no wonder that the software that is used by a large number of people is talked about more than software that is only used by a handful. Home-Cooked Software already exists, it is just not as easy to see.
AI/ML is going to absolutely destroy the first generations of junior developers. They won't get enough practice doing software development by themselves and won't know enough to fix the subtle bugs the AI introduces into the code. It's like using a graphing calculator in high school: probably perfectly OK for a smart kid taking calc his senior year, but it would absolutely be used as a crutch by struggling or lazy students in a pre-algebra class.
Right now AI is mostly controlled by mega corps abusing copyright laws because they can get away with it. I seriously doubt this is going to drastically change.
Basically, I don't understand this author's cute, homely portrayal of AI.
Using an LLM _is_ a skill, too.
> it makes already-expert users faster while being of much more limited use to people who are not yet experts

Isn't that pretty much the status quo?
The presentation was also quite lovely.
Which is easily solved by using another agent that is told to be critical and find all flaws in the suggested approach.
Have you seen AI code review tools? They are just as bad as any other AI product - they have a similar chance of fixing a defect or introducing a new one.
The majority of people are content with learning the bare minimum needed to get by, and never want to learn anything new. Don't believe me? The proof is self-evident the moment you try to update a UI in a way that requires users to do things differently than before.
And this is all fine, I've accepted it by now. It's the way it's always been and always will be. Still, these new tools will empower a relatively small percentage of the population to create amazing open source apps that will be used by a lot of people. The tools also very much empower pro developers to build a lot more apps with less effort than before, which I'm looking forward to.
Whereas inside it's closer to 90% :/
This is probably true of SWEs as well (although in some places it's hard to get away without thinking at least a bit.)
Is this because they don't like learning something new, or because they've been burned so many times by badly done redesigns that make things harder to use for pretty much everyone? I'm sure half of Hacker News despises how Windows has been redesigned from Vista onwards, or at the very least since 11.
Perhaps many people don't like change as much because a lot of changes come less from user needs/insight into what would actually improve the program and more from the need to give designers something to do (or provide another way to add ads/track users/appeal to shareholders).
Having an interest in making your own job easier/more efficient is very different from being interested in accepting every redesign under the sun.
The belief system of humanity itself needs nutrients, needs to seed and grow people positively. Open source is by far one of the strongest currents around for that.
Right now open source is a mess & mind-bogglingly complicated. Technology in general has gotten vastly less accessible year after year, the on-ramps to understanding detoured into ever more appliance-ized experiences & neofeudal cloud empires happening on other people's computers. Far fewer people are primed for the esoterics of software, know the basics, or have positive, rewarding experiences tinkering and tweaking. Humanity has been shoved into superficial & co-opted/dishonest experiences. Heck yes, a lot of things are stacked against the possibility of the good.
But unlike almost all our other relationships with the material world, we can directly improve things, can keep enhancing things practically without material constraint, limited chiefly by imagination & will (ML excepted, I guess). We can grow the pot, open up computing & its softer side progressively, create better on-ramps & more learnable, observable, experienceable, authentic experiences.
Over time I hope the barriers to being interested might be less severe. The other material conditions dragging people down & shutting them off from engagement might persist. But I think there can be a very tangible source of hope here, a demonstration that people, when they are doing their thing, creating as they might, when they are empowered, are cool. Are good. Are a role model. And that - with hope - will keep us iterating & improving, will hopefully lead to more efforts to keep redefining & expanding general purpose computing & soft systems in less deeply technical terms.
I have a really strong reaction against projecting the users of today & their sorry chained-to-the-cave-wall state to what people might be if there were some honest shakes out there. Open source is far from a perfect liberator today, 100%, but this is part of the journey, is part of why we need to be practicing & trying, so we can create & iterate towards alive interesting systems that welcome people in.
By extendable, I mean doing things like generating and sending emails, and using plugins to integrate with external services. By fun, I mean non-enterprisey, something that one would WANT to engage in as a hobby and that a total novice can pick up and gradually learn. Something you can engage in with friends.
I know that there are things that meet the extendable part of the equation; it's the fun hobby part that I don't think has been cracked yet.
I think a big part of why I became a coder is because I enjoyed playing with Microsoft Access as a kid - but I'm a weird nerd, so I don't think that'll cut it for others.
I think the only point of disagreement we might have is that I don't necessarily think that the spreadsheet, even an improved one, is the best graphical model/structure for articulating complex processes, but it seems to be the best we've discovered thus far.
I actually do agree with you on that; as you say, it's just the best we've discovered so far. But if there is a better model which is as versatile and easy to use, I'm totally open to it.
At my first job out of uni, a telco had a 25+ sheet spreadsheet, each sheet with 1000s of rows, accompanied by some normie VBA scripts, to basically power optical network allocations …
I was amazed and terrified when I first saw it. Absolutely no source control.
Speaking of spreadsheets, is there a spreadsheet with built in source control?
Now there's an expectation for things to be web-based and the entry point feels less obvious than it used to be.
I've actually seen it happen using exactly that.
A (non-tech) person trying to write inventory management software for their company (which is highly regulated and inspected) using Google AppSheets.
No amount of "don't" was enough to convince them... 6 months and much of their sanity later they gave up.
Google docs has a pretty neat JavaScript api that can be used from inside the apps themselves too.
https://futureofcoding.org/catalog/
https://cristobal.space/writing/folk-computer.html
https://dynamicland.org/
https://www.inkandswitch.com/local-first/
https://www.geoffreylitt.com/
https://maggieappleton.com/folk-interfaces
Makes the observation that there are intermediate users that are a natural target for end-user programming.
Kind of funny that someone's syllabus from a class taught in 2015 has done more to broaden my mind concerning speculative computing futures and "what-ifs" than anything else I've come across.
Anyway, whoever they were, good riddance.
Some folks from Ink and switch made a podcast that I've really liked: https://museapp.com/podcast/
Bret Victor (from Dynamicland) has lots of projects, articles, and talks: https://worrydream.com/Home2011/
The future of coding has a newsletter and a podcast, but I haven't dug into them: https://newsletter.futureofcoding.org/
Call me a skeptic, but I remain very unconvinced that LLMs will be the enabling tool that lets non-programmers program.
Maybe Lua[1] and Excel. Python's significant whitespace is unsuitable for doing the one-liners that spreadsheet experts do.
Writing code for the spreadsheet should not require putting that code in a separate file just so the significant whitespace can be maintained.
Whatever you write in a single cell should be the programming language. If you later want to move it to its own file, module, package, CLI, etc., you should be able to.
[1] Visual Basic turned out to be very usable for spreadsheet experts.
There's no (good) reason that when you start to type a formula or you click on a cell that has a formula already in it that Excel doesn't make an ephemeral sidebar overlay appear with an 80-column text editor in it. There's no good reason that it doesn't run a gofmt-style code prettifier on long, sprawling formulae. There's no good reason that clones like Google Docs don't do this, either. The best time would have been when we transitioned to widescreens.
Spreadsheet programming is bad because neither Microsoft nor its competitors actually care about making it good.
https://support.microsoft.com/en-us/office/get-started-with-...
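A gofmt-style pass over a formula really is a small amount of work. Here's a toy sketch (my own, not anything Excel or Google Sheets provides): it breaks arguments onto their own lines and indents by parenthesis depth, ignoring edge cases like commas inside quoted strings.

```python
def prettify_formula(formula: str) -> str:
    """Toy gofmt-style pass for spreadsheet formulas: one argument per
    line, indented by parenthesis depth. Ignores quoted strings for
    brevity, so it is a sketch, not a production formatter."""
    out, line, depth = [], "", 0
    for ch in formula:
        if ch == "(":
            line += ch
            out.append(line)          # break after an opening paren
            depth += 1
            line = "  " * depth
        elif ch == ",":
            line += ch
            out.append(line)          # break after each argument
            line = "  " * depth
        elif ch == ")":
            out.append(line)          # close paren gets its own line
            depth -= 1
            line = "  " * depth + ch
        else:
            line += ch
    out.append(line)
    return "\n".join(l for l in out if l.strip())
```

For example, `=IF(A1>0,SUM(B1:B9),0)` comes out with `A1>0`, the nested `SUM(...)`, and the `0` each on their own indented line. The point isn't this particular layout; it's that a few dozen lines already beat editing a sprawling formula in a one-line input box.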
While SQL (or, say, Microsoft Access) could be seen as an intermediary, I don't really think it is. It is very specialized to specific applications, and arguably RDBMSs are more complex than basic Python.
- Find any cool tool online, download it, use it locally. Your data is persisted to your own local machine and your own personal cloud.
- Visit any website, download it as your own, modify it. Host it as your own and continue to add to it online.
- Browser vendors implement user accounts & hosting into the browser, so anyone can make a working app locally with a little bit of HTML and serve it to the whole internet with the press of a button.
- If your app takes off and goes viral, part of the economic value provided flows back to you through micropayments.
Basically: portable HTML "objects" that can be treated as mutable apps instead of only static documents = like GitHub + Codepen + Vercel + NoCode in one neat little package.
The web ecosystem already provides the substrate necessary to realize these visions, it's a matter of building things with a particular perspective in mind; a rejection of centralized infrastructure which is in many cases simply not needed.
1) http://beyondloom.com/decker/
2) https://en.wikipedia.org/wiki/TiddlyWiki
Obviously this is much harder given how much more server-side the modern internet is. But I think the more fundamental problem is that 99% of users didn't want this in the past and still don't really want this.
It is no wonder that the software that is used by a large number of people is talked about more than software that is only used by a handful. Home-Cooked Software already exists, it is just not as easy to see.
Right now AI is mostly controlled by mega corps abusing copyright laws because they can get away with it. I seriously doubt this is going to drastically change.
Basically, I don't understand this author's cute, homely portrayal of AI.