> ... the beginnings of intelligent ... assistants in our IDEs ... specialize (sic) in ... C/C++, Java, Mobile. They will have intimate knowledge of common APIs ... trained on tens of thousands of code projects pulled from the open repositories across the web (google code, github, bitbucket,...). In addition to having 'read' more orders of magnitude more code then any human could in a lifetime, they will also have rudimentary ability to extract programmer intent, and organizational patterns from code. ... The computer automatically bringing up example code snippets, suggesting references to existing functionality that could be reused.
This person (Marc DeRosa) predicted GitHub Copilot within a margin of one year. Incredible.
Cherry picking only the accurate prediction makes it seem like the predictor is really good!
Look at the rest of that quote:
> The human, computer pair, will also interactively suggest, confirm and fine tune specifications of mathematical properties and invariants at points in the the program.
That hasn't happened and, as far as I am aware, is not even close.
> This will help the computer assistant to not only better understand the program but also to to generate real time, verification test runs using SAT technology,
Ditto.
> Interactive testing will eliminate whole classes of logic bugs, making most non ui code correct by construction.
Ditto.
So, 1 out of 4 predictions correct, which makes it worse than random chance?
I don't know, some of that is at least starting to happen with Idris 2, though not in the exact way predicted: https://www.youtube.com/watch?v=mOtKD7ml0NU (Type Driven Development with Idris 2)
Short summary of linked video: when your type system is powerful enough you can restrict the set of possible implementations to the point that the compiler can make a decent guess as to what the program should be to satisfy the type signature.
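To make that concrete with a rough TypeScript analogy (my sketch, not from the video; Idris goes much further, since dependent types let the compiler do genuine proof search, but the flavour is similar):

```typescript
// The more polymorphic the signature, the fewer implementations can
// possibly type-check. For a fully generic T, this is essentially the
// only total, side-effect-free implementation:
function identity<T>(x: T): T {
  return x; // nothing else works for *every* T
}

// Same idea with two type parameters: the types force us to call f on a.
function apply<A, B>(f: (a: A) => B, a: A): B {
  return f(a);
}
```

With dependent types the signature can pin down behaviour precisely enough that "guess the program from the type" becomes a real feature rather than a party trick.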
> Leveraging the strengths of the computer and human will lead to an order of magnitude improvement in programmer productivity. Interactive testing will eliminate whole classes of logic bugs, making most non ui code correct by construction.
I'd like to see evidence / experience reports backing up this part. Certainly Copilot exists, but what I've read about it is pretty mixed.
More like Brooks is the one who has been right (for three decades now) with his ‘No Silver Bullet’ essay: we will not see another order-of-magnitude productivity improvement in programming after high-level languages became a thing.
I do, and it's helpful most of the time, mostly because it behaves like an intelligent text expander, and a couple of times a day I'm consciously aware of it helping me in an even more "intelligent" way. There have also been situations where it was distracting/confusing, but I can deal with that. I work as a web dev, with TS, JS and PHP, and in my case there's no room for some great solution that I wouldn't have thought of myself (fewer than 5 times have I used something generated from a comment); it simply predicts what I want to do next in the line, and instead of typing 50 characters to finish it, I can hit tab. Honestly, I don't want to work without it anymore.
EDIT: Regarding those situations when it doesn't predict what I wanted, I feel my brain is getting better at not focusing on that and moving on. At first it was distracting; now I subconsciously know that the suggested solution might be wrong, and I decide faster whether it's something I should accept or skip. Your tool is adapting to you and you're adapting to your tool to find the right balance :)
Writing boring chores or repetitive code is amazing. Need to handle every case of a switch in pretty much the same way, changing only one thing? Write the first one; the rest is auto-coded almost perfectly.
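For example, with a hypothetical conversion helper like this (names and constants are mine, not the commenter's), after you write the first case the remaining ones are exactly the kind of one-value-changes-per-line repetition an assistant completes well:

```typescript
type Unit = "km" | "cm" | "mm" | "mi";

// After typing the "km" case by hand, each remaining case repeats the
// same shape with a different constant, which is the sort of line an
// assistant fills in from context.
function toMeters(value: number, unit: Unit): number {
  switch (unit) {
    case "km": return value * 1000;     // written by hand
    case "cm": return value / 100;      // suggested
    case "mm": return value / 1000;     // suggested
    case "mi": return value * 1609.344; // suggested
  }
}
```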
I do every day. I code 5 times faster on problems several times harder.
You also get good at 'using Copilot', just like you can be 'good at googling'. So if you're not already doing it, start now.
IMO, there's literally no point coding without it; you're completely wasting your time. However, it has limits: it only helps you write code, and architecture is still down to you. If it could read your whole codebase rather than just the file you're in, and if you could supply it with prompts via URLs (i.e. preload it with hints from other codebases), then it would really become something else.
People only seem to have heard about GitHub Copilot. But Microsoft has a similar solution (that is not as... invasive) that was first released in preview in 2018. It's called IntelliCode, available for multiple languages both for Visual Studio Code[1] and regular Visual Studio[2] (but not enabled by default last time I checked -- which was a while ago, granted).
JetBrains has their own experimental thing, which is not enabled by default either.
And there's also Codota and Tabnine.
Copilot is actually fairly late to the party here.

[1]: https://marketplace.visualstudio.com/items?itemName=VisualSt...

[2]: https://github.com/MicrosoftDocs/intellicode/blob/master/doc...
Well, people were already making API calls to Stack Overflow at the time to pull in the most popular answer based on code comments. There were several Sublime plugins. So this wasn't so hard to imagine.
It's honestly not that hard to predict. With a bit of statistics and some insight into early deep learning, it's obvious that a lot of data with discernible patterns will produce good results.
The “Some safe and some bold predictions” comment is almost exactly my view of how programming should evolve (functional, reactive, moving toward dependent types, etc.). Interesting how clear it already was in 2012!
I think we mostly have gone in that direction, even if probably more slowly than the (already cautious) commenter predicted.
Honest question: why are we as a community so slow at evolving a good, solid programming environment?
I get that each time a new language is introduced, porting over all the existing code + training the people is an immense task. But this is _exactly the problem_, right?
Why aren’t we able to stop for a while and sort out once and for all a solid framework for coding?
It’s not a theoretical ideal: I’m very convinced that the dumb piling up of technologies/languages/frameworks that we use now is significantly slowing down the _actual daily work_ we do of producing software. Definitely >>50% of my time as a programmer is spent on accidental complexity; that I know for sure.
It’s very practical: at this point, this whole thing simply feels like very bad engineering, tbh.
There are many different types of programming. For example, there is an increasing number of people who program but do not have programming-related job titles.
- A designer might use HTML/CSS/JS directly or with "low code" tools (AKA graphical IDEs; they are still essentially programming) to implement frontends, or a bit of scripting to automate stuff in their Adobe tools.
- An analyst might use Excel (a programming environment) or SQL or some hybrid tool.
- Mathematicians and scientists are increasingly programming.
- An electrical technician or engineer programs their installations.
- And so on...
There isn't one paradigm, or even one set of paradigms, that fits them all. Some languages are deliberately straightforward and provide minimal abstractions, others have strong runtime guarantees, and others enable flexible, composable abstractions.
I think programming will become even more diverse in the future.
> The “Some safe and some bold predictions” comment is almost exactly my view of how programming should evolve (functional, reactive, moving toward dependent types, etc.). Interesting how clear it already was in 2012!
Given that languages used broadly in industry are lagging 20+ years behind research, that's not a good prediction. It's just the way things are going.
BTW: There's still no mainstream language with full dependent typing… (I count Haskell as mainstream; Scala seems closest¹ but still has a long way to go).
Actually, people are already so overwhelmed by all the really old research ideas now arriving in languages that some of them retreat to much more basic, 1970s-level approaches, like Go, thinking programming is otherwise "too complicated". On top of that: I don't think "the average dude" will ever use, for example, dependent types, even if they appeared in some more broadly used language.
My guess for the future is more that we will see a split between a big group of people using advanced low-code tools for day-to-day programming and data exploration, and a much smaller group of "experts" doing the "hard things" needed to make those tools work. On the one hand, this will bring programming further "to the masses". On the other hand, the hard parts will not only remain, they will get even harder and much less approachable by arbitrary people.

¹ https://arxiv.org/abs/2011.07653
> Given that languages used broadly in industry are lagging 20+ years behind research
This is obviously true in the abstract, but the real breakthrough happens when you make those concepts ergonomic for the working developer. The theory behind dependent types is well established, but I can't really write my next project in Idris, can I?
Similarly, there was a time when C was the only sensible choice to write anything non-academic, non-toy. It's not like people didn't know about OOP or functional programming back then, but it wasn't a realistic possibility.
Or parametric polymorphism, better known by its common name, "generics". The concept has been around at least since the 70s, but C++ templates weren't widely used before, what, 1990?
> My guess for the future is more that we will see a split between a big group of people using advanced low-code tools for day-to-day programming
This has arguably already happened; we call them data scientists. Many of them are technical and have some light scripting skills, but they couldn't be put to work on, say, your average backend project. Obviously this is a gross generalization and titles mean very little; I'm pretty sure there are data scientists who kick ass at coding.
1. Societal issues. Microsoft wanted a Java it could control and change, so C# it is. Google is moving away from Java to Kotlin because of disagreements with Oracle.
2. Different desired trade-offs: fast-to-learn vs. feature-rich vs. ease of use vs. configurability vs. portability vs. speed vs. safety vs. developer-friendly vs. user-friendly vs. admin-friendly vs. development speed vs. program correctness.
3. Low proof-of-concept cost. If some lib involves a lot of boilerplate for my use case, it is easy to create my own improved version and feel a sense of achievement (my version might be just a wrapper at first, but the gate has opened).
4. Reducing the programming world's complexity is at the bottom of everybody's priority list.
Google moving away from Java has nothing to do with Oracle. That lawsuit was about an old Sun license of Java that Google was thought to have violated.
Since then, Java has been completely open-sourced, with the same license as the Linux kernel, so there is nothing stopping Android from using it. The preference for Kotlin comes from the fact that Android's Java is barely at OpenJDK's Java 8 level, making Kotlin's syntactic sugar all the more important there.
> Google is moving away from Java to Kotlin because of disagreements with Oracle.
Java and its VM are open source, so that argument doesn't make any sense.
Also, I don't think Google would, or even could, move away from one of their primary languages.
Kotlin, on the other hand, is only a significant trend in Android development, and I'd guess it will get dropped like a hot potato in favor of Dart as soon as Fuchsia arrives as the Android successor.
Kotlin has a problem: it tries to be "the better Java", but Java is (slowly) picking up all the features. The space for a significantly more powerful JVM language is already taken by Scala. So in the long run there won't be much space left for Kotlin: as soon as Java gets "more modern features", Kotlin will have a hard time competing, as likely only syntax differences will remain.
Reactive is a mistake. It's a tool for a specific job, sure, but it's too big and opinionated an abstraction to use everywhere. The future of programming should be reality-based.
> Why aren’t we able to stop for a while and sort out once and for all a solid framework for coding?
There's never going to be one "solution" here. There will always be tradeoffs depending on the constraints of a given domain or problem space. One-size-fits-all solutions end up not being great for anything, since they have to make so many compromises.
Also, great tools are made by solving real problems. If we just went to Plato's heaven and dreamed up a "perfect" programming environment, we would end up with something that solves our problems in theory. But the issue is that our problems don't exist in theory; they exist in reality.
For a mere 8-year timeframe, the predictions seem rather poor. It feels like there was such a push among people to have the most forward-thinking ideas that they overestimated how much would change.
Looking back, the biggest changes between now and 2012 are:
* Git (and GitHub) took over the version-control world. Git was already the leader in 2012, but Mercurial was doing OK and SVN was still around to a much greater extent.
* Docker/Kubernetes and the container ecosystem. There was a guess here about app servers, but the poster seemed to be thinking more of PaaS platforms and Java app servers like Jetty. I guess you could say "serverless" is sort of in that vein, but it's far from the majority of use cases that the poster predicted.
* Functional programming ideas became mainstream, except in Go, which is a sort of reactionary back-to-basics language.
Overall though:
Good predictions:
* The IDE/editor space gets a shake-up, though maybe not in the way any of the specific predictions guessed (the rise of VS Code)
* Machine learning gets bigger
* Apple introduces a new language to replace Objective-C
* Some sort of middle ground in the dynamic/static divide (static languages have gained local type inference, dynamic languages have gained optional typing; see the sketch after this list)
Bad predictions:
* No-code tools are no further along in mainstream adoption than they were in 2012
* Various predictions that Lisp/ML/Haskell get more mainstream, rather than just having their most accessible ideas cherry-picked
* A new development in version control displaces git
* DSLs, DSLs everywhere. DSLs for app development, DSLs for standard cross-database NoSQL access, ...
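To illustrate that middle-ground point from the "good" list above (my TypeScript sketch; TypeScript happens to demonstrate both halves of the convergence in one language):

```typescript
// Dynamic side gaining optional types: annotations are opt-in, so plain
// JS can be typed incrementally (this compiles when the `noImplicitAny`
// check is off, and you can add `: string` whenever you get around to it).
function greet(name) {
  return "hello " + name;
}

// Static side gaining local inference: no annotations needed, yet this
// is fully statically checked -- `total` is inferred as number.
const total = [1, 2, 3].reduce((acc, n) => acc + n, 0);
```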
> * No-code tools are no further along in mainstream adoption than they were in 2012
In 200 years, people will still be predicting the rise of no-code solutions.
If you are executing diagrams and schematics, you still have code. And the people maintaining that code are still coding. That is, they are coders. They're just working in a whole new stack that doesn't have git, diff, a variety of editors, open standards for encoding, etc.
There's no such thing as no-code; it's just a question of the properties of the coding scheme. For example, C.S. Peirce proposed a Turing-complete "no-code" graphical logic (aka programming) language[1] in the 19th century.
[1] http://www.jfsowa.com/pubs/egtut.pdf
This is true, and there are also coders today whose code does not run on computers but in people's heads, as the implementation of diagrams and schematics.
I agree with your summary, but predictions are very hard (especially about the future).

I feel like there is also a return to basic imperative programming, with OO and functional where it makes sense.
Except for the perpetual no-code and DSL memes, whether git would dominate as much as it did was pretty much up in the air in 2012. Same with functional languages. In 2012 the mainstream was what, Java 1.6? ML/Haskell was like a breath of fresh air; it's hard to overstate how much pure OOP a la early Java sucks. The fact that some functional features would become mainstream wasn't a given back then; if anything, that is the surprising turn of events.

What will programming look like in 2020? - https://news.ycombinator.com/item?id=4962694 - Dec 2012 (3 comments)

Ask HN: What will programming look like in 2020? - https://news.ycombinator.com/item?id=4931774 - Dec 2012 (12 comments)
The App Servers prediction is actually a lot more accurate than I first thought:
> You'll abstract most applications with a DSL, structured of applets or formlets operating on an abstract client API. The same app can then be 'compiled' and served to multiple mediums - as a web-page, an android app, a Chrome app, or even a desktop app.
This is effectively React with Electron, React Native, etc. Of course, the author was overly optimistic about how polished and effective cross-platform apps would be, but it's still the same idea. React is a UI DSL with a reactive runtime that can run on many different platforms.
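A minimal sketch of what that looks like in practice (my illustration, web flavour, plain `createElement` to keep it self-contained; with React Native the same component model renders to native widgets instead of the DOM):

```typescript
import React, { useState } from "react";
import { createRoot } from "react-dom/client";

// UI described declaratively as a function of state; the reactive
// runtime re-renders the description whenever the state changes.
function Counter() {
  const [count, setCount] = useState(0);
  return React.createElement(
    "button",
    { onClick: () => setCount(count + 1) },
    `Clicked ${count} times`
  );
}

// Only this mounting step is web-specific: react-dom interprets the
// description as DOM nodes; React Native would interpret the same
// component tree as native views.
createRoot(document.getElementById("root")!).render(
  React.createElement(Counter)
);
```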
> At a guess, people will use something with:
> - Strong tooling and libraries
> - An accessible type system
> - Deterministic memory behaviour
> - By-default strict evaluation
> - Commercial backing
Every mainstream functional language is lacking in at least one of these areas.
Rust and Zig have this, right?

This user casually predicted Rust.

Rust’s type system is very close to Haskell’s and takes a lot of inspiration from functional programming.
I’m not sure I’d call Rust mutable-first either. As for side-effects, I suspect the next leap in PL development will be an efficient algebraic-effects system with a good dev experience (or at least a trade-off so appealing it overcomes the necessary friction).
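For anyone wondering what that might look like, here's a loose TypeScript sketch of the effects idea, abusing generators so the computation only *describes* its effects while a separate handler decides how to run them (entirely my own illustration; real algebraic effects are more expressive, e.g. around resumption):

```typescript
// An effect "request": a description of something the program wants
// done, not the doing of it.
type Effect =
  | { kind: "log"; message: string }
  | { kind: "random" };

// The program yields effect requests and receives results back,
// without knowing how either effect is implemented.
function* program(): Generator<Effect, number, number> {
  yield { kind: "log", message: "rolling..." };
  const roll = yield { kind: "random" };
  return roll * 6;
}

// A handler gives each effect its meaning. Swapping in a different
// handler (say, a seeded RNG and a silent logger for tests) changes
// behaviour without touching program().
function run(gen: Generator<Effect, number, number>): number {
  let result = gen.next();
  while (!result.done) {
    const eff = result.value;
    if (eff.kind === "log") {
      console.log(eff.message);
      result = gen.next(0); // log produces no interesting value
    } else {
      result = gen.next(Math.random());
    }
  }
  return result.value;
}

console.log(run(program())); // a random number in [0, 6)
```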
* Long compile times
* Too much micro libraries

> An accessible type system

As a sibling comment mentioned, TypeScript covers this more appropriately.

> - Deterministic memory behaviour
> - By-default strict evaluation

For most practical purposes, Golang and Java cover this. Rust / C++ is great for systems stuff; I am not going to use it for application-layer stuff, with all the complexities that come with it.

> Commercial backing

Not much, really.
I find it pretty hilarious that "too much micro libraries" is a criticism, but then in the next sentence you recommend TypeScript, which runs on Node.js, the progenitor of micro-libraries.
We'll also ignore every other bullet point containing fundamentally incorrect information.
https://web.archive.org/web/20210925211554/http://lambda-the...

P.S. These are predictions made in 2012 of what 2020 was going to be like.

Seriously, someone goes to the trouble of finding the archive link because the server is having trouble (note that someone else had mentioned it too, not just me), and it gets modded down. Why? What possible justification is there for that?
Then even mentioning the ridiculous modding down gets modded down further. Why? (and no, "because those are the rules" isn't a reason... did it ever occur to you that sometimes rules are wrong?).