Alan Kay, my favorite curmudgeon, spent decades trying to remind us we keep reinventing concepts that were worked out in the late 70s and he’s disappointed we’ve been running in circles ever since. He’s still disappointed because very few programmers are ever introduced to the history of computer science in the way that artists study the history of art or philosophers the history of philosophy.
When I was at RIT (2006ish?) there was an elective History of Computing course that started with the abacus and worked up to mainframes and networking. I think the professor retired years ago, but the course notes are still online: https://www.cs.rit.edu/~swm/history/index.html
To strengthen the GP's point a bit: there are entire courses on conceptual art (1966-72) alone, or on minimal art alone. One "History of Computing" course, while appreciated, does not do the history justice.
Hello fellow RIT alum! I don't think I knew about this class when I went there, though I started as a Computer Engineering student (eventually switched to Computing Security).
The effective history of computing spans a lifetime or three.
There's no sense comparing the two. In the year 2500 it might make sense to be disappointed that people don't compare current computational practices with things done in 2100 or even 1970, but right now, to call what we have "history" does a disservice to the broad meaning of that term.
Another issue: art and philosophy have very limited or zero dependence on a material substrate. Computation has overwhelming dependence on the performance of its physical substrate (by various metrics, including but not limited to: CPU speed, memory size, persistent storage size, persistent storage speed, network bandwidth, network scope, display size, display resolution, input device characteristics, sensory modalities accessible via digital-to-analog conversion, ...). To assert that the way problems were solved in 1970 obviously has dramatic lessons for how to solve them in 2025 seems to me to completely miss what we're actually doing with computers.
If Alan Kay doesn't respond directly to this comment, what is Hacker News even for? :)
You're not wrong about history, but that only strengthens Kay's case. E.g., our gazillion-times better physical substrate should have led an array of hotshot devs to write web apps that run circles around GraIL[1] by 2025. (Note the modeless GUI interaction.) Well, guess what? Such a thing definitely doesn't exist. And that can only point to programmers having a general lack of knowledge about their incredibly short history.
(In reality my hope is some slice of devs have achieved this and I've summoned links to their projects by claiming the opposite on the internet.)
Edit: just so I get the right incantation for summoning links-- I'm talking about the whole enchilada of a visual language that runs and rebuilds the user's flowchart program as the user modelessly edits it.
1: https://www.youtube.com/watch?v=QQhVQ1UG6aM
> Another issue: art and philosophy have very limited or zero dependence on a material substrate. Computation has overwhelming dependence on the performance of its physical substrate
That's absolutely false. Do you know why MCM furniture is characterized by bent plywood? It's because we developed the glues that enabled this during World War II. In fashion you had a lot more colors beginning in the mid-1800s because of the development of synthetic dyes. It's no coincidence that oil paints were perfected around Holland (a major place for flax, and thus linseed oil), and oil painting is exactly what the Dutch masters _did_. Architectural McMansions began because of the development of pre-fab roof trusses in the 70s and 80s.
How about philosophy? Well, the Industrial Revolution and its consequences have been a disaster for the human race. I could go on.
The issue is that engineers think they're smart and can design things from first principles. The problem is that they're really not that smart, and they go ahead and design things from first principles anyway.
> To assert that the way problems were solved in 1970 obviously has dramatic lessons for how to solve them in 2025 seems to me to completely miss what we're actually doing with computers.
True, they might not all be "dramatic lessons" for us, but to ignore them and assume that they hold no lessons for us is also a tragic waste of resources and hard-won knowledge.
Your first sentences already suggest one comparison between the histories of computing and philosophy: the history of computing ought to be much easier. Most of it is still in living memory. Yet somehow the philosophy people manage it, while we computing people rarely bother.
> Art and philosophy have very limited or zero dependence on a material substrate
Not true for either. For centuries it was very expensive to paint with blue due to the cost of blue pigments (which were essentially crushed gemstones).
Philosophy has advanced considerably since the time of Plato and much of what it studies today is dependent on science and technology. Good luck studying philosophy of quantum mechanics back in the Greek city state era!
Just because a period of history is short doesn't make it _not history_.
Studying history is not just, or even often, a way to rediscover old ways of doing things.
Learning about the people, places, decisions, discussions, and other related context is of intrinsic value.
Also, what does "material substrate" have to do with history? It sounds here like you're using it literally, in which case you're thinking like an engineer and not like a historian. If you're using it metaphorically, well, art and philosophy are absolutely built on layers of what came before.
The rate of change in computer technology has been orders of magnitude faster than most other technologies.
Consider transport. Millennia ago, before the domestication of the horse, the fastest a human could travel was by running. That's a peak of about 45 km/h, but around 20 km/h sustained over a long distance for the fastest modern humans; it was probably a bit less then. Now that's about 900 km/h for commercial airplanes (45x faster) or 3500 km/h for the fastest military aircraft ever put in service (178x faster). Space travel is faster still, but so rarely used for practical transport I think we can ignore it here.
My current laptop, made in 2022, is thousands of times faster than my first laptop, made in 1992. It has about 8000 times as much memory. Its network bandwidth is over 4000 times as much. There are few fields where the magnitude of human technology has shifted by such large amounts in any amount of time, much less a fraction of a human lifespan.
That gives even more reason to study the history of CS. Even artists study contemporary art from the last few decades.
Given the pace of CS (like you mentioned), 50 years might as well be centuries, and so early computing devices and solutions are worth studying to understand how the technology has evolved, what lessons we can learn, and what we can discard.
> Computation has overwhelming dependence on the performance of its physical substrate (by various metrics, including but not limited to: cpu speed, memory size, persistent storage size, persistent storage speed, network bandwidth, network scope, display size, display resolution
This was clearly true in 01970, but it's mostly false today.
It's still true today for LLMs and, say, photorealistic VR. But what I'm doing right now is typing ASCII text into an HTML form that I will then submit, adding my comment to a persistent database where you and others can read it later. The main differences between this and a guestbook CGI 30 years ago or maybe even a dialup BBS 40 years ago have very little to do with the performance of the physical substrate. It has more in common with the People's Computer Company's Community Memory 55 years ago (?) using teletypes and an SDS 940 than with LLMs and GPU raytracing.
Sometime around 01990 the crucial limiting factor in computer usefulness went from being the performance of the physical substrate to being the programmer's imagination. This happened earlier for some applications than for others; livestreaming videogames probably requires a computer from 02010 or later, or special-purpose hardware to handle the video data.
Screensavers and demoscene prods used to be attempts to push the limits of what that physical substrate could do. When I saw Future Crew's "Unreal", on a 50MHz(?) 80486, around 01993, I had never seen a computer display anything like that before. I couldn't believe it was even possible. XScreensaver contains a museum of screensavers from this period, which displayed things normally beyond the computer's ability. But, in 01998, my office computer was a dual-processor 200MHz Pentium Pro, and it had a screensaver that displayed fullscreen high-resolution clips from a Star Trek movie.
From then on, a computer screen could display literally anything the human eye could see, as long as it was prerendered. The dependence on the physical substrate had been severed. As Zombocom says, the only limit was your imagination. The demoscene retreated into retrocomputing and sizecoding compos, replaced by Shockwave, Flash, and HTML, which freed nontechnical users to materialize their imaginings.
The same thing had happened with still 2-D monochrome graphics in the 01980s; that was the desktop publishing revolution. Before that, you had to learn to program to make graphics on a computer, and the graphics were strongly constrained by the physical substrate. But once the physical substrate was good enough, further improvements didn't open up any new possible expressions. You can print the same things on a LaserWriter from 01985 that you can print on the latest black-and-white laser printer. The dependence on the physical substrate has been severed.
For things you can do with ASCII text without an LLM, the cut happened even earlier. That's why we still format our mail with RFC-822, our equations with TeX, and in some cases our code with Emacs, all of whose original physical substrate was a PDP-10.
Most things people do with computers today, and in particular the most important things, are things people (though fewer of them) have been doing with computers in nearly the same way for 30 years, when the physical substrate was very different: 300 times slower, 300 times smaller, a much smaller network.
Except, maybe, mass emotional manipulation, doomscrolling, LLMs, mass surveillance, and streaming video.
A different reason to study the history of computing, though, is the sense in which your claim is true.
Perceptrons were investigated in the 01950s and largely abandoned after Minsky & Papert's book, and experienced some revival as "neural networks" in the 80s. In the 90s the US Postal Service deployed them to recognize handwritten addresses on snailmail envelopes. (A friend of mine who worked on the project told me that they discovered by serendipity that decreasing the learning rate over time was critical.) Dr. Dobb's hosted a programming contest for handwriting recognition; one entry used a neural network, but was disqualified for running too slowly, though it did best on the test data they had the patience to run it on. But in the early 21st century connectionist theories of AI were far outside the mainstream; they were only a matter of the history of computation. Although a friend of mine in 02005 or so explained to me how ConvNets worked and that they were the state-of-the-art OCR algorithm at the time.
Then ImageNet changed everything, and now we're writing production code with agentic LLMs.
Many things that people have tried before that didn't work at the time, limited by the physical substrate, might work now.
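To make the learning-rate anecdote above concrete, here is a minimal sketch of a perceptron trained with a decaying learning rate, in Python/NumPy. It's my own toy illustration, not the USPS system; the data, sizes, and schedule are all made up.

    # Toy perceptron with a decaying learning rate -- an illustration of the
    # trick mentioned above, not the USPS system. Assumes NumPy is available.
    import numpy as np

    rng = np.random.default_rng(0)

    # Linearly separable toy data: label = sign of (x1 + x2)
    X = rng.normal(size=(200, 2))
    y = np.where(X.sum(axis=1) > 0, 1, -1)

    w = np.zeros(2)
    b = 0.0
    eta0 = 1.0  # initial learning rate

    for epoch in range(50):
        eta = eta0 / (1 + epoch)  # decrease the learning rate over time
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:  # misclassified: apply the update
                w += eta * yi * xi
                b += eta * yi

    accuracy = np.mean(np.sign(X @ w + b) == y)
    print(f"training accuracy: {accuracy:.2%}")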
You do know people have imagination, and folks back in 1970 had already imagined pretty much everything we use now, and even posed problems that are not going to be solved by our computing power.
Dude, watch the original Star Trek from the 1960s; you will be surprised.
You might also be surprised that all the AI stuff that is so hyped nowadays was already invented in the 1960s; they just didn't have our hardware to run large models. Read up on neural networks.
I recall seeing a project on github with a comment:
Q: "Sooo... what does this do that Ansible doesn't?"
A: "I've never heard of Ansible until now."
Lots of people think they are the first to come across some concept or need. Like every generation when they listen to songs with references to drugs and sex.
I think software engineering has social problems to a degree that other fields just don't. Dogmatism, superstition, toxicity... you name it.
I can't concur enough. We don't teach "how to design computers and better methods to interface with them"; we keep hashing over the same stuff over and over again. It gets worse over time, and the effect is that what Engelbart called "intelligence augmenters" become "super televisions that cause you political and social angst."
How far we have fallen, but so great the reward if we could "lift ourselves up again." I have hope in people like Bret Victor and Brenda Laurel.
Hey - tried to reply to this in the earlier thread on Mansfield, but evidently there's a timeout on reply-tos. Anyway, appreciated your response (my not responding is often just me contemplating... :) ). Grokipedia found this on Mansfield, which I've found adds additional context about Mansfield's motivations: https://doi.org/10.1126/science.169.3943.356 (can read it in Sci-Hub). The Vietnam war thematic background was helpful - Google Gemini agreed with you, and the Grokipedia article seems to add support to it as well (though not directly tying it to Sen. Mansfield's motivations).
It's funny, my impression had been that Mansfield was legislating from a heavily right-wing standpoint, of "Why are we tax-and-spending into something that won't help protect the sanctity of my private property?" I was pleased to see the motivations were different. The fact that so few have articulated why they invoke Mansfield underscores why I was able to adopt such an orthogonal interpretation of Mansfield's amendment prior to this discussion. All the best, AfterHIA. -ricksunny
I reflect on university, and one of the most interesting projects I did was an 'essay on the history of <operating system of your choice>' as part of an OS course. I chose OS X (Snow Leopard) and digging into the history gave me fantastic insights into software development, Unix, and software commercialisation. Echo your Mr Kay's sentiments entirely.
Sadly this naturally happens in any field that ends up expanding due to its success. Suddenly the number of new practitioners outnumbers the number of competent educators. I think it is a fundamental human resources problem with no easy fix. Maybe LLMs will help with this, but they seem to reinforce convergence to the mean in many cases, as those being educated are not in a position to ask the deeper questions.
> Sadly this naturally happens in any field that ends up expanding due to its success. Suddenly the number of new practitioners outnumbers the number of competent educators. I think it is a fundamental human resources problem with no easy fix.
In my observation the problem rather is that many of the people who want to "learn" computer science actually just want to get a certification to get a cushy job at some MAGNA company, and then they complain about the "academic ivory tower" stuff that they learned at the university.
So, the big problem is not the lack of competent educators, but practitioners actively sabotaging the teaching of topics that they don't consider to be relevant for the job at a MAGNA company. The same holds for the bigwigs at such companies.
I sometimes even entertain the conspiracy theory that if a lot of graduates saw that what their work at these MAGNA companies involves is, judging from the history of computer science, often decades old and has been repeated multiple times over the decades, this might demotivate employees who are supposed to believe that they work on the "most important, soon to be world changing" thing.
When I was a graduate student at UCLA, I signed up for a CS course that turned out to secretly be "the Alan Kay show." He showed up every week and lectured about computer science history. Didn't learn much about programming language design that semester (what the course ostensibly was) but it was one of my most formative experiences.
Your experience with bad teachers seems more like an argument in favor of better education than against it. It's possible to develop better teaching material and methods if there is focus on a subject, even if it's time consuming.
At least for history of economics, I think it's harder to really grasp modern economic thinking without considering the layers it's built upon, the context ideas were developed within etc...
I spent my teens explaining to my mum that main memory (which used to be 'core', she interjected) was now RAM, a record was now a row, a thin client was now a browser, PF keys were now just function keys. And then from this basis I watched Windows Forms and .NET and all the iterations of the JDK and the early churn of non-standardized JavaScript all float by, and thought, 'hmm.'
He's right. I frequently work with founders who are reinventing and taking credit for stuff because they have no idea it was already created. The history of computers + computer science is really interesting. Studying past problems and solutions isn't a waste of time.
> He’s still disappointed because very few programmers are ever introduced to the history of computer science in the way that artists study the history of art or philosophers the history of philosophy.
Software development/programming is a field where the importance of planning and design lies somewhere between ignored and outright despised. The role of software architect is both ridiculed and vilified, whereas the role of the brave solo developer is elevated to the status of hero.
What you get from that cultural mix is a community that values ad-hoc solutions made up on the spot by inexperienced programmers who managed to get something up and running, and at the same time is hostile towards those who take the time to learn from history and evaluate tradeoffs.
See for example the cliche of clueless developers attacking even the most basic aspects of software architecture such as the existence of design patterns.
With that sort of community, how does anyone expect to build respect for prior work?
Maybe history teaches us that planning and design do not work very well....
I think one of the problems is that when someone uses a word, one still does not know what they mean by it. A person can say 'design patterns' and what he is actually doing is a very good use of them that really helps to clarify the code. Another person can say 'design patterns' and is busy creating an overengineered mess that is not applicable to the actual situation where the program is supposed to work.
Maybe he managed to work them out and understand them in the '70s, if you believe him. But he has certainly never managed to convey that understanding to anyone else. Frankly I think if you fail to explain your discovery to even a fraction of the wider community, you haven't actually discovered it.
There should also be PSYC 5640: How to become a guru by reading the documentation everyone else is ignoring. Cannot be taken at the same time as PSYC 5630.
Simulate increasingly unethical product requests and deadlines. The only way to pass is to refuse and justify your refusal with professional standards.
Watch as peers increasingly overpromise and accept unethical product requests and deadlines and leave you high, mighty, and picking up the scraps as they move on to the next thing.
Conventional university degrees already contain practical examples of the principles of both these courses in the form of cheating and the social dynamics around it.
I would add debugging as a course. Maybe they do teach this somewhere, but learning how to dive deep into the root cause of defects, and the various tools for doing so, would have been enormously helpful for me. Perhaps this already exists.
Great idea. I had a chemistry lab in college where I was given a vial of a white powder on the first day of class and the course was complete when I identified what it was.
A similar course in CS would give each student a legacy codebase with a few dozen bugs and performance / scaling problems. When the code passes all unit and integration tests, the course is complete.
I feel like that's something you pick up from solving problems. You hit something, printf and check, repeat.
Repeat that for 2 years. Then later on, my Systems Programming course would give an overview of GDB, Valgrind, and tease the class with GProf. It'd even warn us about the dangers of debugging hypnosis. But that was the full extent of formal debugging I got. The rest was on the job or during projects.
Interactive debugging tends to be unhelpful on anything realtime. By the time you look at what's going on, all the timing constraints are shot and everything is broken. You may be able to see the current state of that thread at that moment, but you can't move forward from there. (Unless you freeze all the threads - but then, all the threads involved might not be in one process, or even on one machine.)
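One workaround people reach for instead is in-memory trace logging: record cheap timestamps into a ring buffer and inspect them after the run, so observing doesn't blow the timing budget. A sketch of my own below, in Python, assuming a soft 2 ms per-frame budget; none of it comes from the parent's setup.

    # Minimal sketch of in-memory tracing for timing-sensitive code: record
    # cheap events into a ring buffer and inspect them afterwards, instead of
    # stopping threads in a debugger. All names and budgets are illustrative.
    import time
    from collections import deque

    TRACE = deque(maxlen=10_000)  # ring buffer: old events fall off the end

    def trace(event, **fields):
        # time.monotonic_ns() plus a deque.append() is cheap enough that the
        # traced code's timing is barely disturbed
        TRACE.append((time.monotonic_ns(), event, fields))

    def worker(frames):
        for i in range(frames):
            trace("frame_start", i=i)
            time.sleep(0.001)          # stand-in for real per-frame work
            trace("frame_end", i=i)

    worker(100)

    # Post-mortem: look for frames that blew their (hypothetical) 2 ms budget
    starts = {f["i"]: t for t, e, f in TRACE if e == "frame_start"}
    for t, e, f in TRACE:
        if e == "frame_end":
            dt_ms = (t - starts[f["i"]]) / 1e6
            if dt_ms > 2.0:
                print(f"frame {f['i']} took {dt_ms:.2f} ms")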
CSCI 0001: Functional programming and type theory (taught in English [0])
For decades, the academia mafia, through impenetrable jargon and intimidating equations, has successfully prevented the masses from adopting this beautiful paradigm of computation. That changes now. Join us to learn why monads really are monoids in the category of endofunctors (oh my! sorry about that).
[0] Insert your favourite natural language
"CSCI 4020: Writing Fast Code in Slow Languages" does exist, at least in the book form. Teach algorithmic complexity theory in slowest possible language like VB or Ruby. Then demonstrate how O(N) in Ruby trumps O(N^2) in C++.
One of my childhood books compared bubble sort implemented in FORTRAN and running on a Cray-1 and quicksort implemented in BASIC and running on TRS-80.
The BASIC implementation started to outrun the supercomputer at some surprisingly pedestrian array sizes. I was properly impressed.
To be fair, the standard bubble sort algorithm isn't vectorized, and so can only use about 5% of the power of a Cray-1. Which is good for another factor of about 5 in the array size.
We had this as a lab in a learning systems course: converting Python loops into NumPy vector manipulation (map reduce), and then into TensorFlow operations, and measuring the speed.
It gave a good idea of how Python is even remotely useful for AI.
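Something in the spirit of that lab, as a sketch (not the actual course material): the same dot product as a pure-Python loop and as a single NumPy call.

    # Sketch: one dot product, written as a Python loop and as a NumPy call.
    import time
    import numpy as np

    n = 2_000_000
    a = np.random.rand(n)
    b = np.random.rand(n)

    t0 = time.perf_counter()
    total = 0.0
    for i in range(n):          # pure-Python loop: interpreter work per element
        total += a[i] * b[i]
    t_loop = time.perf_counter() - t0

    t0 = time.perf_counter()
    total_vec = np.dot(a, b)    # vectorized: one call into optimized C/BLAS
    t_vec = time.perf_counter() - t0

    print(f"loop: {t_loop:.3f}s  numpy: {t_vec:.4f}s  speedup: {t_loop / t_vec:.0f}x")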
We are rebuilding a core infrastructure system from unmaintained Python (it's from before our company was bought and everyone left) to Java. It's nothing interesting, standard ML infrastructure fare. A straightforward, uncareful, weekend-grade implementation in Java was over ten times faster.
The reason is very simple: Python takes longer for a few function calls than Java takes to do everything. There's nothing I can do to fix that.
I wrote a portion of code that just takes a list of 170ish simple functions and runs them, and they are such that it should be parallelizable, but I was rushing and just slapped the boring serialized version into place to get things working. I'll fix it when we need it to be faster, I thought.
The entire thing runs in a couple nanoseconds.
So much of our industry is writing godawful interpreted code and then having to do crazy engineering to get stupid interpreted languages to go a little faster.
Oh, and this was before I fixed it so the code didn't rebuild a constant regex pattern 100k times per task.
But our computers are so stupidly fast. It's so refreshing to be able to just write code and it runs as fast as computers run. The naive, trivial to read and understand code just works. I don't need a PhD to write it, understand it, or come up with it.
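For what it's worth, the regex fix mentioned above usually amounts to hoisting the compile out of the loop; a generic sketch (not their actual code):

    # Generic sketch of the regex fix mentioned above (not the actual code):
    # build the constant pattern once and reuse it, instead of per item.
    import re

    lines = [f"task-{i}: ok" for i in range(100_000)]

    def count_ok_slow(lines):
        n = 0
        for s in lines:
            # Pattern constructed inside the loop; even with re's internal
            # cache this pays lookup and call overhead on every iteration.
            if re.compile(r"task-\d+: ok").match(s):
                n += 1
        return n

    # Hoisted out: compiled exactly once
    TASK_OK = re.compile(r"task-\d+: ok")

    def count_ok_fast(lines):
        return sum(1 for s in lines if TASK_OK.match(s))

    assert count_ok_slow(lines) == count_ok_fast(lines) == 100_000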
I imagine this is a class specifically about slow languages: writing code that doesn't get garbage collected, using vectorized operations (NumPy), exploiting JIT compilation to achieve performance greater than normal C, etc.
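If "exploiting JIT" means something like Numba (my assumption, not necessarily what the parent had in mind, and it assumes Numba is installed), here's a sketch of the kind of loop where it pays off:

    # Sketch of "exploiting JIT" via Numba: a nested loop that would crawl in
    # pure Python compiles to machine code the first time it is called.
    import numpy as np
    from numba import njit

    @njit
    def pairwise_min_dist(points):
        # Brute-force closest-pair distance over an (n, d) array of points.
        n = points.shape[0]
        best = np.inf
        for i in range(n):
            for j in range(i + 1, n):
                d = 0.0
                for k in range(points.shape[1]):
                    diff = points[i, k] - points[j, k]
                    d += diff * diff
                if d < best:
                    best = d
        return best ** 0.5

    pts = np.random.rand(2000, 3)
    print(pairwise_min_dist(pts))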
Python has come a long way. It's never gonna win for something like high-frequency trading, but it will be super competitive in areas you wouldn't expect.
The Python interpreter and core library are mostly C code, right? Even a Python library can be coded in C. If you want to sort an array, for example, it will cost more in Python because it's sorting Python objects, but the sort itself is coded in C.
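A rough way to see that (a sketch; exact timings will vary): the built-in sort is C code and easily beats the same algorithm written in Python, even though both are shuffling Python objects around.

    # Rough illustration of the point above: sorted() is C code and vastly
    # outruns the same algorithm written in Python, even though both sorts
    # operate on Python objects.
    import random
    import time

    def merge_sort(xs):
        if len(xs) <= 1:
            return xs
        mid = len(xs) // 2
        left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                out.append(left[i])
                i += 1
            else:
                out.append(right[j])
                j += 1
        return out + left[i:] + right[j:]

    data = [random.random() for _ in range(200_000)]

    t0 = time.perf_counter()
    merge_sort(data)
    t_py = time.perf_counter() - t0

    t0 = time.perf_counter()
    sorted(data)
    t_c = time.perf_counter() - t0

    print(f"pure-Python merge sort: {t_py:.2f}s   built-in sorted(): {t_c:.3f}s")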
Many computer science programs today have basically turned into coding trade schools.
Students can use frameworks, but they don’t understand why languages are designed the way they are, or how systems evolved over time.
It’s important to remember that computing is also a field of ideas and thought, not just implementation.
My large state university still has the same core required classes as it did 25 years ago. I don't think CS programs can veer too far away from teaching core computer science without losing accreditation.
That argument actually strengthens the original point: Even though it's been that short, youngsters often still don't have a clue.
This seems to fundamentally underestimate the nature of most artforms.
"Computer" goes back to 1613 per https://en.wikipedia.org/wiki/Computer_(occupation)
https://en.wikipedia.org/wiki/Euclidean_algorithm was 300 BC.
https://en.wikipedia.org/wiki/Quadratic_equation has algorithms back to 2000 BC.
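Both of those still fit in a few lines today; a quick sketch in Python, standing in for Euclid's Elements and the clay tablets:

    # The two ancient algorithms linked above, as they'd be written today.
    import math

    def gcd(a, b):
        # Euclid's algorithm (~300 BC): repeatedly replace the pair with the
        # smaller number and the remainder until nothing is left over.
        while b:
            a, b = b, a % b
        return a

    def solve_quadratic(a, b, c):
        # Completing the square, known in some form since ~2000 BC:
        # real roots of a*x^2 + b*x + c = 0.
        disc = b * b - 4 * a * c
        if disc < 0:
            return ()  # no real roots
        r = math.sqrt(disc)
        return ((-b + r) / (2 * a), (-b - r) / (2 * a))

    print(gcd(252, 105))              # -> 21
    print(solve_quadratic(1, -3, 2))  # -> (2.0, 1.0)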
And yet for the most part, philosophy and the humanities' seminal development took place within about a generation, viz., Plato and Aristotle.
> Computation has overwhelming dependence on the performance of its physical substrate [...].
Computation theory does not.
Q: "Sooo... what does this do that Ansible doesn't?"
A: "I've never heard of Ansible until now."
Lots of people think they are the first to come across some concept or need. Like every generation when they listen to songs with references to drugs and sex.
I think software engineering have so many social problems to a level that other fields just don't have. Dogmatism, superstition, toxicity ... you name it.
How far we have fallen but so great the the reward if we could, "lift ourselves up again." I have hope in people like Bret Victor and Brenda Laurel.
It's funny, my impression had been that Mansfield was legislating from a heavily right-wing standpoint, of 'Why are we tax-and-spending into something that won't help protect the sanctity of my private property?" I was pleased to see the motivations were different. The fact that so few have articulated why the invoke Mansfield underscores why I was able to adopt such an orthogonal interpretation of Mansfield's amendment prior to this discussion. All the best AfterHIA. -ricksunny
In my observation the problem rather is that many of the people who want to "learn" computer science actually just want to get a certification to get a cushy job at some MAGNA company, and then they complain about the "academic ivory tower" stuff that they learned at the university.
So, the big problem is not the lack of competent educators, but practitioners actively sabotaging the teaching of topics that they don't consider to be relevant for the job at a MAGNA company. The same holds for the bigwigs at such companies.
I sometimes even see the conspiracy that if a lot of graduates saw that what their work at these MAGNA involves is from the history of computer science often decades old and has been repeated multiple times over the decades, this might demotivate the employees who are to believe that they work on the "most important, soon to be world changing" thing.
Alan Kay giving the same (unique, his own, not a bad) speech at every conference for 50 years is not Alan Kay being a curmudgeon
>we keep reinventing concepts that were worked out in the late 70s and he’s disappointed we’ve been running in circles ever since.
it's Alan Kay running in circles
I think assessing a piece of hardware/software is much more difficult and time-consuming than assessing art. So there are no people with really broad experience.
Art or philosophy might or might not make progress. No one can say for sure. They are bad role models.
As opposed to ours, where we're fond of subjective regression. ;-P
I see that more from devs in startup culture, or those shipping products at a software-only company.
It is a very different mindset when software is not the main business of a company, or in consulting.
- How to ignore the latest platform/library/thing everyone is talking about on HN
CSCI 3120: Novelty Driven Development
- How to stay interested in your job by using the latest platform/library/thing everyone is talking about on HN even if it isn't actually necessary
NB: CSCI 3120 cannot be taken at the same time as CSCI 3240
PSYC 4870: Meeting Techniques
- mentioning problems before, during or after meetings - different techniques for different manager types
- small talk conventions in different cultures
- camera on or camera off - the modern split in corporate habits
PSYC 5630: Accepting Organisational Friction
- how to motivate yourself to write documentation that no-one will ever read
- managers are people too - understanding what makes someone think they want to be a manager
- Everything Will Take A Long Time And No-one Will Remember What Got Decided - working in large organisations
- Cake and Stare - how to handle a leaving do
Do you have a moment to talk about our saviour, Lord interactive debugging?
It's old but reliable.
There are many cases where O(n^2) will beat O(n).
Utilising the hardware can make a bigger difference than algorithmic complexity in many cases.
Vectorised code on linear memory vs unvectorised code on data scattered around the heap.
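A contrived sketch of that last point in Python/NumPy, since that's what the rest of the thread uses (sizes made up; it shows the constant-factor gap from memory layout rather than a literal O(n^2)-beats-O(n) case): the same O(n) sum over one contiguous buffer versus boxed objects scattered around the heap.

    # Contrived sketch: the same O(n) sum, over contiguous memory with
    # vectorised code vs. boxed objects scattered around the heap with an
    # interpreted loop.
    import time
    import numpy as np

    n = 5_000_000
    contiguous = np.arange(n, dtype=np.float64)   # one flat buffer of doubles
    scattered = [float(i) for i in range(n)]      # n separate heap objects

    t0 = time.perf_counter()
    s1 = contiguous.sum()                         # vectorised, cache-friendly
    t_vec = time.perf_counter() - t0

    t0 = time.perf_counter()
    s2 = 0.0
    for x in scattered:                           # one boxed object at a time,
        s2 += x                                   # plus interpreter overhead
    t_ptr = time.perf_counter() - t0

    print(f"numpy over linear memory: {t_vec:.4f}s")
    print(f"loop over boxed objects:  {t_ptr:.3f}s")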
Computer science courses that don't exist, but should (2015) - https://news.ycombinator.com/item?id=16127508 - Jan 2018 (4 comments)
Computer science courses that don't exist, but should (2015) - https://news.ycombinator.com/item?id=13424320 - Jan 2017 (1 comment)
Computer science courses that don't exist, but should - https://news.ycombinator.com/item?id=10201611 - Sept 2015 (247 comments)