Languages
* C should have had standard types comparable to Rust's Vec and String. Pascal did. Null-terminated strings were a terrible idea, and are still a curse. Plus a sane set of string functions, not "strcat" and its posse. Didn't have to be part of a general object system. Should have happened early.
* Slices. Easy to implement in a compiler, eliminates most of the need for pointer arithmetic. Again, should have happened early.
Those two alone would have eliminated tens of thousands of pointer bugs and decades of security holes.
Operating systems
* UNIX should have had better interprocess communication. It took decades to get that in, and it's still not that good in Linux. QNX had that figured out in the 1980s. There were some distributed UNIX variants that had it. System V had something.
Networking
* Alongside TCP, there should have been a reliable message-oriented protocol. Send a message, of any length, it gets delivered reliably. Not a stream, so you don't have issues around "is there more coming?". There are RFCs for such protocols, but they never went anywhere.
Databases
* Barely existed. This resulted in many hokey workarounds.
With all of those, we could have had CRUD apps by the late 1980s. That covers a whole range of use cases.
There were client/server systems, but each had its own system of client/server interconnection. With all of the above, it would have been much easier to write basic server applications.
> * Alongside TCP, there should have been a reliable message-oriented protocol. Send a message, of any length, it gets delivered reliably. Not a stream, so you don't have issues around "is there more coming?". There are RFCs for such protocols, but they never went anywhere.
HTTP works well enough for this for basic use cases, assuming Content-Length is set. That's probably one of the reasons it's become the default protocol.
C was very popular throughout the 1980s, and was the first choice for systems programming.
C was created in 1972 by Dennis Ritchie. In 1973 he used it to rewrite the entire Unix kernel. The popularity of C and Unix grew alongside each other.
The preeminent C book by Kernighan & Ritchie was published in 1978, with the 2nd edition coming in 1988 as ANSI C was being standardized. The book popularized "Hello, world!". So even that was very influential.
Great article for further reading: https://www.jakeo.com/words/clanguage.php
No, I don't think so. Today's software engineering is optimizing for minimizing dev time, and this comes at the expense of using more resources than are necessary to accomplish our tasks. Yesterday's software engineering was optimizing for minimizing resource use rather than dev time. That mindset makes a tremendous difference in how you approach the engineering.
Software engineering is really distinct from computer science. The practices that are prevalent in industry differ from the algorithms discovered in academia (or in industry).
I think it is not 100% obvious whether the question is actually about CS or coding. They say CS, but then ask about built-in languages. Of course, CS advances must have impacted language design, but a lot of it is just improving conventions, right?
It's very similar to how mechanics now replace whole parts rather than machine or rebuild them. The cost now is man-hours, not the components themselves.
Agreed: most of the computers from earlier eras did not have the speed, memory, or disk space to even remotely support most of today's CS practices. The raw speed, multiple cores, and gargantuan memories of today's CPUs make things like AI feasible. Many of the ideas we use today existed in the late '50s and '60s. They simply were not practical to implement.
I would agree, I would also add that generally programmers today are not used to dealing with the memory limits of '80s computers. What is now considered an embedded device, back then was a desktop. My smartwatch likely has more memory than a PC Jr.
No idea what smartwatch you have, but the latest (Series 9) Apple Watch has (up to) 2 GB of RAM and 5 Tflops of computing power, while the PCjr had 256 kilobytes of RAM and 0.33 megaflops.
nodejs' main binary on my laptop is 44 MiB.
The Apollo Guidance Computer (AGC) we put a man on the Moon with could do 4 kilo-instructions per second and 0.43 megaflops, and had 2 KB of RAM. The Apple Watch is roughly 900x as fast as the AGC while being 0.67x as large and less than 1% of the weight.
Vs the PC Jr: it's almost 2000x the size of an Apple Watch and like 150x heavier.
Computer Science hasn't advanced all that much. Some algorithms have been improved but it is a very slow process, as one would expect.
When it comes to 'software engineering', such as it is, we have regressed. We are incredibly wasteful, in the name of productivity. Some of that trade-off makes sense, but a lot doesn't, and there are diminishing returns. The microservices trend, which has spread like wildfire, is a good example. It makes sense in some cases, but the additional resource consumption is astronomical, both in computer resources and in humans, which is something it's supposed to help with. But the culture has moved to the extent that, if you say a particular system is better served by a monolith, you are seen as an alien in the best case, and an unskilled hack in the worst.
Sure, your team can package a microservice in a container, using any language, expose an API, ship it to Kubernetes and publish your API spec without having to talk to anyone. Now you have a whole other team managing your K8s and AWS environments, a whole bunch of people managing the different database systems that are isolated per service, yet another dealing with whatever messaging mechanism you probably added once direct API calls became an issue, and a full team of SecOps people deploying the latest and greatest tools from CNCF to fix the problems the architecture created. A lot of that would, in other times, have been solved with a function call. Worse yet, all the runtime environments are duplicated, and even the simplest service now demands multiple gigabytes to run. Add HA requirements and you have an enormous overhead, to the point that deploying multiple copies of the monolith would have been far simpler – and much cheaper.
In the 80s, we had Lisp Machines. Nothing has come close ever since. Maybe if we had stuck with S-Expressions, we wouldn't have to reinvent markup languages every 5 years (XML, JSON, YAML, etc). Even without them, the Lisp condition system is amazing.
This is probably referencing the same idea that seeded https://cstheory.stackexchange.com/questions/12905/speedup-f.... I remember the claim, as well. That stack exchange indicates that it was never actually quantified. My gut would be that some of the algorithm advances require far more space that was available on early machines. Still a neat idea to look into.
How many of the new games (that looked better) had better circuitry on the game cartridge? I didn’t have a Sega (NES!), but cartridge systems could potentially have a lot more options to work with than the base system. For example, the Pokémon cartridges for the Game Boy had their own battery-powered memory (IIRC).
Not to take away from your argument - the more you know a system, the better it can be exploited.
However, in those days, it wasn’t always as simple as it seemed. This was especially true when you were (effectively) plugging expansion boards in with every game.
Other than Virtua Racing, I don't think any Sega cartridges of that era had extra processing circuitry.
There were some with extra control-pad ports, and I think one with a modem. Of course, you could count the 32X & MegaCD as a large cartridge - but that's stretching it!
Sega did not ship with a programming language. Hardware was closed so it took a while to find and explore its limits. Nothing to do with advances in CS, but a lot to do with devs' curiosity.
Right, by far, most of the tricks learned that improved game quality later in the life of a given console weren't based on new comp sci, but better understanding of the systems limitations (e.g. get more colors on screen by rewriting the color palettes between horizontal scan lines on the CRT) and improvements in tooling. I would say a rare exception to this is once video encoding/compression techniques came into play, in the late 90s/00s era where full motion video in games was cutting edge.
Learning techniques to maximise the efficiency of games consoles isn’t CS in the traditional sense. But I can see where you’re coming from.
You also have to bear in mind that cartridge-based systems could ship with larger ROMs, and sometimes even additional chipsets, in the later years of a console’s life. So even if you knew all the hacks for a specific console from day one, you still might not have had a large enough ROM to take advantage of some of them.
Lisp Machines had high-resolution displays, loads of disk space, a language that was mostly type-safe, a built-in editor with capabilities that modern IDEs still often lack, email, etc ...
I started programming in 1984. I have no doubt that if those computers were suddenly all we had, we would leverage them in surprisingly interesting ways.
A classic example is https://www.vogons.org/viewtopic.php?t=89435, which shows what can be done “today” with old hardware, which was thought impossible for all the years before.
There have undoubtedly been advances in algorithms that are hard to really communicate to people. However, I think it is also arguable that the advance in computing resources has enabled some of the algorithm advances that people would use in these discussions.
That said, the discussion on "better built-in languages" feels like it is aiming at something I don't understand. Older computers booting into a programming environment was something that worked really well for what it was aiming to do. You could argue that BASIC was a terrible language, but there were other options.
I think the current raspberry pi setup is quite a good example for what you are speaking to? Mathematica is preloaded with a free license and is near magical compared to the programs most of us are writing.
Guess it depends what you mean by better. Also, in today’s context we have exponentially more complexity that’s abstracted away: OS, drivers, userland software, heck, even browsers alone.
Given this, I do think we have a lot of glue and duct tape fastening our digital world together.
But some of the low-level stuff done today is probably just as good as what would have been done in the ’80s, if not better, given the now easier access to information and the higher bar of complexity to even participate in low-level development.
I imagine a dev from the 80’s (like my professor from college) may be dismally annoyed at the amount of abstractions some languages have.
Spring boot for Java? So much ‘magic’. He’d say “just write it in PHP” lol
I think both phrases are floating signifiers in practice.
In 2013 I bought Galaxy Gear which was one of the first smartwatches and even that one had 512 MB RAM. PC Jr. had a 64 KB RAM base in comparison.
And that is considered a very affordable chip.
Developing a distributed system when you don't need it is a recipe for disaster.
I wrote about it at https://shkspr.mobi/blog/2020/11/what-would-happen-if-comput...
Look at early games on the Sega Megadrive / Genesis compared to the ones released towards the end of its run.
As we learn more about computer science, we can push systems further than their creators envisaged.
Here’s an example from the NES side: https://en.wikipedia.org/wiki/Memory_management_controller_(...
The reason Super Mario 3 looked so much better than the original has a lot to do with the evolution of these chips (on the cartridge).
Here is a pretty brochure for the LM-2 (a relabelled MIT CADR): http://www.bitsavers.org/pdf/symbolics/brochures/LM-2.pdf
But would you have paid 80k USD for one (http://www.bitsavers.org/pdf/symbolics/LM-2/LM-2_Price_List_...) -- and that is without monitor, disk, ...?
It was also a language that was discoverable to a curious 10-year-old.
So for the purpose of making the machine accessible to those who didn't know how to program, BASIC was actually pretty good.