Reading this made me so, so sad. I do agree with the reasoning, though.
I learned to program in a course that follows SICP. I spent all my college years learning how to program from first principles, building all the pieces from scratch: compilers, soft-thread implementations, graph-parsing algorithms... and I was happy with that way of programming!
Today I'm an iOS developer. I spend most of my day 'poking' at the Cocoa Touch framework, whose source I can't even read because it's closed source. The startup I work for moves so fast that I'm forced to use other people's open source projects without even having the time to read the source.
I feel miserable doing the kind of programming I do nowadays! I wish I could go back to simpler times, when I actually had to reason about algorithms, data structures, hardware, etc...
I might be canned for this here, but I find that enterprises have a lot more work of the fundamental, simpler sort than startups do. At startups, I worked with hundreds of libraries without understanding them, or without having the time to even look back at things we did. At enterprises, there is a lot of low-hanging fruit which, when sprinkled with magic CS pixie dust, becomes more performant (to a lot of people).
This is one of the reasons why I moved away from product-side programming to IT. My brain screams in pain when I do a lot of context switches, and I figured this was not going to make me a better programmer. The fun stuff is in handling data, not in learning layers of APIs (for me).
Now, working in enterprise is not for everyone (the politics, etc.), but for me it still beats the pain of working in a smelly, loud, fast-context-switching, agile-kanban startup.
> I feel miserable doing the kind of programming I do nowadays! I wish I could go back to simpler times, where I actually had to reason about algorithms, data structures, hardware, etc...
I don't subscribe to the school of thought that values engineers lower on the stack more than those higher up, especially since there seem to be a lot more new jobs of the latter sort and we all need to make a living. Besides, there are plenty of cool problems in those spaces (algorithms, data structures, performance... it's all there).
But I think the lucky ones are the people who get to work low enough, relative to their knowledge, that it doesn't feel like they are dealing with endless abstractions and layer upon layer of magic.
> The startup I work for moves so fast that I'm forced to use other people's open source projects without even having the time to read the source.
:( It sounds to me like this might be the bigger problem. And worse, I suspect it's common.
> I don't subscribe to the school of thought that values engineers lower on the stack more than those higher up
Meaning developers who understand low-level details vs. developers who just wire up high level libraries?
I've done the full range, from entire games written in assembly language and embedded C code through high-level full-stack development with NodeJS, Python, and other languages.
The low-level coding is far more skilled work than the high-level coding. Instead of most of the work being "how do these plug in to each other?", it's: how exactly does my code function with respect to all of the potential events and additional threads, how do all of the edge cases interact, and what is causing this obscure bug?
While that may not seem intrinsically harder, none of these are typically something you can Google or search StackOverflow for. So you're really on your own. And developers who have grown up on "Google all the answers!" often hit a wall when they need to apply real problem-solving skills.
Luckily I can find enjoyment at many levels, since a lot of the jobs I've found recently have been more in the "full stack" or "mobile development" category. It's easy and fun work.
I also have little problem piercing the magic and understanding how things fit together, but that means I end up with opinions on many topics that diverge from the crowd's. For instance, I avoid writing Android/Java or iOS/Swift, and instead use cross-platform development stacks exclusively. Yes, it means an entire extra layer of complexity, but it also means I write the app once for all target platforms. Far too big of a win to ignore.
I don't think he's necessarily implying anything about being high or low in the chain, but rather something about beauty and understanding.
SICP is one example, but I bet it's even possible to be happy coding GUIs or any other rather mundane things if one does it with an elegant toolset. For example, Oz comes to mind.
Some modern tools are not pleasing to use, often because architectures are a mess. Things move forward too quickly.
I was trained on Scheme 20 years ago. Scheme was the moment when I felt I finally "got" programming (before that I did lots of Pascal, BASIC, assembler, etc.).
Now I do Python. And with Python I "got" something too: writing tools, coding, etc. is not like mathematics (as with Scheme) anymore. It became a social activity. I'm in the field of ERP right now (arguably not the most theory-oriented stuff), because I spend most of my time using APIs written by others and making APIs in the hope that they will be used by others (none of those APIs being worth much in terms of computer "science", i.e. stuff like algorithms). I'm also building tools to augment other people's productivity, which is also quite social.
So from abstract Scheme programming, I've moved to social Python programming. That keeps me happy (though I must confess that studying Voronoi diagrams or 3D structure packing remains what really makes me tick :-) )
And to make this really annoying, you still need to know low-level algorithms and data structures to make it through most interview processes. What they should really do is give you a crappy API doc and have you make it do something useful.
Please take advantage of your job mobility. Changing jobs in our line of work isn't as simple as flipping a switch, but you are still in so much demand. There's no need for you to do engineering work you're dissatisfied with.
> I feel miserable doing the kind of programming I do nowadays! I wish I could go back to simpler times, where I actually had to reason about algorithms, data structures, hardware, etc.
Join the TXR project:
http://www.nongnu.org/txr
http://www.kylheku.com/cgit/txr/tree/
I've made some 3000 commits all by myself since 2009. The main reason this project exists is the above: it's a place where I can stuff ideas, algorithms, whatever, into a coherent whole that has regular releases and is well-documented externally. And this coherent whole is useful to me in many ways.
In a startup context, the trick is to make sure the technical debt comes due after the point at which you find out whether you've got a product people actually want in the first place.
This. Moving fast and sprinting only looks like you are making progress. At some point the debt becomes too much.
Thanks for this comment. Actually, I have considered embedded a lot. I love C, especially freestanding C.
I will probably move to embedded soon. The problem is that I have a lot of experience with iOS but not much with embedded. I'm willing to take a pay cut just to get out of iOS, but there's a limit to how low I can go.
There are also a lot more iOS jobs than embedded jobs!
Edit: Other than C, I'm very curious about Rust and Nim. I don't know C++, though. Hope that won't be a problem.
Serious question: What is stopping you from going back?
I'm naive but curious, and I don't want to make guesses.
I wonder how we are going to preserve knowledge of programming from first principles if, under pressure from corporations and lazy peers, no one does it anymore.
It's mostly that I got 'stuck' with iOS. Once you specialise, it's hard to get out. If I change jobs now for anything other than iOS, I have to take a pay cut, and I need to find someone who will hire me without any previous experience in... embedded, for example.
There are also a lot more of these 'poking' jobs compared to first-principles coding jobs, especially for someone like me who has never worked for huge corporations.
I think nowadays the most important business in a company is building things atop fundamental things. For example, we build apps atop Cocoa and the web atop RoR.
That is the business that stops us from going back.
Sure. That makes it easy to reason about the code, see the abstractions clearly (at least as clearly as the author intended), make minor changes and submit patches upstream, recompile it with different options, ...
And why exactly are you wasting your skills on something as pitiful as this? There is a lot of work out there for people who can build complex solutions from first principles.
I'm surprised and a bit dismayed to read Sussman's reasoning:
"...Sussman said that in the 80s and 90s, engineers built complex systems by combining simple and well-understood parts. The goal of SICP was to provide the abstraction language for reasoning about such systems.
Today, this is no longer the case. Sussman pointed out that engineers now routinely write code for complicated hardware that they don't fully understand (and often can't understand because of trade secrecy). The same is true at the software level, since programming environments consist of gigantic libraries with enormous functionality. According to Sussman, his students spend most of their time reading manuals for these libraries to figure out how to stitch them together to get a job done. He said that programming today is 'more like science. You grab this piece of library and you poke at it. You write programs that poke it and see what it does. And you say, "Can I tweak it to do the thing I want?"' ..."
Surely graduates of MIT, of all places, would be the ones building the libraries that ordinary programmers are stitching together and would still need the ability to build complex systems from fundamental CS abstractions? I'm absolutely baffled.
>Surely graduates of MIT, of all places, would be the ones building the libraries that ordinary programmers are stitching together and would still need the ability to build complex systems from fundamental CS abstractions? What's going on over there?
Ordinary programmers everywhere are building those libraries, just as your assumed wunderkinder are building programs out of other people's libraries. The nature of programming has changed for >everyone<.
Yes, but the reason it has changed for everyone is that no one understands the fundamentals any more, because they aren't being taught to anyone. The ecosystem is thus becoming infested by horrible hacks which kinda-sorta work, and which everyone uses, because they kinda sorta work, and there is nothing else. The idea that programmers need to understand how to "program by poking" thus becomes a self-fulfilling prophecy.
[UPDATE:] One of the symptoms of no one understanding the fundamentals is how excited people get about things like XML and JSON, both of which are just (bad) re-inventions of S-expressions.
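To make the comparison concrete, here's an illustrative sketch (my own toy code, not from any library or from SICP): the same record written as JSON and in one possible S-expression encoding, along with a tiny reader that parses the latter into nested lists.

```python
import json

# The same record in JSON...
json_text = '{"name": "SICP", "authors": ["Abelson", "Sussman"]}'

# ...and in one possible S-expression encoding.
sexp_text = '(dict (name "SICP") (authors ("Abelson" "Sussman")))'

def read_sexp(text):
    """Parse a simple S-expression (no spaces inside strings) into nested lists."""
    # Pad parens with spaces so a plain split() tokenizes the input.
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()

    def parse(pos):
        if tokens[pos] == "(":
            items, pos = [], pos + 1
            while tokens[pos] != ")":
                item, pos = parse(pos)
                items.append(item)
            return items, pos + 1
        return tokens[pos].strip('"'), pos + 1

    tree, _ = parse(0)
    return tree

# Both notations parse into plain nested data structures:
# read_sexp(sexp_text) -> ['dict', ['name', 'SICP'], ['authors', ['Abelson', 'Sussman']]]
# json.loads(json_text) -> {'name': 'SICP', 'authors': ['Abelson', 'Sussman']}
```

The point is that the S-expression carries the same tree of atoms and lists that the JSON does, in a syntax Lisp had decades earlier.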
A few years ago, when MIT switched from SICP/Scheme to Python, Sussman had this to say:
"I asked him whether he thought that the shift in the nature of a typical programmer’s world minimizes the relevancy of the themes and principles embodied in scheme. His response was an emphatic ‘no’; in the general case, those core ideas and principles that scheme and SICP have helped to spread for so many years are just as important as they ever were"
In the context of the class (The Structure and Interpretation of Computer Programs), I feel like this is not such an outlandish view. It sounds to me like the field of software engineering has simply evolved since the 80s.
Don't get me wrong: if you are going on to post-graduate studies, such a course will always be relevant, but it sounds like he is talking in the context of undergraduates. And in that context, I too would be circumspect about how useful it would be in preparing you for your first job as a Software Engineer.
Their choice to go toward a Python-based course at the undergraduate level would also seem to reaffirm this view from afar...
> It sounds to me like the field of software engineering has simply evolved since the 80s.
What is ridiculous in the face of this "programming by experimentation" fantasy is that programming has evolved since the 1980s... to be even more about composable abstractions with provable semantics. Hindley-Milner-Damas types and monads are now everywhere.
MIT grad here: Nope. I've never had a workday that didn't involve reading through the docs or source of a library I didn't write. I am genuinely grateful to have taken 6.01, the course that replaced 6.001.
I am also grateful to have taken the condensed version of 6.001. You do need the ability to understand those abstractions in order to be an informed shopper though.
>Surely graduates of MIT, of all places, would be the ones building the libraries that ordinary programmers are stitching together and would still need the ability to build complex systems from fundamental CS abstractions?
"MIT" doesn't necessarily mean "super-elite programmer". I work in an office that's half MIT grads, and the non-MIT half are pretty much equivalently good (though with much worse connections around Cambridge). That's not to say MIT sucks or anything, but more to say that with luck, a really solid CS or EECS degree gets the student up to being able to build important components from scratch at all, which isn't necessarily the level needed to build those components from scratch for public release or for profit. That latter goal requires a good education followed by professional training and experience.
Calling it his "reasoning" with all the connotations that come with that word goes way too far.
This was his polite, implicit criticism of the new core, which among other things teaches much less in the way of EE fundamentals (e.g. 6.002 is no longer a required course for all EECS majors), a topic he's cared about very much since at least the late '70s.
The bottom line is that in the post-dot-com-crash panic, which saw MIT EECS enrollment drop by more than half after it had been steady at 40% of the undergraduates for decades, one result was "MIT" deciding that what an MIT EECS degree meant would be substantially different.
It may have been that it just took 7 years to actually get a new course in place, but it wasn't until fall 2007 that MIT officially dropped 6.001 as a required course, well after the dot-com crash.
There were a TON of changes to the MIT EECS curriculum at that time, so perhaps it was a holistic response to the dot-com crash that went beyond just 6.001.
Eh. Many students will have the rest of their lives to perfect the art of poking at a library. Getting the chance to play with the more sublime CS stuff is much harder outside of university.
Students also have the rest of their lives to learn the sublime CS stuff. Getting as many students motivated to learn CS, with Python, is a perfectly reasonable goal for a college.
This is how CMU does it. First course: Python. Most people who already have a CS background (usually AP CS) and are ready to dive in skip it. People who aren't sure take it and typically have a phenomenal experience. Final project is "make something using a Python graphics library and spend at least 20 hours on it". Then they have a little demo day where they show off the best of the projects. Second course: data structures and algorithms and C.
That's a perfectly reasonable goal for a high school or someone performing self study. The whole point of a university is to impart knowledge of subtle things.
Exactly. By all means force your students to use different libraries and program in modern paradigms that may be centred around different libraries, but I wouldn't do a CS or SE degree where that was the focus. I didn't go to uni to learn any particular programming language; that was just a side effect of everything else I learnt about computers.
The only thing that makes it harder is lack of time, but that problem exists in university too. On your own you can work on your own schedule during limited free time; at university you can devote a lot of time to it, but on someone else's timeline.
I really wish SICP had been a 2nd-year course (with a requisite increase in difficulty) instead of my very first course in the EECS department. Not having had much background in programming beforehand, I feel that a lot of what SICP has to offer was lost on me to some degree, due to my not appreciating it at the time.
I suppose the same could probably be said for any intro course or just college in general...
I took 6.001 in the late nineties and hated it/did very poorly. I found it waaaaaay too difficult as a freshman. Oddly, doing a bunch of C++ and Java in high school made it even more difficult. Biggest problems:
1. the programming environment (an Emacs clone in Scheme) had an extraordinarily steep learning curve
2. S-expressions were hugely difficult to visually parse and edit vs. languages with more familiar syntax.
3. very little exposure to practical projects in the class - felt like constantly working on toy projects or impractical abstract constructions
I got a lot more out of 6.170 (software engineering lab) and the other computer science classes.
I have a much greater appreciation for the class now after 15+ years and recently worked through SICP again. It's much easier with more programming experience, not to mention Emacs paredit mode.
I always thought 6.001 should have been a 2nd or 3rd year course. I would have gotten a lot more out of it.
I first read SICP 10 years into my programming career. I felt like that was a good point to read the book. Any earlier and much of it would have been lost on me. But maybe this is the kind of book that you read and re-read and always find new things.
My university used to have an Engineering Fundamentals class that all Engineering students were required to take their first year. Among several other things, it taught how to use Excel and more importantly, programming using Fortran. It wasn't in-depth, but you learned what programming was in a pretty easy environment.
We had a similar freshman pan-engineering course at Texas A&M that taught both Excel and Fortran, among other things. Probably the most practically useful course in the entire program.
When 6.001 was introduced, a surprisingly large number, perhaps a majority, of MIT students arrived without having learned how to program. SICP was their first exposure to programming.
At my university it was a 3rd or 4th year course (depending on how quickly you were able to knock out the prereqs), and I think that was a great approach. At that point you've used programming languages enough that creating your own interpreter is a fascinating experience.
I was in the opposite situation. I'd been programming (mostly self-taught) for so long in imperative languages that I really struggled with 6.001, ultimately dropping the course. (I was course 2 taking .001 for "fun".)
In Math, you take Calculus. It's four semesters. Then you take Real Analysis which is the exact same material all over again, but you actually learn it and prove the theorems instead of just memorizing a few formulas and blindly applying them.
It's the same in Economics where you take Micro and Macro and then junior or senior year you do the same all over again but this time you take it seriously with logical reasoning instead of just memorizing a couple stock phrase explanations.
SICP wants to be both. It's a great book so maybe it can do both. But it's hard for a class to do both.
All that said, it is still worthwhile to work through all of SICP, if you want a deeper understanding of how certain tools work. Writing your own interpreter is a very rewarding experience.
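To give a taste of why writing an interpreter is rewarding, here is a minimal sketch, far simpler than SICP's metacircular evaluator and written in Python rather than Scheme, of evaluating S-expressions represented as nested Python lists. The supported forms and the `GLOBAL_ENV` name are my own illustrative choices.

```python
import operator

def evaluate(expr, env):
    """Evaluate an S-expression represented as nested Python lists.
    Symbols are strings; numbers are plain Python numbers."""
    if isinstance(expr, str):          # variable reference
        return env[expr]
    if not isinstance(expr, list):     # literal number
        return expr
    head = expr[0]
    if head == "if":                   # (if test then else)
        _, test, then, alt = expr
        return evaluate(then if evaluate(test, env) else alt, env)
    if head == "lambda":               # (lambda (params...) body)
        _, params, body = expr
        return lambda *args: evaluate(body, {**env, **dict(zip(params, args))})
    # Otherwise it's an application: evaluate operator and operands, then apply.
    fn = evaluate(head, env)
    args = [evaluate(a, env) for a in expr[1:]]
    return fn(*args)

GLOBAL_ENV = {"+": operator.add, "-": operator.sub,
              "*": operator.mul, "<": operator.lt}

# evaluate(["*", 2, ["+", 3, 4]], GLOBAL_ENV)                      -> 14
# evaluate([["lambda", ["x"], ["*", "x", "x"]], 5], GLOBAL_ENV)    -> 25
```

Even this toy makes the core ideas tangible: environments are just dictionaries, closures are just functions that remember an extended environment, and everything else is recursion.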
Having to write my own shell in C for my operating systems class absolutely blew my mind. And then after that my professor made us implement a quine in the shell we just wrote. Yes, he was insane.
Which class/university was it? Any chance the materials are online?
I'm actually writing my own shell now; I'd be interested to compare :)
The fork/exec/pipe/dup calls are definitely an unusual and powerful paradigm. I think too many programmers don't know this because we were brainwashed to make "portable" programs, and that's the least portable part of Unix.
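To illustrate the paradigm, here is a hedged sketch (POSIX-only, and `run_pipeline` is a hypothetical helper name, not any shell's actual implementation) of how a shell wires up `cmd1 | cmd2` using exactly those calls:

```python
import os

def run_pipeline(cmds):
    """Run a list of argv lists as a pipeline (cmd1 | cmd2 | ...),
    returning the last command's stdout as bytes. POSIX-only sketch."""
    out_r, out_w = os.pipe()               # parent captures the final output here
    prev_r = None                          # read end of the previous stage's pipe
    pids = []
    for i, argv in enumerate(cmds):
        if i < len(cmds) - 1:
            next_r, next_w = os.pipe()     # pipe to the next command
        else:
            next_r, next_w = None, out_w   # last command writes to the capture pipe
        pid = os.fork()
        if pid == 0:                       # child process
            if prev_r is not None:
                os.dup2(prev_r, 0)         # stdin <- previous command's output
            os.dup2(next_w, 1)             # stdout -> next command (or capture pipe)
            # Close every leftover descriptor so readers eventually see EOF.
            for fd in (out_r, out_w, prev_r, next_r, next_w):
                if fd is not None and fd > 2:
                    try:
                        os.close(fd)
                    except OSError:
                        pass
            try:
                os.execvp(argv[0], argv)   # replace this child with the command
            except OSError:
                os._exit(127)
        pids.append(pid)
        if prev_r is not None:
            os.close(prev_r)
        if next_w is not out_w:
            os.close(next_w)
        prev_r = next_r
    os.close(out_w)
    chunks = []
    while True:
        chunk = os.read(out_r, 4096)
        if not chunk:
            break
        chunks.append(chunk)
    os.close(out_r)
    for pid in pids:
        os.waitpid(pid, 0)
    return b"".join(chunks)
```

For example, `run_pipeline([["echo", "hello"], ["tr", "a-z", "A-Z"]])` plays the role of `echo hello | tr a-z A-Z`. Most of the subtlety is in the descriptor bookkeeping, which is precisely the part that higher-level "portable" APIs hide.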
Genuinely curious: what are some of the interesting things in writing your own shell? Was it the programming language interpretation aspect, or having to learn the kernel's API for forking tasks and redirecting, etc.?
This is going to sound crazy, but in eighth grade I took a CS class where they had us create an interpreter in Lisp. It was amazing, and I haven't done anything quite like it since (though Dan Boneh's crypto class on Coursera was close).
It's good to know about the existence of quines. But people are (presumably) paying good money for (and more important: investing their perfectly valuable time in) their education, and there's only so much time in a systems programming class, and so much more fundamental stuff to cover (or cover more robustly).
The people who really need to figure out how to write quines will no doubt find time to do so, in the dead of night, no matter what you may try to do to stop them. (And try to make them 3 bytes shorter than the shortest known version in that particular language). The rest -- they just need to know that they're out there.
I think going through SICP, and most of all writing my own interpreter, made me an all-around better programmer. I noticed my systems thinking clarified considerably, not in an aloof way but by providing actual, actionable insights into how to implement and structure things.
I took 6.001 as an undergrad at MIT. It changed my view of software forever.
Years later, I now spend most of my time training programmers at hi-tech companies in Python. The ideas that I got from 6.001 are pervasive in my training, and are fundamental (I think) for programmers to understand if they want to do their jobs well.
Given that I teach tons of Python, you might think that I'm happy that they switched to Python as the language of instruction. That's not the case; I think that Lisp offers more options for thinking about programming in different ways.
One of the great things about 6.001 was that it didn't try to teach you how to program for a job. It taught you how to think like a programmer, so that you could easily learn any language or technology they threw at you.
Poke all you like. I'll just be over here writing software that isn't a broken pile of hacks. SICP is one of the most important books I haven't read. It's actually on my shelf right now. I still haven't finished it.
The fact is, as a self-taught programmer, I find programming intimidating. I can reason about code, and write it, and understand it if I squint at it long enough, but I still choke on any production code I read, and have trouble solving complex problems. SICP is a book all about this. It's about building better abstractions, and despite my having not yet finished it, it is probably the most helpful book on programming I've read. Many books can teach you how to program. SICP teaches you how to program /well/, and why.
Along with The Lambda Papers, it taught just how powerful Scheme could be. And I maintain that Lambda the Ultimate Declarative contains the only good explanation of what a continuation DOES.
It was the book that made me want to go to MIT. I don't know if I'll ever go there (I'm still in high school), but if the people there are advocating "programming by poking," it probably wouldn't be worth my time.
This book changed my life, and I haven't even finished it yet. It should be Required Reading™, and the thought of doing it in Java makes me sick. And not just because I despise Java. Java is probably the worst language for this sort of book. SICP is an exploration of programming paradigms and concepts. Java is married to one set of paradigms and concepts, and won't budge an inch against even a hydrogen bomb.
Besides, imagine a metacircular evaluator in Java. Yuck.
Some day we will recognize that some areas of "programming" are very different and require different skill sets, and eventually different titles.
We tend to call everything "software engineering" so that everybody can feel proud of such a title ("I'm an engineer"), but engineering is certainly not about figuring out how to vertically center divs with CSS (and it's not about proving algebra theorems either, even if that can be essential for specific problems that require it).
I can't imagine Linux and PostgreSQL being built without "science", they use a lot of it, and I'm pretty sure the authors all have read SICP and those theoretical books.
Poking at things has proved efficient for building things quickly, but it's just not how one builds critical systems/software that are robust, efficient, and maintainable.
Engineering (no matter whether mechanical, electrical, software...) is the process of designing an artifact to be constructed out of available materials, which meets a set of requirements while minimizing cost.
In mechanical engineering you design your artifact using off-the-shelf bearings, motors, pumps, etc.
In electrical engineering you design your artifact using off-the-shelf cables, contactors, relays, VSDs etc.
In electronic engineering you design your artifact using off-the-shelf ICs, resistors, capacitors, resonators etc.
In IC engineering you design your artifact using off-the-shelf silicon wafers, etching chemicals, core/logic designs etc.
It's turtles all the way down, and software is no different.
Most engineers are working under some quantifiable or standard set of requirements, rather than ad hoc "make it work good and look pretty" requirements. Most engineering disciplines also have processes to ensure that its adherents take the proper precautions to avoid poor and unsafe designs, delivered in standardized sets of guidelines and recommendations.
And many programmers aren't engineers, they're just interested tinkerers; people who play around in their free time enough to know how to make something work. Not unlike if you went to the store, bought some wires and batteries and tools, and then played with them until you got hired as an electrician.
Sharing culture is instinctive. People will do it. You might as well try to tell people they can't have sex without your permission unless they pay first. Oh wait, that's the porn industry. Everyone pays for porn, right? Oh, wait.
> I can't imagine Linux and PostgreSQL being built without "science", they use a lot of it, and I'm pretty sure the authors all have read SICP and those theoretical books.
Unless you restrict "authors" to the people who worked on the original Postgres95 (and maybe not even then), I'm certain that that's not generally the case (I'm a Postgres committer and haven't read SICP).
Software Engineering is more about methodically solving software problems than it is about which problems are being solved. A web developer who writes rigorous formal tests for a new page is engineering just as hard as an embedded developer writing rigorous acceptance tests for a board-support package. The engineering comes from the rigor and the fact that there is a controlled process for how software features get implemented.
I agree with that. The engineering process can be applied to every type of problem.
Yet, for that specific web-dev problem, I haven't seen any way of formally testing the rendering of web pages that would make it consistent across browsers. The testing process (almost) always leaves that up to the developers themselves, and refreshing pages is the norm.
I learned to program on a course that follows SICP, I spent all my college years learning how to program from first principles, building all the pieces from scratch. Building compilers, soft threads implementations, graph parsing algorithms... and I was happy with that way of programming!
Today I'm an iOS developer, I spend most of my day 'poking' at the Cocoa Touch framework whose source I can't even read because it's closed source. The startup I work for moves so fast that I'm forced to use other peoples open source projects without even having the time to read the source. I feel miserable doing the kind of programming I do nowadays! I wish I could go back to simpler times, where I actually had to reason about algorithms, data structures, hardware, etc...
This is one of the reasons why I moved away from product side programming to IT. My brain screams in pain, when I do lot of context switches, and I figured this is not going to make me a better programmer. The fun stuff is in handling data, not learning layers of apis(for me).
Now working in enterprise is not for everyone - the politics etc, but for me, it still beats the pain of working in a smelly, loud, fast context-switching, agile-kanban startup.
But I think the lucky ones are people who get to work low enough relative to their knowledge where it doesn't feel they are dealing with endless abstractions and layer upon layer of magic.
:( sounds to me that this might be more the problem. And worse is I suspect it's common.I've done the full range, from entire games written in assembly language and embedded C code, through high level full stack development with NodeJS, Python, and other languages.
The low-level coding is far more skilled work than the high level coding. Instead of most of the work being "how do these plug in to each other?", it's how exactly does my code function with respect to all of the potential events additional threads, and how all of the edge cases interact, and what is causing this obscure bug?
While that may not seem intrinsically harder, none of these are typically something you can Google or search StackOverflow for the answers on. So you're really just on your own. And developers who have grown up on "Google all the answers!" often hit a wall when they need to apply real problem solving skills.
Luckily I can find enjoyment in many levels, since a lot of the jobs I've found recently have been more of the "full stack" or "mobile development' category. It's easy and fun work.
I also have little problem piercing the magic and understanding how things fit together, but that means that I end up with opinions on many topics divergent with the crowd. For instance, I avoid writing Android/Java or iOS/Swift, and instead use cross-platform development stacks exclusively. Yes it means an entire extra layer of complexity, but it also means I write the app once for all target platforms. Far too big of a win to ignore.
SICP is one example, but I bet it's even possible to be happy coding GUIs or any other rather mundane things if one does it with an elegant toolset. For example, Oz comes to mind.
Some modern tools are not pleasing to use, often because architectures are a mess. Things move forward too quickly.
Now I do Python. And with Python I "got" something too : writing tools, coding, etc. is not like mathematics (like Scheme) anymore. It became a social activity. I'm in the field of ERP right now (arguably not the most theory-oriented stuff). It's because I spend most of time using API's by other and making API's in the hope that they will be used by other (none of those API being worth much in terms of computer "science" (stuff like algorithm). I'm also building tools to augment productivity of other people; which is also quite social.
So from the abstract Scheme programming, I've moved to Python social programming. That keeps me happy (I must confess that studying Voronoi diagrams, or 3D structures packing remain what made me really tick :-) )
Deleted Comment
Join the TXR project:
http://www.nongnu.org/txr
http://www.kylheku.com/cgit/txr/tree/
I made some 3000 commits all by myself since 2009. The main reason why this exists is the above: a project where I can stuff ideas, algorithms, whatever, into a coherent whole that has regular releases and is well-documented externally. And this coherent whole is useful to me in many ways.
This.
Moving fast, sprinting, only looks like you are making progress. At some point the debt comes too much
I will probably move to embedded soon. Problem is I have a lot of experience with iOS but not that much with embedded; I'm willing to take a pay cut just to get out of iOS, but there's a limit to how low I can go.
There are also a lot more iOS jobs than embedded jobs!
edit: Other than C, I'm very curious about rust and nim-lang. I do not know C++ though. Hope that won't be a problem.
I'm naive but curious and I do not want to make guesses.
I wonder how are we going to preserve knowledge about programming from first principles if, under pressure from corporations and lazy peers, no one does it anymore?
There are also a lot more of these 'poking' jobs compared to first-principles coding jobs. Especially for someone like me who has never worked for huge corporations.
Oh, wait.
"...Sussman said that in the 80s and 90s, engineers built complex systems by combining simple and well-understood parts. The goal of SICP was to provide the abstraction language for reasoning about such systems.
Today, this is no longer the case. Sussman pointed out that engineers now routinely write code for complicated hardware that they don't fully understand (and often can't understand because of trade secrecy). The same is true at the software level, since programming environments consist of gigantic libraries with enormous functionality. According to Sussman, his students spend most of their time reading manuals for these libraries to figure out how to stitch them together to get a job done. He said that programming today is 'More like science. You grab this piece of library and you poke at it. You write programs that poke it and see what it does. And you say, "Can I tweak it to do the thing I want?"' ... "
Surely graduates of MIT, of all places, would be the ones building the libraries that ordinary programmers are stitching together and would still need the ability to build complex systems from fundamental CS abstractions? I'm absolutely baffled.
Ordinary programmers everywhere are building those libraries, just like your assumed wunderkinds are building programs out of other people's libraries. The nature of programming has changed for >everyone<.
[UPDATE:] One of the symptoms of no one understanding the fundamentals is how excited people get about things like XML and JSON, both of which are just (bad) re-inventions of S-expressions.
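The point that JSON is a re-invention of S-expressions is easy to make concrete: the same nested tree round-trips through a toy S-expression reader into exactly the structure `json.loads` produces. (The `parse_sexp` helper below is written purely for illustration; it is not from any library.)

```python
import json

def parse_sexp(text):
    """Toy S-expression reader: returns nested lists of ints/strings."""
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()

    def read(i):
        if tokens[i] == "(":
            node, i = [], i + 1
            while tokens[i] != ")":
                child, i = read(i)
                node.append(child)
            return node, i + 1          # skip the closing ")"
        tok = tokens[i]
        return (int(tok) if tok.lstrip("-").isdigit() else tok), i + 1

    node, _ = read(0)
    return node

# The same tree, once as an S-expression and once as JSON:
as_sexp = parse_sexp("(user (name alice) (ids (1 2 3)))")
as_json = json.loads('["user", ["name", "alice"], ["ids", [1, 2, 3]]]')
assert as_sexp == as_json
```

The only real differences are surface syntax and which atom types are built in, which is the parent's point.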
"I asked him whether he thought that the shift in the nature of a typical programmer’s world minimizes the relevancy of the themes and principles embodied in scheme. His response was an emphatic ‘no’; in the general case, those core ideas and principles that scheme and SICP have helped to spread for so many years are just as important as they ever were"
From: https://cemerick.com/2009/03/24/why-mit-now-uses-python-inst...
If anything, I would think Sussman is more practical, and understands what the world needs/expects (now).
Literally any programmer who hasn't read SICP before will benefit from it. I think the principles still apply.
Don't get me wrong, if you are going for post-graduate studies such a course will always be relevant, but it sounds like he is talking within the context of undergraduates. And in the context of undergraduates, I too would be circumspect of how useful it would be for preparing you for your first job as a Software Engineer.
Their choice to go toward a Python-based course at the undergraduate level would also seem to reaffirm this view from afar...
What is ridiculous in the face of this "programming by experimentation" fantasy is that programming has evolved since the 1980s... to be even more about composable abstractions with provable semantics. Hindley-Milner-Damas types and monads are now everywhere.
I am also grateful to have taken the condensed version of 6.001. You do need the ability to understand those abstractions in order to be an informed shopper though.
"MIT" doesn't necessarily mean "super-elite programmer". I work in an office that's half MIT grads, and the non-MIT half are pretty much equivalently good (though with much worse connections around Cambridge). That's not to say MIT sucks or anything, but more to say that with luck, a really solid CS or EECS degree gets the student up to being able to build important components from scratch at all, which isn't necessarily the level needed to build those components from scratch for public release or for profit. That latter goal requires a good education followed by professional training and experience.
I watched the SICP videos, and I remember Abelson specifically endorsing just that.
This was his polite implicit criticism of the new core, which among other things also teaches much less in the way of EE fundamentals, a topic he's cared about very much since at least the late '70s (i.e. 6.002 is no longer a required course for all EECS majors).
The bottom line is that in the post-dot.com crash panic which saw MIT EECS enrollment drop by more than half, after it had been steady at 40% of the undergraduates for decades, one result was "MIT" deciding that what a MIT EECS degree meant would be substantially different.
There were a TON of changes that happened with the MIT EECS curriculum at that time, so perhaps it was a holistic response to the dot com crash that was beyond just 6.001.
I suppose the same could probably be said for any intro course or just college in general...
1. the programming environment (an Emacs clone in Scheme) had an extraordinarily steep learning curve
2. S-expressions were hugely difficult to visually parse and edit vs. languages with more familiar syntax
3. very little exposure to practical projects in the class - it felt like constantly working on toy projects or impractical abstract constructions
I got a lot more out of 6.170 (software engineering lab) and the other computer science classes.
I have a much greater appreciation for the class now after 15+ years and recently worked through SICP again. It's much easier with more programming experience, not to mention Emacs paredit mode.
I always thought 6.001 should have been a 2nd or 3rd year course. I would have gotten a lot more out of it.
That's inconceivable to me now.
It's the same in Economics, where you take Micro and Macro, and then junior or senior year you do the same all over again, but this time you take it seriously, with logical reasoning instead of just memorizing a couple of stock-phrase explanations.
SICP wants to be both. It's a great book so maybe it can do both. But it's hard for a class to do both.
I'm actually writing my own shell now; I'd be interested to compare :)
The fork/exec/pipe/dup calls are definitely an unusual and powerful paradigm. I think too many programmers don't know this because we were brainwashed to make "portable" programs, and that's the least portable part of Unix.
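For anyone who hasn't seen that paradigm in action, here is a minimal POSIX-only sketch of how a shell wires a child's stdin/stdout through pipes with exactly those calls. It assumes `tr` is on the PATH; the function name `pipe_upper` is just an illustrative choice, not any standard API.

```python
import os

def pipe_upper(data: bytes) -> bytes:
    """Run `tr a-z A-Z` as a child process, feeding it `data` on stdin
    and collecting its stdout, using only fork/exec/pipe/dup2.
    (Small inputs only: we write everything before reading.)"""
    in_r, in_w = os.pipe()      # parent -> child
    out_r, out_w = os.pipe()    # child -> parent

    pid = os.fork()
    if pid == 0:                             # child
        os.dup2(in_r, 0)                     # stdin  <- pipe
        os.dup2(out_w, 1)                    # stdout -> pipe
        for fd in (in_r, in_w, out_r, out_w):
            os.close(fd)                     # close originals so EOF works
        os.execvp("tr", ["tr", "a-z", "A-Z"])

    os.close(in_r)
    os.close(out_w)
    os.write(in_w, data)
    os.close(in_w)                           # signals EOF to the child

    chunks = []
    while (chunk := os.read(out_r, 4096)):
        chunks.append(chunk)
    os.close(out_r)
    os.waitpid(pid, 0)
    return b"".join(chunks)
```

The dup2-then-close dance is the whole trick: the child rearranges its own file descriptors before exec, so the exec'd program needs no cooperation at all.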
More like a pedantic jerk.
It's good to know about the existence of quines. But people are (presumably) paying good money for (and more important: investing their perfectly valuable time in) their education, and there's only so much time in a systems programming class, and so much more fundamental stuff to cover (or cover more robustly).
The people who really need to figure out how to write quines will no doubt find time to do so, in the dead of night, no matter what you may try to do to stop them. (And try to make them 3 bytes shorter than the shortest known version in that particular language). The rest -- they just need to know that they're out there.
I took 6.001 as an undergrad at MIT. It changed my view of software forever.
Years later, I now spend most of my time training programmers at hi-tech companies in Python. The ideas that I got from 6.001 are pervasive in my training, and are fundamental (I think) for programmers to understand if they want to do their jobs well.
Given that I teach tons of Python, you might think that I'm happy that they switched to Python as the language of instruction. That's not the case; I think that Lisp offers more options for thinking about programming in different ways.
One of the great things about 6.001 was that it didn't try to teach you how to program for a job. It taught you how to think like a programmer, so that you could easily learn any language or technology they threw at you.
Oh, well.
The fact is, as a self-taught programmer, I find programming intimidating. I can reason about code, and write it, and understand it if I squint at it long enough, but I still choke on any production code I read, and have trouble solving complex problems. SICP is a book all about this. It's about building better abstractions, and despite my not yet having finished it, it is probably the most helpful book on programming I've read. Many books can teach you how to program. SICP teaches you how to program /well/, and why.
Along with The Lambda Papers, it taught just how powerful Scheme could be. And I maintain that Lambda the Ultimate Declarative contains the only good explanation of what a continuation DOES.
It was the book that made me want to go to MIT. I don't know if I'll ever go there (I'm still in high school), but if the people there are advocating "programming by poking," it probably wouldn't be worth my time.
This book changed my life, and I haven't even finished it yet. It should be Required Reading™, and the thought of doing it in Java makes me sick. And not just because I despise Java. Java is probably the worst language for this sort of book. SICP is an exploration of programming paradigms and concepts. Java is married to one set of paradigms and concepts, and won't budge an inch against even a hydrogen bomb.
Besides, imagine a metacircular evaluator in Java. Yuck.
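For contrast, the heart of a metacircular-style evaluator really is tiny when expressions are just nested lists. Here is an illustrative Python sketch of the eval/apply core only (the name `seval` and the three-operator environment are my own; it handles application but no special forms, so it is nowhere near SICP's full evaluator):

```python
import operator

GLOBAL_ENV = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def seval(expr, env=GLOBAL_ENV):
    """Evaluate a Scheme-like expression given as nested Python lists:
    numbers are self-evaluating, strings are symbols looked up in env,
    and a list (op arg ...) is evaluated recursively and applied."""
    if isinstance(expr, (int, float)):
        return expr                      # self-evaluating
    if isinstance(expr, str):
        return env[expr]                 # symbol lookup
    fn, *args = (seval(e, env) for e in expr)
    return fn(*args)                     # apply

seval(["*", ["+", 1, 2], 4])  # (* (+ 1 2) 4) -> 12
```

In Java you would first have to invent a class hierarchy just to represent the expressions, before writing a line of the evaluator itself.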
We tend to call everything "software engineering" so that everybody can feel proud of such a title ("I'm an engineer"), but engineering is certainly not about figuring out how to vertically center divs with CSS (and it's also not about proving algebra theorems either -- even if it can be essential when it comes to specific problems that require it).
I can't imagine Linux and PostgreSQL being built without "science", they use a lot of it, and I'm pretty sure the authors all have read SICP and those theoretical books. Poking at things proved to be efficient to building things quickly, but it's just not how one builds critical systems/software that are robust, efficient and maintainable.
In mechanical engineering you design your artifact using off-the-shelf bearings, motors, pumps, etc.
In electrical engineering you design your artifact using off-the-shelf cables, contactors, relays, VSDs etc.
In electronic engineering you design your artifact using off-the-shelf ICs, resistors, capacitors, resonators etc.
In IC engineering you design your artifact using off-the-shelf silicon wafers, etching chemicals, core/logic designs etc.
It's turtles all the way down, and software is no different.
And many programmers aren't engineers, they're just interested tinkerers; people who play around in their free time enough to know how to make something work. Not unlike if you went to the store, bought some wires and batteries and tools, and then played with them until you got hired as an electrician.
Sharing culture is instinctive. People will do it. You might as well try to tell people they can't have sex without your permission unless they pay first. Oh wait, that's the porn industry. Everyone pays for porn, right?
Unless you restrict "authors" to the people who worked on the original Postgres95 (and maybe not even then), I'm certain that's not generally the case: I'm a Postgres committer and I haven't read SICP.
Yet, for that specific web dev problem, I haven't seen any way of formally testing the rendering of web pages that would guarantee consistency across browsers. The testing process almost always leaves that up to the developers themselves, and refreshing pages is the norm.