For anyone not in the loop, Norvig is the author of Paradigms of Artificial Intelligence Programming. It was a substantial contribution to educational computer science literature, and it helped kickstart the idea that the way to learn is to read code, not just write it.
It's also one of the few AI books that isn't rooted squarely in Algol. It's written in fairly decent, though not always portable, Common Lisp, just like most Common Lisp books of the era.
For a native web copy, abuse Safari Online's free trial. It's what I assume everyone else does when they want to read a niche technical book that O'Reilly put out but doesn't print another run of.
Depending on the day, the book ranges anywhere from $2 to $60 on Amazon, used, if you want a hard copy.
(Edited to fix the very butchered title that I wrote in error initially.)
Computer science, historically, has been a field filled with books that encourage you to write code while not particularly having you read much of it. K&R is a good example of this. PAIP went against this notion, and has you spend much of the book reading code.
Yes, this is taking the way learning is done in almost every case and field and applying it to programming. No, it still hasn't really caught on universally in computer science.
Norvig presents complete programs for the reader to read and modify, instead of small snippets to practice with. You spend more of your time reading, trying to comprehend the program as a whole.
I would have mentioned that, but it's mentioned in the Stanford post and is also less interesting from a "You Should Know" standpoint on a website built on a Common Lisp-inspired Lisp. Not to mention that the book isn't as good as PAIP.
Hey there! I'm one of the folks who has worked (and is still working) on making PAIP readable online. The Safari version is captured in the epub, so no trial is needed.
Yes, I know. That's why I pointed out that zipped HTML was available, but that's not quite a native web copy! Especially given how the zipped HTML in the epub format is usually presented (a million tiny HTML files in a single directory).
Hey Dr. Norvig, I really enjoyed your podcast episode with Lex Fridman a couple of years ago. Once you get settled in, it would be great to hear an update on how it's going, some color around the background and objectives of the program, and maybe just a riff on the subject for a bit. Thanks!
I think sadly Fridman has since left the path of conducting interesting interviews with accomplished AI researchers and now caters to a kind of vapid pseudo-philosophical TED-talk crowd.
While we're on the topic, one of my favorite blog posts of all time is his "Teach Yourself Programming in Ten Years":
https://www.norvig.com/21-days.html
That sounds about right. Whenever I have to revisit code I wrote more than three years ago, I tend to think "Why the hell did I do it that way?"
Occasionally, now that I know the trend, I'll even leave a comment apologizing to my future self. When I encounter those past comments, my general sentiment is something like "yeah, thanks for the spaghetti, asshole; would half a day to clean this up really have been so hard?"
And sometimes I remember the circumstances of that spaghetti, boiled around 2:00 AM, and remember "nope, wasn't time". Even though now I can do it better and faster.
It's funny you mention this, as there is a big thread above that discusses his contribution to learning by reading, and his second point here is "Program. The best type of learning is learning by doing".
I ran Search for five years or so; then ran all of Research for the next five; then had an increasingly smaller portion of a huge, growing Research initiative. This past year I enjoyed mentoring startups in ML through the Google for Startups program. But m0gz got it pretty much right.
Sorry for a snarky comment, but doesn't that cover the period when HN started noticing that Search stopped returning results for the query that was requested and instead started being "too clever" about it, with no way to override?
He was their first Director of Search Quality, and then switched to being Director of Research. IIRC he had a VP over him when I left (2014), but was still largely calling the shots in Research. Google Research had some very large wins in the mid/late '00s - its speech recognition and machine translation programs came out of that era.
AIUI Norvig was also instrumental in Google's research philosophy, which is to embed research teams alongside the products they're developing rather than having a separate research lab that throws papers over the wall for later implementation. Somewhat ironic, given that he ended up heading the dedicated Research department, but Research was viewed as a sort of incubator whose successful projects would be "adopted" by some other product team. He's the reason machine learning is pervasive at Google and ordinary SWEs use TensorFlow, rather than it being the sole province of Ph.D.s.
I saw that others, including him, replied, but, fun fact, this snippet has been on his site for years:
"Note to recruiters: Please don't offer me a job. I already have the best job in the world at the best company in the world. Note to engineers and researchers: see why." (sic)
Probably the same thing such folks do elsewhere: allow some very smart people to not step on the rakes quite as often, and to be even smarter, by providing perspective and advice.
"One way to think of AI is as a process of optimization — finding the course of action, in an uncertain world, that will result in the maximum expected utility"
This specifically reminds me of reinforcement learning.
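Norvig's framing can be made concrete with a toy sketch. (The actions, probabilities, and payoffs below are invented purely for illustration; they don't come from the article or the thread.)

```python
def expected_utility(action, outcomes):
    """Sum each outcome's utility weighted by its probability."""
    return sum(p * u for p, u in outcomes[action])

# Each action maps to a list of (probability, utility) pairs
# describing an uncertain world. Values here are made up.
outcomes = {
    "stay": [(1.0, 5.0)],                 # certain, modest payoff
    "gamble": [(0.5, 12.0), (0.5, 0.0)],  # risky, higher upside
}

# "Finding the course of action ... that will result in the
# maximum expected utility" is just an argmax over actions.
best = max(outcomes, key=lambda a: expected_utility(a, outcomes))
print(best)  # prints "gamble": 0.5 * 12.0 = 6.0 beats 5.0
```

Reinforcement learning fits the same frame, except the probabilities and utilities aren't given up front and have to be estimated from experience.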
"Now that we have a great set of algorithms and tools, the more pressing questions are human-centered: Exactly what do you want to optimize? Whose interests are you serving? Are you being fair to everyone? Is anyone being left out? Is the data you collected inclusive, or is it biased?"
This is like a meta-view of automation. When tasks were automated, people were able to do higher-order thinking. For example, when people no longer had to do menial Excel spreadsheet work, they could glean more meaningful insights from the data.
"The next challenge is to reach people who lack self-confidence, who don’t see themselves as capable of learning new things and being successful, who think of the tech world as being for others, not them"
This is the same insight Ajay Banga (CEO of Mastercard) had about giving financial access to the Next Billion Users.
The Pentagon software developers with a kindergarten-grade education in AI can route through Stanford as part of their professional development and climb the career ladder.
It can be read here, in mobi or zipped HTML format: https://github.com/norvig/paip-lisp/releases/tag/1.1
Or here, in PDF: https://github.com/norvig/paip-lisp/releases/tag/v1.0
Can you expand on this? Nearly every field is learned with extensive reading.
The former already seems like a big project; the latter sounds impossible. (I'm assuming you're not skimming and are actually doing the problem sets.)
Is that in rambling prose? Dense math?
> We have noticed an unusual activity from your IP and blocked access to this website.
https://www.udacity.com/course/design-of-computer-programs--...
How did you manage to keep sharp at coding?