Sometimes I wonder if I'm using the same ChatGPT as everyone else. I pay for GPT-4 access in ChatGPT, and it is a really terrible writer. I also pay for Copilot, and I can count the number of times it has been helpful on one hand. I'm trying to get on this hype train, but I'm left with an empty feeling every time I try to seriously spend some time with these new tools.
I feel like this is the bias of a very smart, highly educated, highly capable person. If you are already an excellent writer and coder, can take vast fields of knowledge and blossoming ideas and spit them out onto paper in skilled form with minimal anxiety, GPT probably does very little for you. Maybe you could see if you were missing something very obvious and average.
I read for probably 8-12 hours a day. I write every day. I think every day. I’ve been doing this for decades. GPT has been like a talking dog. It’s an insane achievement, but he’s not the world’s best conversationalist.
But for 17-year-old me, struggling all night to write a three-page paper filled with unoriginal ideas, it would have been a game changer.
The thing is, this will rob a whole generation of that learning. Struggling to learn to write, to code, and all the rest is how you become a professional. A lot of people will never develop these skills, and the chasm between the talented and untalented will only grow.
I think the thing to remember is that there are far more people like me than you. People who are poor writers or middling thinkers outside of specific domains.
I agree that GPT isn’t much of a conversationalist. But it is an exceptionally good tool for some of us. It has already proved invaluable for me in helping diagnose a medical issue with a family member, one that escaped several ICU doctors and nurses. And a couple of weeks ago it helped me fix an issue with my car.
A good analogy might be a calculator. There are some people who can do calculations in their heads near instantly. And some of these people can do arithmetic on numbers too large even to be represented by calculators. But for most of us, calculators were immensely useful.
I would add: And doing the same thing over and over.
I find GPT very useful in that I can quickly become mediocre in any domain. Need a legal filing? I can get a good-enough legal filing. Need some code in a framework I've never used? I can get it instantly. Need something written in the language of some esoteric field of science? I can get that instantly from a bullet point list.
It doesn't do the hard parts well: What's being written, and how it's structured. That's where I fit back in.
But the only purpose of writing that paper is for you to practice doing research, thinking and writing. Most people who write deep, thought-provoking work started out doing assignments like that. What could possibly be the point of getting a computer program to do it for you?
I think you might be right. I can get GPT-4 to produce serviceable text on factual topics, if I prompt and guide it correctly. That's about it. Honestly, I'd very much prefer it if my actual dog could talk to me.
I'm not sure about "17 year old me struggling to write a short paper full of unoriginal ideas," but it sure could help me write that paper titled "Bullshit, bullshit, etc., etc., bullshit" that I always wanted an excuse to turn in for some college course or another. ;)
My worry is the innovator's dilemma. Given a few generations (software generations, not human) of continuing exponential improvement in capability of LLMs (or whatever replaces them), I worry that those who have spent much more time learning to co-work with a GPT-alike (or replacement) will move to the fore.
The tools are at their most useful when they're replacing Stack Overflow searches for common tasks.
An example I had last night: how do you split a pdf into separate pages on the command line, and how do you put them back together again (I replaced one of the pages with a scanned signed version).
This is a basic task, a beginner's task in the world of pdf munging, but I didn't know the answer off the top of my head. ChatGPT gave me command lines straight away.
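For reference, with poppler-utils the two steps look roughly like this (filenames are placeholders; pdftk or qpdf would work just as well):

```shell
# split input.pdf into one file per page: page-1.pdf, page-2.pdf, ...
pdfseparate input.pdf page-%d.pdf

# replace page 2 with the scanned, signed version
cp signed-page.pdf page-2.pdf

# stitch the pages back together in order
pdfunite page-1.pdf page-2.pdf page-3.pdf merged.pdf
```

Exactly the kind of recipe that's obvious once you see it, but not worth memorizing.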
That's what they're good for right now. Beginner / common tasks in areas you're not familiar with.
And a further implication: they're great for one-man bands, generalist roles where simple things need to be done in lots of different areas. For deep specialist roles, where you have all the tacit knowledge in mind already, they're not nearly as useful.
> The tools are at their most useful when they're replacing Stack Overflow searches for common tasks.
Interesting, because I still find it faster/easier to do a quick DDG search, with a click on the first relevant SO post, and scrolling straight to the top answer. Asking the same question to ChatGPT means waiting for its output, which may be objectively a few ms faster but creates the perception of slowness ( = irritation). (That is: waiting for X seconds is more disruptive than taking several actions for 1.5X seconds.)
And then it's definitely going to be much more verbose, which is worse 99% of the time. SO is nicely folded - if I want a deeper explanation, I look at the answer's comments, or I can read other answers, or I can even read the full question. ChatGPT doesn't have the same guarantee of such a clear information hierarchy in its answers.
Yes, I mostly use ChatGPT for ffmpeg commands. Just give it the description along with the full file path and it will output the correct command 99% of the time. Bard can also do that (not as well though) but the format of the answer is so much worse, the resulting command is not put in a code block for example.
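A typical prompt-and-result pair, for flavor (the paths and codec choices here are made up):

```shell
# "extract the audio from /home/me/talk.mp4 as an mp3"
ffmpeg -i /home/me/talk.mp4 -vn -c:a libmp3lame -q:a 2 /home/me/talk.mp3

# "cut 30 seconds starting at 1:00, without re-encoding"
ffmpeg -ss 00:01:00 -i /home/me/talk.mp4 -t 30 -c copy /home/me/clip.mp4
```

ffmpeg's flag soup is a perfect fit for this: everyone needs it occasionally, almost nobody keeps it memorized.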
For me, they are more miss than hit. I was amazed in the initial days, but I have bounced back to searching on Google and reading Stack Overflow. It’s not just about finding a solution to your problem; looking at what other options are out there and comparing them mentally is a really good exercise. You learn a lot that way. The comments and non-accepted solutions are also invaluable.
This is exactly what I don't understand about it. I copied your question directly from your comment into google and the top-ten results were all sufficient to answer the question. The nature of common tasks is that the answers are also common, so where's the value in paying for a service that regurgitates them to you?
I find Copilot and ChatGPT constantly useful as a simple way of automating what would otherwise be tedious work. The most important thing in using them is that as you use them more, you get a sense of what they are going to be able to do and what they aren't, which lets you know when you can use them (and therefore use them with more confidence and more often).
Especially in refactoring or working with poor codebases. I do notice that with Copilot specifically, some people seem to get wildly different response times. For me it gives suggestions almost instantly, so as I'm typing I've got an ongoing suggestion I can accept at any time as it gets closer to what I intend, or ignore if not. A common, really useful case is when I have to set up a series of something (recently, in power plant code, I set up parameters for a certain kind of battery): when I then start writing the next kind of battery, it suggests the whole boilerplate for every case, and I can go back and fill in the details and exceptions.
As for ChatGPT, it's more useful at interpreting existing code. I often give it a giant function and say "I want you to split off this part and put it into its own thread, and get the output with a queue" or something like that. Again, the output can often need a little massaging, but I find it reduces the cognitive load involved in doing such a task. The trick is to be naturally skeptical of the output, but also to appreciate how much of your life is spent writing nonsense.
I've also found that learning to put what I want exactly into words is a particular skill that has helped my communication: making sure you know the name and function of every tool you use. If I had said that to myself before I started using these things, I'd have been incredulous, "of course I know the name of things." But no, it's a different thing to be talking about them often, especially when you have an idiot savant that takes everything you say too literally. You have to learn to be concise and precise.
People want to use LLM hammers like screwdrivers and are disappointed.
Also, since the quality of the context matters a lot for completion, Copilot will be way more useful on clean code with beautiful variable names and nice structural flow than on code-golf snippets. Same goes for ChatGPT: write good prose, and it will help you better.
Totally agree. I also pay for API access, and I am still trying to figure out how I can benefit from using it when it gives me blatantly incorrect commands or wrong answers.
I can see it guiding me in the right direction if I have a more generic question, but even then, if I am not specific enough, I fear I might go down a rabbit hole that ends nowhere.
It may be better than Google at finding something, but when I manage to find something on Google, the information is usually correct and not totally wrong; especially for a technical problem that someone wrote about somewhere on a forum regarding a specific device.
Finding the things I am looking for on Google allows me to judge the correctness of the data. When I find answers in what is clearly blog spam, I know that the info is probably wrong. With ChatGPT, I have no idea how it arrived at the conclusion it gives me.
A good example is if I want to know the max current rating for a chip. If I search google and find the manufacturers datasheet, I know the information is correct. If however the same information is on stack overflow or some forum I may not take it as a fact but more as a guidance.
ChatGPT would be 100 times more useful if it cited a source, like: "The max rating of chip ... is ... according to the manufacturer's datasheet dated ..., which you can find here ...". I guess Bing and Google are trying to do something like that.
> Totally agree. I also pay for API access, and I am still trying to figure out how I can benefit from using it when it gives me blatantly incorrect commands or wrong answers.
Basically, I knew the package could do what I wanted; I knew it was almost certainly in GPT-4's data set; I could do it myself, but it would involve searching through all the documentation and figuring out the right bits. It just did it for me.
Now there were a few minor bugs: It duplicated a buggy error message, and at some point later it called a "Subtree" method rather than a "Tree" method. But those were a lot easier to fix than writing the code from scratch.
Once I had a list of 27 book names I wanted put into "canonical order" and in the form of a golang array. I could have done it myself, but it would have been tedious; I just pasted the list into GPT-4, asked it for what I wanted, and out popped the result.
Here's another place it was helpful recently; I prompted:
"We're hiring a new community manager for the $PROJECT. I'd like you to help me develop some interview questions. What information do you need about the role to help you develop more useful questions?"
The questions it asked me about the role were really good; just answering the questions was quite a useful exercise, and I'm sure the resulting document will be a good intro to whomever we hire. I wouldn't say the resulting interview questions were brilliant, but they were solid, and I used a couple of them.
There are times when "the form" is there for a reason; if you want something re-written in a specific form, GPT-4 can do a good job. I wrote an email recommending something to somebody's managers in a different company in a different country; then I pasted it into GPT-4 and asked if it had any suggestions. It did a fair amount of rewording, of which I decided to take about half. In this case, the "polite form" is there to avoid offense, and it's exactly what I wanted.
I've also asked it to write some Tweets highlighting some specific aspects of an upcoming conference I'm planning. It did a good job coming up with the sort of punchy, tweet-length messages which seem to do really well.
Connecting it to the context of the article: My day job is basically arguing with people on the internet. :-). I do read and write all day every day; but I don't write messages where diplomacy is critical, nor do I write tweets. Perhaps I could get better at those, but I don't think it's worth the effort. Am I the worse off for that? Probably not in the way the author thinks; I don't think being diplomatically polished would change my thinking that much -- much less being able to write punchy tweets.
If I started relying on it for the core writing, however, I'd certainly be selling myself short.
How impressed or unimpressed you are by it is a direct reflection of yourself and what you are trying to achieve with it. I find zero value in it. Others are very impressed by it, but I'm not very impressed by what they hold up as an example of its output. Sometimes I can only laugh when it's horribly wrong, but they herald it as amazing.
The whole thing reminds me of the blockchain hype train. Still using and loving databases here for the foreseeable—still writing things by hand for the foreseeable, and loving every moment.
Here on HN, a few days ago, there was a post about Microsoft publishing a GitHub repo that contained a "table recognizer AI". Basically, you feed it PDFs that contain horrible scanned images of finance records, and it spits out Excel spreadsheets. For some reason, Microsoft had just "thrown this over the fence" and released it to the public for free. This, despite man-years of effort developing the thing. It was working, and everything.
I made a comment wondering if GPT-4 with the vision extension could solve the same problem. One of the devs who had worked on the aforementioned AI (for years!) mentioned that yes, yes it can.
Game over.
Those years of effort had just been replaced with a one-sentence English-language prompt that starts with "Please output a table from..."
If this doesn't blow your mind, then... I don't know how to help you understand just how much has changed, virtually overnight.
It might depend on your workflow. In many workflows (mine included), you tend to start out with a bunch of messy ideas and then slowly polish them into something usable. ChatGPT is a very powerful tool along the way here. It can help organize things, it can help rephrase your thoughts, and it can help solve minor issues along the way. Sometimes it can even do some fairly fancy things that previously could only be done by a human.
For now I'd treat it more like a sparring partner who can help you with your ideas and support you throughout your process, rather than a magical genie that can magically solve your problem for you. And in this manner I find it to be very useful indeed.
There is also a huge difference between code generation (be it Copilot or ChatGPT) and prose generation. With code you still have the compiler, which rejects the utter nonsense, and then you hopefully have tests to check whether what was generated makes sense.
With prose, it's all your manual work again to check. If your text is mundane, that might be trivial, but for anything a pg would write, I guess an AI is pretty useless in its current state.
It’s very difficult for me to wrap my head around people who aren’t blown away by both this and generative image AIs, and I tend to be on the skeptical side when it comes to fancy tech adoption.
Especially when folks are already out there LLM-generating SEO-spam, so even "well the answer also came up in a Google search" isn't a sign of accuracy.
Most people are terrible writers, and never read anything at all. So they have no way to judge whether the wordy, information-free crap GPT wrote is good or not. It's long, thus it's good.
Try coding in a language/style you are not familiar with.
It is not revolutionary for areas you specialize in, but if I try using it for something like setting up a web scraper with python (something I understand conceptually but have never done) it feels pretty amazing.
My one concern is that if I use it for JS I can immediately spot issues with it, but I am too ignorant of python to spot subtle problems.
It’s definitely awesome for that, but even on side projects in languages I’m familiar with, I’ve still found it pretty useful to have ChatGPT write the boring, annoying stuff, or things that would require a bit of referencing (like a library I’ve not completely memorized).
My experience is very similar. I am neither a great writer, nor a great coder, nor a grammar nazi, but I still have to make major corrections to the output of such tools.
I came to the sad conclusion that perhaps what I write and what I do is not as repetitive or as generic as what others do. Over the years, when I have had to do more repetition, I have gotten by with writing once well and ctrl-c / ctrl-v.
The differentiator is, I guess, expectations: I fully expect my own code to strike a certain balance of readability / optimization / correctness. In this context all but the most trivial code suggestions, sloppy as they usually come, instantly fall out of place.
On the other hand, there is a group of people (e.g. those spamming GitHub pull requests...) who expect the output to be kind of visually similar to some code. When they realize that the result also sometimes works and gives sane-looking results, they are absolutely blown away.
It fills the role of an intern. It can do stuff but the quality is inconsistent and the work needs to be verified.
Nonetheless it's better than Google at many things. I find it really good at giving paragraph-long introductions to things, instead of pages-long blogspam. It's also good at suggesting improvements when writing in a second language.
But I don't see what those AI startups see when they want to replace all sorts of skilled workers with this bullshit generator.
I paid for it too, but I'm slowly figuring out its utility.
The trick is to use it for what it's good at.
For example, I had this vague, fuzzy memory of a talk I saw online about a decade ago. It was about a database modelling paradigm. All I remembered was that the country it was developed in was in northern Europe, the methodology was based on splitting out columns as individual objects, and that it had funny little icons such as diamonds on the edges connecting things together in a graph.
Good luck finding that on Google. I had tried and failed. It was bugging me, because I needed something like it for my work, but for the life of me I couldn't dig up the reference.
5 minutes ago I copy pasted a 200 line react component and asked ChatGPT to explain why I was getting a certain bug. Although it didn't quite get it right, it allowed me to quickly identify the cause.
It only needs to save 10 minutes of my time a month for it to be worth the $20 - I would estimate it saves hours.
It is probably because you have standard / quality expectations and those are above the level of what GPT will give you. Keep your standards and quality expectations, I say.
I'm surprised about copilot. I find every day it saves me time by "nexting" whatever I'm doing. Whenever it's right about what I wanted to write, I save a bit of time.
My go-to example is if conditions. When I start with the first case, it almost always figures out the rest of the cases correctly.
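To illustrate the pattern with a made-up toy function: after I type the first branch of something like this, Copilot will usually propose the remaining branches on its own.

```shell
# classify a number's sign; writing the first branch is typically
# enough for Copilot to suggest the elif/else cases that follow
classify() {
  if [ "$1" -lt 0 ]; then
    echo "negative"
  elif [ "$1" -eq 0 ]; then
    echo "zero"
  else
    echo "positive"
  fi
}

classify -5   # prints "negative"
```

The structure is so predictable that the completion is almost always exactly what I would have typed.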
I think Copilot is amazing at saving time on boilerplate code.
But I can understand why in some workflows it may be a liability rather than a help. Similarly, ChatGPT sometimes hallucinates code with reserved words that don't exist in the language (but exist in others), or uses old code from a deprecated API / version of your framework.
In other words, sometimes the cleaning process can take as much time as just writing the code yourself.
For writing I find it makes an excellent rubber duck. I absolutely do not expect it to write long form prose for me, but I can explain what I am trying to say and talk to it about how I might present that information.
Tell it the columns of a certain table. Ask it for a GraphQL schema for CRUD on this table. Ask it for a resolver using X libraries. Ask it for unit tests using Y libraries.
I suppose you have to play with it enough to get a feeling of what kind of stuff you can trust it with. For most tasks it's pretty bad, and this is why, if you just try it for something you just thought about without any kind of filter or feedback, you're going to be disappointed with the results.
I agree that there is currently way too much hype. I mean GPT4 is certainly cool and probably going to be useful in some limited use cases, but it's not the new electricity and it's not going to take over the world.
I needed to read this. Seeing posts about having gpt “write apps from scratch” vs me trying to get it to spit out what I want has been tedious. I’m assuming what’s not written about is the hours of just how tedious this process currently is, and how much “from scratch” is your own human brain doing the prompt edits bc you have a very good idea of what coding step comes next. It’s a lot of work, and not faster than me just concentrating hard and doing it myself.
I use Copilot with nvim and C-x C-e at my terminal. I can use it to build out pipelines in the shell so easily it's crazy.
But in the end, it's just a tool. If it isn't working for you, it isn't working for you. For me, it fits because I prefer thinking in bigger building blocks.
So I prefer vim bindings because I used up teenage time learning it. And now I don't think in terms of characters but in terms of text-blocks.
I prefer writing Java in IntelliJ and Rust in CLion because I don't think in terms of characters there either, since I can use the built-in refactoring tools. E.g., I write functions inline and then extract them, rather than writing them as functions first and calling them.
Likewise, when I'm at the shell, I think in pipelines rather than pieces. Being an experienced shell user, I can usually write a parallel pipeline with conditionals and loops correctly the first time, with no computer.
But my Copilot flow improves on even that. I talk to the computer in components:
# get all private ip addresses in Alibaba region cn-zhangjiakou
ips=...prompted text...
# ssh to each and check last long line
...prompted text...
It's faster than me and this is great for r-i-search with fzf.
I wonder if it’s time to start seriously looking at why some people have completely embraced copilot/chat-gpt and others can not get it to work at all.
I’d love to see which of the following might have any correlations with reported usefulness of chatgpt/copilot:
1. How articulate a person is.
2. How much of an expert they are in what they are trying to do with the tool.
3. How successful they are at giving instructions to a human to perform the same task.
4. How much experience they have managing/coaching junior devs/interns/newbies.
5. How much experience someone has at decomposing a problem into smaller parts and identifying the simple parts and complicated parts
There are huge consequences to either answer to the question: “Is using an AI tool effectively a coachable skill?” I’m sure someone has already looked into this or is looking into it; if it turns out to be a coachable skill, and we can identify what its components are, there’s a lot of money to be made in bringing coursework to market.
No idea what you are doing there. I've been using ChatGPT Pro and Copilot for four weeks, and it literally blows my mind every single day.
Give it a Django model and ask it to create a data visualisation. It'll perform complex ORM queries with filters, annotations and aggregations, then load the result into a pandas dataframe, then create more columns in the dataframe, do more aggregations, then spit out matplotlib code to draw the chart. The time spent typing the prompt is time I would have needed even in the pre-GPT era, since I would have had to think about what I wanted to achieve anyway; now I'm typing it into ChatGPT, which makes the thinking process more structured as well!
I have built more tools in the last three weeks (for work and personal) than I have in the last three years combined. All thanks to OpenAI's brain constantly pushing me into the right direction.
I think ChatGPT is impressive, but it's revealed just how insular and vapid the thoughts of some HNers are. Hearing people's "endgame" for ChatGPT has been very depressing
I don’t know how you’re using it but it seems to step up its game if you challenge it more. You can also ask it to refrain from dumbing things down.
Copilot wasn’t useful for me because I want a back and forth conversation to straighten out concepts. With Copilot you don’t get that and it just sprays out little messes of code.
I use GPT4 like a rubber duck in the sense of “rubber duck debugging” and it can be surprisingly supple and creative in its solutions… many bad ones, but again, push back on those and you get to good ones.
Is it terrible at the content or the delivery of the content?
I find it to be bad at the former but rather useful for the latter.
If you ask ChatGPT to generate a document, it generates one that reads well but has terrible content, e.g. super broad or just plain contradictory. But given a not-so-well-written piece of writing, it can clean that up fairly well.
I find Copilot most useful when writing comments, tbh. Whenever I'm stuck contemplating how to phrase the rest of a comment, it comes up with a suggestion. It might be a terrible suggestion, but that's usually enough to get me unstuck. And every now and then it's exactly what you need, and it saves a bunch of typing :P
The point of it isn’t to write something like a book from scratch. Imagine if I told you “write a book about unicorns”; it would be trash.
The genius of it is that you could give it the notes of a meeting and get it to rewrite them in a different form. Nothing else can handle language like that.
Well, you made only one mistake in these 4 sentences you wrote, which is like a 400% improvement over the average messages I see outside HN, and an infinite improvement on some other people.
What effort have you put into learning useful prompts?
I only use the free 3.5+ ChatGPT and it's written some nice mapping code and some fun poems for me. Poem subjects ranged from rheology to texas hold'em.
I am trying to use ChatGPT for writing code, and that code is mediocre to bad. I really hope that in a couple of years LLMs will advance enough to support/refactor what they are generating now.
they are tools, you have to learn how to use them.
I use chatgpt everyday and am getting better and better at asking questions to get the answers I want.
I used copilot when it was free, and it was best at boilerplate, and sometimes not bad at auto completing the next few lines, I didn't think it was worth paying for at the time. I use Codeium now, that works in a similar way, probably not quite as good as copilot in that it doesn't understand as many languages, but still does a reasonable job with boilerplate type code.
It makes me feel like my grandmother who can’t use search. She can use the browser fine she just lacks any Google Fu and uses the most ridiculous search phrases.
Good old Peter Drucker formalized the personas in his advice to new employees to figure out how their bosses consumed information:
1 - The lawyer. Needs walls of text.
2 - The architect. Needs diagrams, visuals.
3 - The accountant. Needs numbers, spreadsheets.
4 - The actor. Needs a conversation, verbal.
It's a generalization, of course, but I have applied this pattern successfully. The core bit is that using the wrong method with any of the 4 personas flat out does not work.
Send a long article to an accountant and they're stumped. But send them a bunch of numbers and they hear the numbers sing. Etc.
Tangent on the Amazon way of write-ups: Bezos forced his persona on the whole company. I would hate it. I am an architect; a few diagrams are all I need.
"The lawyer" is probably a bad name, but I've definitely worked with this sort of person before. "The novelist" might be a better name.
One of my ex-bosses preferred long, well-written emails to just about any other form of communication. He was a really bright guy with a degree in English Literature who somehow stumbled his way into the software industry. When he wanted to start working with a new technology, he'd ingest a small pile of the driest textbooks on the subject over the course of a week or so, understand and retain all of it, then start running laps around the folks who had been working with the tech for years. He's still the only engineer I've ever met who preferred learning-by-reading over learning-by-doing.
Definitely agree with your approach to drafting, but I think you could be overreacting a bit here. Even the shortest contracts might seem like a "wall of text" to the lay person, and short paragraphs strung together are a wall of text.
At any rate, the preferred format in most legal contexts is still a "wall" of text (whether a big or small wall, as opposed to diagrams, spreadsheets, or conversations), which is what I think was the original point.
I hate when a long technical document doesn't use BLUF (bottom line up front), i.e. omits a summary at the start with the main topics.
There's too much info on the internet, and even in internal documentation: if I have to read half the thing to understand if it's relevant, I'm gonna spend most of my time on irrelevant stuff.
This is a problem with asking engineers to document without having a writing culture. I've seen it happen: once it's written down, that's it, it's documented, great work! But we should treat documentation more like code, with clear divisions of concerns, scopes, and even refactors when needed!
You are describing the "good lawyer". Judging by the absolute walls of text given in the terms of service and contracts of almost every big company out there, most lawyers are not "good lawyers".
I really love how even the lawyers don't want these gigantic walls of text they produce.
I totally get that the walls of text are needed to a point, and we don't have good options for producing succinct contracts. It just feels like an everybody-loses situation.
LAWYERS - needs walls of text
Jeff Bezos is known for his long-term thinking and his annual letters to shareholders are famous for their extensive detail and length.
Mark Zuckerberg is known for his long-term vision and often communicates his ideas in lengthy blog posts and public statements.
ACCOUNTANTS - needs numbers, spreadsheets
Sundar Pichai has an engineering background and tends to rely on data and analysis in decision-making.
Satya Nadella has an engineering background and has emphasized the importance of data-driven decision-making at Microsoft.
Warren Buffett is famous for his value investing approach, which emphasizes analyzing financial statements and data to identify undervalued companies.
ARCHITECTS - needs diagrams, visuals
Tim Cook is known for his attention to design and user experience, and Apple's keynotes and product launches often feature detailed visuals and product demonstrations.
Elon Musk frequently communicates his ideas through visually compelling presentations, and is known for his ambitious plans for space exploration and sustainable energy.
ACTORS - needs a conversation, verbal
Mark Cuban is known for his outgoing and talkative personality. He often engages in conversations with fans and customers on social media and in public appearances, and is very comfortable in front of the camera.
Richard Branson is known for his flamboyant and charismatic personality. He often appears in public events and media interviews, and is known for his storytelling and ability to captivate audiences with his words.
This reads like Plato's warning (through Socrates' words) 2,400 years ago that writing will make people forgetful:
"For this invention [writing] will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise." [1]
Plato's criticism is very true on an individual basis. What he didn't consider was the effect the written word had on a social scale in terms of being able to exchange ideas over physical and temporal distances.
It turns out that was a greater benefit, one that to some extent makes up for losing the exceptionally well-trained memory of scholars from a verbal tradition. Not that that last part isn't a loss, though, and his warning against false wisdom is more relevant than ever today, when it feels like half the people you talk to online are just googling up Wikipedia articles to base their worldview on, on the fly.
I think it's better to think of these as different, rather than superior... especially given our times' proliferation of mediums with differing cognitive implications.
Is Twitter > books? Is HN > podcasts? Every medium has its tendencies... positive, negative and complicated. We don't actually choose between them. It's useful to keep an eye on losses, not just gains.
Oral traditions, for example, naturally evolve and diversify. A fairy tale can literally spread over the whole world and develop thousands of versions, each one adapted and localised. Written stories also spread, but remain identical... even more so after printing.
This is, largely, the story of monotheist religions. Whether or not they had sacred scrolls in ancient times, the religions were mostly based in folk traditions, temple authority, prophets, judges... people. As they moved from oral tradition to authoritative canon, the religions became extremely stiff and unyielding.
It's also useful to point out that the historical verbal tradition trained a very specific type of memory recall, but that doesn't automatically make anyone wise.
Just because you memorized 10000 random articles on Wikipedia, doesn't mean you now have the wisdom to apply that in a particular circumstance.
Plato is specifically thinking of the social impact, but he's also not actually arguing against writing. After all, he's making his argument in writing. He's trying to push the reader to look into their souls for the truths that he thinks are already there. "Remembrance" here is specific, and so is "memory". It doesn't mean remembering facts and things that happened, but reaching back into one's mind beyond sensation to "remember" the eternal forms.
Socrates (for it is he who was opposed to writing) did not appreciate how the act of writing exercises our abilities of both memory and reasoning, at least as much as speaking and certainly more than listening. Someone who has written extensively on a topic is almost certainly well-versed in it, and more so than when they began (even crackpots are generally masters of their own flawed theories.)
I'm sorry, Plato wasn't thinking about the effects of the written word on social issues over time and space?!
The whole point of his writing it all down, the very literal foundation of ~2500 years of philosophy and the bedrock of Western (and other) civilizations, was so that he could have an effect on social issues over time. That was the whole point! He is super clear about this in his writings and throughout his life and travels.
> You have invented an elixir not of memory, but of reminding;
> What he didn't consider was the effect the written word had on a social scale in terms of being able to exchange ideas over physical and temporal distances.
Why would his words be less potent when applied to billions of people, as opposed to only a few? Are those billions not just as susceptible to fallacy as the elite few?
Plato's warning is true, actually. Since I started to write, I remember less and less about the things I have written down, because the brain replaces them with a pointer.
However, writing for thinking and writing for storage are two different things, and they activate different parts of the brain.
I'm an avid pen-and-paper user, and using a real pen on paper allows me to think much more deeply. It regulates thinking speed, so things don't escape me and my focus doesn't get derailed. It also changes how the brain works, so it can think better.
Also, blogging and writing documentation at work made me a much better thinker, because converting abstract concepts into meaningful sentences, with context and an easy-to-understand structure, also affects how clearly you communicate in other parts of your life.
Offloading this really robs you of the joys and power of meaningful communication on many mediums.
I know this is HN, but can we please stop drawing these vague parallels between basic programming concepts and the functioning of the human brain?
No, the brain does not "replace written information with a pointer". I cannot tell you with complete confidence whether writing stuff down improves recall or diminishes it, but I can tell you that you can construct computer analogies in support of either case.
I could posit that writing actually improves your memory, because the minute movements of your arm require more of your brain to compute, therefore creating more neural connections and possibilities of recall.
This is likely just as wrong, but demonstrates that thinking up some vague explanation is insufficient to make a point about a system as complex as the human brain.
Most of the comments here are assigning this as Plato's opinion.
I'd just like to point out that Plato very rarely wrote in his own voice so it's very hard to say if it's his views or not that are being expressed.
In this case however, this is almost certainly an expression of Socrates' views, not Plato's. Not only because it's in the voice of Socrates but also by what's transparent in their actions: Socrates didn't leave anything in writing and Plato left us arguably the most important written corpus of classic Greek philosophy.
Maybe he felt ambivalent about it, but he certainly thought there was a value in the writing.
Plato turned out right. I'm not extremely old and I still remember time before the Internet, when to know something, you had to look it up, which took ages, so you simply had to know it. Yourself.
We measured intelligence by the things you knew. We still have shows with questions based on this concept. This concept no longer makes sense in the modern world. But it used to.
Writing is a crutch, Internet even more so, and AI even more so. Eventually you can build the entire thinker out of those crutches and you need no humans anymore.
We measured intelligence by the number of memorized facts we'd internalized, but little did we know that collecting more facts doesn't make us more intelligent.
It's true. People can memorize 100,000s of words of text, and this ability was commonplace among learned people; the Pali canon is an example. I found myself remembering huge amounts, having just read a text, with no intentional effort to recall, after a month in a forest monastery and during the 4 months I was there. But afterwards, when exposed to the internet and sensory stuff again, I found my ability to recall texts had gone.
"What is the cause of people forgetting texts they had learned before?" I can't remember the precise answer but that was one such line in the texts.
Today if you say that you sound kinda crazy.
But like Socrates, there are still a few wandering mendicant, homeless-like orders out there (the Theravada forest tradition being an example).
> [W]hen exposed to the internet and sensory stuff again [I] found my ability to recall texts had gone.
The overstimulation could be part of it as well, couldn't it? There aren't a lot of other information-rich inputs in a monastery.
Holding a summer school in a forest or on an isolated resort seems like exploiting a similar idea, and it does work quite well (for a month or so, before isolation sets in).
It may be true that writing makes people worse at remembering unaided, but better if they can check their diary. I imagine ChatGPT-type things may likewise make people lazier at unaided thinking but better at AI-aided thinking.
I tried asking ChatGPT what it thought and it came back with
>... it's also important to note that technology is a tool, and its impact largely depends on how it's used. For instance, AI writing tools ... can be used to augment human creativity and productivity, rather than replace it. These tools can help writers brainstorm ideas, overcome writer's block, write more quickly, and even learn to write better by providing examples and suggestions.
>Moreover, the use of AI tools might also stimulate new forms of thinking. Just as calculators didn't eliminate the need for mathematical understanding but rather allowed for more complex problems to be tackled, AI writing tools could help individuals refine and expand their thinking, enabling them to tackle more complex writing tasks or express ideas more effectively.
Which is kind of interesting: I didn't think of writer's block on my own, for example. Also, lazy thinkers at the moment probably tend to just go along with what they see in the media. AI-aided thinking could improve on that.
I’ve argued that if ChatGPT means that motivated students spend less time doing expository writing, and more time:
* learning to ask good questions
* learning to fact check
* learning to edit the writing of others
Then they have truly learned the Socratic method. Which is a much more valuable tool than writing from scratch solipsistically: being able to actively engage with the writing of others.
Whether this applies depends on what situation you are supposedly replacing by writing.
If you replace intense IRL dialogue and discussion, with writing, this might be the case.
On the other hand, if you replace just letting your thoughts fade into the void with writing them down as coherent, thought-out ideas, I would argue that the writing has actually both triggered retrieval from memory (thus exercising the memory) and helped you critically assess your own thinking and make it clearer.
Thus, I think writing is an extremely helpful tool for processing your thoughts for most knowledge workers who are not already involved in constant dialogue with others (which is most of us I guess).
Actually, I always found this part much more interesting:
You know, Phaedrus, that is the strange thing about writing, which makes it truly correspond to painting. The painter’s products stand before us as though they were alive. But if you question them, they maintain a most majestic silence. It is the same with written words. They seem to talk to you as though they were intelligent, but if you ask them anything about what they say from a desire to be instructed they go on telling just the same thing forever.
Because that is no longer the case: an AI trained on a corpus of many books can in fact say something new.
Plato had it easy... When we were young, there was no memory. You had to work everything out yourself, each time. And it was a hard life having none of that fancy thing of memorizing but, frankly, life was better for it.
I have seen this happen before my eyes:
When I was young (~17 y.o.) I was able to remember all my family members' IDs, my friends' phone numbers, and a lot of data about them. Now I rely a lot on my smartphone to tell me their phone numbers, their birthdays, their addresses, etc. (Sadly, today I don't even know the phone number of my gf, out of pure laziness.) As I have become more and more lazy and let the tools do their job, I'm losing my own skills at it, "because I need my brain for bigger things - sure".
Maybe kinda pedantic, but this is not Plato/Socrates saying this here; it's Socrates telling Phaedrus what King Thamus said to Thoth. It's kinda important, I guess, because Socrates is ultimately somewhat ambivalent on this issue by the end of the dialogue.
Sure, the game of différance [1] can be played till the end of the universe. Socrates is well known as an aporia [2] lover, but at the end of the day even he had to take an executive decision, leave the agora, and go home to the nagging wife [3]. Ancient Greek misogyny aside, Xanthippe can be interpreted as Socrates' aporia solver, in the same manner his 'daemon' [4] would only tell him "No", Xanthippe would tell him "Yes".
Especially for this quote, I like attributing it to Socrates as Plato did. We obviously don't know the details and nuances. However, I feel it captures the (perhaps inexact) tension between Socrates' old, mostly oral tradition of philosophy and Plato's newer tradition of written philosophy.
It's ancient Marshall McLuhan. Oral philosophy and written philosophy create different philosophies, not just a different medium for the same philosophy.
I can only begin to wonder what his opinion on Google search would be, since we (developers) always practice "don't memorize what you can easily search", and there's a sizeable number of people claiming that memorization shouldn't even be a concern most of the time, and that's advice given to people who are new in the field (like myself).
I think understanding is more important than both. Even if you're somehow cut off from the internet, you can still re-derive things if only you understand them. And understanding also allows you to solve novel problems.
Do people really practice this? Like I usually physically unplug my computer from the network as I develop as it's such a distraction to have emails and stuff popping up. Doesn't really feel like a limitation to not be able to look stuff up.
> For this invention [writing] will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding;
That is correct. I mean, some of that is exactly why I write at least 5000 words a day of journal notes while working. Because I want to offload everything from memory to a written form that is indexed.
> and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant...
This part is where we veer into predicting how it will alter people, and it is both right and wrong. I think the education system has shown us it's possible to have a great memory and remember many facts and still be ignorant. At the same time, there's so much material that can be consumed that people have surface knowledge of many things, and once you dig into a conversation with them on a topic, the ignorance pops up and it turns out they don't know much about it. I know this. This has been me on many occasions, and is still me on some occasions.
>... and hard to get along with, since they are not wise, but only appear wise.
Again, there's some truth to this. I think about the times when I've jumped into a conversation about politics with some half-baked knowledge that I read somewhere but failed to understand or recall correctly, and I think I must have looked a fool at those times.
But then again, I've course corrected here and there and I've spent time writing and sharing thoughts with others where I've used previous writings to pull up facts and my own synthesis, and I feel like I've been a better participant at times like that. In that way, writing has helped me steer away from ignorance.
I can't comment further though on the quote because I've never read the rest of the context so I may be missing out on some stuff.
I think Paul is on to something here though. I can't count how many thoughts I've had that I thought were clever but when I started to write the thoughts down and organize it on my own, I discovered for myself that either I didn't have enough evidence to prove myself entirely or that I was just flat out wrong. Writing is a form of thinking in my opinion. And if we don't write, we do lose that side of thinking too. I do appreciate though that Paul doesn't go further into predicting the consequences as Plato's warning does here. But I'll be willing to bet that Paul is right that there will be some negative consequences for sure.
Socrates' main argument is that the word itself cannot teach: teaching takes two parties, and with writing those are often a student and a text instead of a student and a teacher. If Socrates sat down and explained something to a student, he could impart not only the written word but other examples from their own lives. But the word can be uprooted from its context and read by many, who will learn just the word instead of the meaning of the whole of it. There's even a whole class of philosophers who argue _inside_ the definitions of words.
Books (largely) aren't written for memorisation these days, with possible exceptions of some poetry, or children's books.
Classic oral literary traditions are based on rhyme, meter, repetition, simile, metaphor, and references to cultural touchstones and themes, for the most part, all of which strongly assist with memorisation. These are not only necessary for a literature which isn't written down and instead is passed on through generations orally, but also quite likely represents a survivorship bias in that works which didn't exhibit these patterns didn't survive. And of course the versions we know are the end of a long pipeline of transmission (or the world's largest and oldest game of telephone), captured in writing and then passed on to us as (among) the first written traditions.
I've committed a few poems and homilies to memory, one example for nearly thirty years now. I do refresh that recollection from time to time (and realised I was dropping a stanza consistently). It's an interesting counterexample.
Then again, I turned up a piece earlier today I'd written myself about eight years ago and had all but no recollection of.
The point was that with Plato (but also ~2,000 years before his time, with the Sumerians, who probably invented writing) we were already discovering that we don't have to bring the exterior into an inside which we call "self", memory, we can actually leverage stigmergically [1] the environment to enhance and extend the "self", i.e. by writing, leaving a mark; the Greek word grapho γράφω [2] meaning literally to carve.
Not sure if LLMs will enhance our current "self" by themselves, but once we get a chip or two inside our brains, once we start controlling the cells to regenerate or grow specific limbs [3], we will surely consider our present "selves" as tiny as we consider the "self" of one of our cousins, the chimpanzee, in a zoo today. The journey and expansion of the "self" is merely starting.
[1] "the trace left in the environment by an individual action stimulates the performance of a succeeding action by the same or different agent", https://en.wikipedia.org/wiki/Stigmergy
There were the same warnings about the printing press, both books and newspapers; "kids don't play anymore, they read books all day", or "people don't share news anymore, they all read their own newspapers".
I'm not going to deny it's a change, but it's too early to decide if it's a bad change. Every generation has its own big shift in the past 100-200 years, and every generation complains about the change.
Eh, the title here is one possible framing of a series of short tweets expressing his concerns about the potential impact of ChatGPT.
I homeschooled. My sons had trouble learning to write and one really hated writing.
I spent several years having him verbally explain material to me, having him learn grammar and vocabulary and other essentials from entertaining videos and graphic books and such.
Thinking and writing don't have to be so closely intertwined as they are in Paul Graham's mind. He's a writer and writing helps him think.
Some people think while taking long walks, working out at the gym, drawing their ideas, etc.
The kind of thinking Paul Graham does may become less common. I have no reason to believe that's some kind of holy grail and thinking via other means is inherently less valuable.
I say this as a blogger who may be even more guilty than Paul Graham of using writing to help me think.
When people say "writing is thinking" what they're really talking about is not so much the "writing" or even "thinking" per se, but rather the "crystallizing your thoughts in to something concrete". Certainly my own thinking tends to be somewhat vague until I actually write it down, at which point I tend to get less sure of things because "oh, but what about this?" start to turn up, or I spot fallacies in my argument (the better reason to learn about fallacies is to spot them in your own arguments, not to bludgeon other people's arguments with).
In my experience writing is a very good way to do that, but if verbally works better for some then that's fine too.
In that sense I do agree with Graham that "I have some thoughts about X and I'll let ChatGPT take care of the details" is perhaps not entirely ideal.
All of this also applies to code by the way, in more or less the same way.
In addition to what you mentioned, I also often come up with new and interesting ideas while writing on a topic. I guess when the mind is tasked with crystallizing the ideas, it forces the mind to systematically sort out the clutter, and allows novel ideas to form.
I've lost count of how many times I've started writing on a topic and then scrapped it altogether in favor of a better idea on a tangentially related thing.
The key is probably explaining/creating something in a way that doesn't have a lot of latitude for just glossing over the details. I'm not sure long-form writing is especially better than giving a presentation to an audience or working through a problem/creating an artifact of some sort/etc. The key is that you can't just read something and go "Yeah yeah I totally get it." Though I suspect some outputs can be easier to handwave through than others--I know I've done it to some degree on presentations. ("This aspect is complicated and we don't have time to get into it today.")
You make me think of the anecdote about the professor who had half the class graded for making one amazingly beautiful piece of pottery and the other half graded by total weight of the work they created.
In the end, those graded by weight had nicer pieces as it involved more practice with producing something.
I'm sorry but your job as a homeschooler is to teach your sons to write. That's not a skill you can enter adulthood without, and expect to achieve success. Liking it, not liking it... none of that matters when you're asked to document a process or argue in favor of an idea.
Imagine the value a professional teacher would have provided for them...
After five years of not being forced to write while learning grammar etc some other way, he discovered fan fiction and began writing thousands of words per day.
But he didn't learn to write by writing because he had dysgraphia and emotional baggage about writing from public school when I pulled him out.
His blog writing gets more traffic than mine. He's a writer by trade, though he doesn't yet make much money at it.
> Most people hate to write as much as they hate math. Way more than admit it
I fucking love writing. Most people I know also fucking love writing, and I do mean love. Some do it for pleasure, some do it as a career, others do it as a means to an end in their career, and they haven't found value in LLMs.
There's writing and writing. If all you're doing is rewording what someone else has already done like a high school essay or a competing product description for what's essentially the same product? An LLM will probably suffice. Original and novel? Human beings all the way.
LLMs highlight that a lot of "knowledge work" is intrinsically bullshit, i.e., a high school essay doesn't actually assess somebody's understanding of a topic, just their ability to hit specific keywords and points, hence even essay mills can churn out an "original" essay for a kid if they're too xyz to do it themselves.
Whilst I don't care for LLMs, I do support a huge shakeup of the existing educational assessment paradigm as it is self-evidently not fit for purpose, IMO.
"Indeed, I find an intoxicating exuberance in writing, a passion that's as profound as it is poignant. It's akin to dancing with words, a ballet of ideas and sentiments, pirouetting across the stage of one's imagination. There's a rhythm, a cadence to it, a harmonious symphony of phrases and thoughts that, when assembled with care and cunning, can stir the soul and ignite the mind. I don't just love writing, I am fervently, ardently, unabashedly in love with it. The artistry of it, the texture and tone of each word, each sentence; the way they flow, ebb and surge like the ocean's tide. There's a divine repetition in the crafting, a reiteration of thoughts, feelings, experiences, each time subtly different, each time uniquely enlightening. Writing is my symphony, my ballet, my ocean, my universe. It is in the weaving of words, the crafting of tales, the elucidation of the esoteric, that I truly come alive. It's not merely that I love writing; writing, my dear friend, is the very essence of my love."
Prompt: Using profanity and in the style of Allen Ginsberg, write an introductory paragraph in narrative style about why you love writing.
ChatGPT:
Holy shit, I fucking love this mad dance of letters, this wild waltz of words that paint the goddamn cosmos on a blank page! The rhapsodic rhythm, the sonorous symphony of syntax -- it's like jazz on paper, like Kerouac and Ginsberg on a bender in the bowels of the Beat Generation, spewing poetic profanity into the yawning abyss of the universe. It's the goddamn primal scream of the soul, the raw, unfiltered fuck-you to silence and nothingness. The alphabetic alchemy, where ink and imagination bleed together to birth whole new worlds, where lines live and breathe and fuck and fight and love. It's a heady high, an intoxicating inebriation, a goddamn narcotic nectar. It's the literary lust that throbs in the very marrow of my being, the fevered flame that flickers in the dark and dances to the tune of the typewriter's tap-tap-tapping. And oh, what a tune it is -- as chaotic as Coltrane, as radical as Rimbaud, as transcendent as Thoreau. It's why I fucking love writing, this goddamn glorious game of god-creation and galaxy-making. It's the ultimate trip, man, the ultimate fucking trip.
Sarcastic critic of the following initial sentence:
> I fucking love writing. Most people I know also fucking love writing, and I do mean love.
Well, well, well. If it isn't the unabashed, unfiltered bard of the 21st century. Look at the sheer depth of passion here, folks! We've got 'fucking love' not once, but twice, because why not? Repetition is the spice of life, after all. And such eloquent use of the F-word, too. Truly, Shakespeare would be jealous.
Also, let's take a moment to appreciate the vague and incredibly relatable 'most people I know.' Because who doesn't know a handful of people who also 'fucking love' writing? Clearly, this is the universal experience we've all been waiting for.
And then, the pièce de résistance, 'and I do mean love.' As if the preceding profanity didn't quite hammer home the sentiment. Thanks for the clarification. We were worried you only 'kinda liked' writing. A masterpiece of modern literature, truly.
Great. Now show us what initial sentence you'd have used, same style. Just one brief impactful sentence with the same level of familiarity.
"Writing is my jam, and I swear, everybody I know is just as stoked about it – and I mean stoked."
Do people really hate writing? If so, the comment sections of HN and Reddit, the timelines of Facebook and Twitter, the whole of Medium... would be so empty.
There are a lot of people not on Reddit or HN or Medium
(Also a lot of people whose terse and incoherent interactions and tendency to simply repost on Facebook and Twitter suggest that writing is a regrettably necessary means rather than an end.)
People really struggle writing simple essays. It takes a lot of work externalizing words from your own context inside your mind so that the sentences can be understood by themselves. It’s one of the reasons we used to teach “word problems” in our mathematics classes alongside composition, but now we just want to compute arbitrary answers.
I can write thousands of words if they're on a topic I'm thinking intensely about and need to write it down to sort out my thoughts.
But if anyone just throws me a random topic and ask me to write 2000 words on it, I'd tell them to go ___ yourself. (Yes, I hated writing essays in school...)
Writing is thinking. You do it as much for yourself as you do your readers and having to put effort into it is how you realize gains. Like strength training.
Use tools, of course, but don't let them do the hard part for you. It'd be like buying a robot to go to the gym and move weights up and down — sorta misses the point.
(Before I get the usual sorts of replies: I'm not talking about writing boilerplate or marketing copy or canned responses to e-mails or just generating reams of prose so you can slap ads on it and make money or whatever. I'm talking about writing with the purpose of getting your personal thoughts and ideas out into the world.)
I don't think marketing copy deserves to be grouped with 'boilerplate' and 'canned responses'.
It takes a lot of brain power to create marketing copy that isn't boilerplate / canned / generic.
Good marketing copy persuades people to take a specific action, and figuring out what this takes just as much critical thinking, iteration, and getting your reps in as any other compelling content.
Which is why most marketing copy feels bland / generic / boilerplate / canned.
You don't even need manic episodes to understand this (and if you went through one, I'm terribly sorry; they can be such cruel events, which people disregard). Music is a good example: you can be profoundly moved by a song in an entirely different language you don't understand, not because of writing, but because of what the person conveys through sound. Instruments are another example of this.
What our thoughts are made up of is visuals (shapes, lights), feelings, and sounds, not words. However, words are convenient shortcuts for communication, so we use them a lot. This is part of why we continue trying to understand animals. It's also why multilingual people will have these jarring stops in speaking or writing, struggling to connect those visuals/feelings/sounds in their mind to the expression.
It's literally the reason that "love" is so elusive to us. Despite numerous people trying their best over centuries to express it in writing, music, art, etc. it's still this profound feeling we feel has endless potential. We know it's just brain signals, and yet it continues to be deeply embedded in our thoughts.
> If men learn this, it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks. What you have discovered is a recipe not for memory, but for reminder. And it is no true wisdom that you offer your disciples, but only its semblance, for by telling them of many things without teaching them you will make them seem to know much, while for the most part they know nothing, and as men filled, not with wisdom, but with the conceit of wisdom, they will be a burden to their fellows.
- Plato
Although this is different, because it is still written, just not by oneself, which probably makes the effect of relying on it and forgetting even worse.
Plato here is often brought out as a counterexample, to show why saying a technology will damage cognition is obviously wrong, but I don't think it's that simple. Writing and the proliferation of knowledge are an obvious good, but I think Plato is otherwise correct here. Societies without reading and writing seem to have people who are able to memorize much larger amounts of information; the early Muslims at first memorized the Koran, which is quite a feat, before writing it down. A more accurate view is that there are trade-offs.
I guess I disagree with Plato, unless he's talking about how I no longer know anyone's phone numbers off the top of my head.
I think it's fairly well established that when you think about a topic well enough to generate ideas and put them on the page that you also cement those ideas in your mind. If you want to remember something you read (or see or do), write about it. In your own words.
But what if AI will write a more compelling paragraph than you ever could, even if you devoted your life to writing? So it might only be people who use AI who get their personal thoughts and ideas out into the world.
"Use tools, of course, but don't let them do the hard part for you."
I didn't say I was opposed to using the AI tools. I'm opposed to using them to think for you — to do the "hard part." If they help with a turn of phrase or vocabulary or structure or whatever, then they're just a more advanced version of tools people already use for those things.
This is so not true... Throughout the Middle Ages most people could not write, but there were still lawyers (yes, lawyers who could not write), doctors, scientists, poets, merchants, and mighty, clever kings (yes, there were kings who could not write).
The Middle Ages gave us crucial inventions that built European civilization (mills, windmills, agricultural innovations like the three-field system, shipbuilding technology, and so on). We also got a well-developed philosophy, especially the scientific methods of reasoning and logic that were the cornerstone of all later scientific development.
How was this possible without writing?
Life in the Middle Ages was simply not writing- or visually oriented; it was based on speaking and listening, and people in no way lost the ability to think because of that. For instance, a doctoral thesis was presented as a lecture followed by a discussion.
When people wanted to "sign" something, they exchanged or gave a symbolic item (like a ring), and that was all. That's why various saints' relics were so popular then: they symbolized people's devotion. Or people simply signed their names, as most could write their own name even though they could not write as we understand it today. Despite many beliefs to the contrary, people in the Middle Ages were quite educated, including the "Third Estate," that is, the peasantry. Typically they went to the same schools as rich people's kids; the only difference was that the rich had to pay for the school.
I really recommend "Lumière du Moyen Age" ("Those Terrible Middle Ages!: Debunking the Myths") by Régine Pernoud, it nicely describes a really rich, open and dynamic culture that didn't really need writing.
"Middle Age" Europe is poorly named. It's based on an idea that ancient europe is Greece and Rome. Then, dark/middle ages. Then Renaissance and modernity.
The problem is geography. France, Britain, Germany, and such were never part of the ancient world. Rome had colonies there, but these were (relatively) short-lived outposts of civilisation. Civilisation was never the dominant culture in non-Mediterranean "Europe."
In the East, civilisation existed for thousands of years before Rome. They always had roads, kingdoms, bureaucracy, restaurants, banks and such. Even in dark ages, after collapse... these were collapsed civilisations. They didn't revert to tribal, oral societies.
The European Renaissance (excluding Italy) was never a renaissance. It was the actual birth of civilisation in Europe; it was the first time writing became a major thing.
I don't think it really matters if you do your PhD orally in a lecture or write a paper. The essence of it remains the same: you have to make your thinking concrete and form an argument that makes sense and is coherent. The exact form this takes isn't all that important, what is important is that you're the one doing the work instead of ChatGPT.
My other comment almost perfectly fits here too, so I won't repeat it in full[1], but only by doing the work yourself will you spot flaws and errors in your arguments; ChatGPT can't do that for you.
The moyen age had plenty of writing, it's just that most of it was religious writing: hymns and saints' and martyrs' lives, and treatises on the soul and so on. You have most likely seen the beautiful illustrated manuscripts produced by the monks in monasteries all over Europe.
The other thing that I have noticed is that a great deal of the art from the time is wonky and out of proportion. I was recently at the Musée Cluny in Paris and it was really striking how much of the art in there is pretty badly made, as if by very inept creators. La Dame à la licorne stands out in part because it's not that badly made (though mainly because it's an epic multi-piece tapestry that you'll never see the like of anywhere else). I also remember a statue of a saint whose creator was named (I don't remember the name but it was something like "Le maitre de St. Joseph" or perhaps he had the name of a city) and that was obviously head and shoulders above everything else around it in terms of craftsmanship, which had an almost modern quality.
Then I went to the Louvre and I noticed again how the art before and after the middle ages very clearly lacks this shoddy and haphazard quality of European middle ages art. The Greeks in particular practically worshipped proportion, and the Italian masters would not be caught dead painting an unbalanced figure, by the looks of it. But also Egyptians and Assyro-Babylonians, and all the earlier cultures seemed to take well-crafted art pretty seriously (the bad artists were probably flayed alive early on ... brrr).
So something definitely happened in the Middle Ages that was not good for art and culture. Whether it had any impact on everyday life, on people's ability to produce food, build houses, etc., I don't know. Plenty of architecture has certainly survived from that time, so clearly not everything deteriorated.
Btw, I think it was the same in the Eastern middle ages too. You can see slap-dash ecclesiastical icons in Byzantine churches of the time. And when you visit Ravenna, and see the mosaics in St Vitale (which are from the very early middle ages), it is clear that something was lost in the ensuing years.
Perhaps we're about to sink into a new "dark age" (regardless of how "dark" was the original "dark ages"). Maybe we'll get lucky this time too, and the light will not be forever extinguished.
I think it's the other way around. With certain thought techniques, you will find yourself writing. I'm sympathetic towards having a mode of thought that allows you to be a prolific writer and then bemoaning the lonely path you're on as new technologies allow people to completely bypass what you are good at.
People simply do not understand what they are missing.
However, people who don't write weren't going to be writing themselves anyway. It's similar with math. Everyone talks about how important it is for children to learn math in school. And everyone recoils with primal terror if I so much as mention the most minor of mathematical constructs.
Be joyous that you are good at something. But I don't think you should look down upon those who find a way to survive by not being good at your thing. Even if things would objectively be better if everyone was a mathematician, scribe, and poet, this is probably not a realistic goal.
This is precisely what I came in here to say. People who say writing and thinking ability are hand in hand are typically good writers who were told they’re good thinkers because their writing ability set them apart. Similarly, the ability to do rote math skills once set people apart as geniuses, literal computers in the classic sense of the word. I see these tools, as applied to this task, as an equalizer of sorts that lets people who have great ideas express them in a compelling way.
In my own life I was terrible at arithmetic, and I grew up at a time when memorizing the multiplication tables was a key life skill. I couldn't do it to save my life. I strongly believed I was bad at math and struggled through on sheer willpower and mediocre grades, until I went back to college, a decade after I had failed calc 1 and dropped out. This time they allowed calculators, graphing ones at that. I got top marks in every math class from then on and graduated summa cum laude from a top engineering and computer science school.
How many people have been marginalized because they couldn’t express their thoughts as clearly and compellingly as a better writer? Maybe a leveling of the fields away from mechanical abilities and towards completeness of thought isn’t a bad thing? Will people lean on it and lose what has been a core skill? Some will no doubt. But will it hurt them? Only as much as it hurts to use a calculator to do your menial math.
“People simply do not understand what they are missing.”
I think you can say this about people who haven’t learned to write well. ChatGPT seems like a great thing because they can get it to excrete a content-like substance that reads well, and call it writing.
Prompting ChatGPT to produce a text lets you move faster when you have an idea and want to cut to the finish.
The problem is that part of writing is starting with an idea, thinking about how to express it, and then realizing that your initial idea might be flawed or need work.
Sure - ChatGPT seems fine for spitting out web copy or something where the role of the writer isn’t to examine an idea, but just to go from start to done as quickly as possible.
But I think he’s talking about the types of writing that involve wrestling with an idea and, for example, trying to persuade others. ChatGPT will happily spit out copy in that form. But it won’t trip over a paragraph and say “hey, shit, now that I write this out, I think it’s wrong or not well-expressed.”
People bemoan co-workers who coast by on people skills and bullshit over people who have expertise and do the work. ChatGPT is likely to be similar - helping people who can’t write (express or work with ideas effectively) sound authoritative and generate more content that they don’t really even understand.
I find GPT very useful in that I can quickly become mediocre in any domain. Need a legal filing? I can get a good-enough legal filing. Need some code in a framework I've never used? I can get it instantly. Need something written in the language of some esoteric field of science? I can get that instantly from a bullet point list.
It doesn't do the hard parts well: What's being written, and how it's structured. That's where I fit back in.
I'm not sure about "17 year old me struggling to write a short paper full of unoriginal ideas," but it sure could help me write that paper titled "Bullshit, bullshit, etc., etc., bullshit" that I always wanted an excuse to turn in for some college course or another. ;)
My worry is the innovator's dilemma. Given a few generations (software generations, not human) of continuing exponential improvement in capability of LLMs (or whatever replaces them), I worry that those who have spent much more time learning to co-work with a GPT-alike (or replacement) will move to the fore.
Jealous. What's your job?
An example I had last night: how do you split a pdf into separate pages on the command line, and how do you put them back together again (I replaced one of the pages with a scanned signed version).
This is a basic task, a beginner's task in the world of pdf munging, but I didn't know the answer off the top of my head. ChatGPT gave me command lines straight away.
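For reference, one common way to do this uses the poppler-utils commands pdfseparate and pdfunite; the filenames and the three-page count here are placeholders, not from the original exchange:

```shell
# assumes poppler-utils is installed and input.pdf has 3 pages
pdfseparate input.pdf page-%d.pdf           # writes page-1.pdf ... page-3.pdf
cp signed-page-2.pdf page-2.pdf             # swap in the scanned, signed page
pdfunite page-1.pdf page-2.pdf page-3.pdf signed-output.pdf
```

pdftk and qpdf can do the same job; listing the pages explicitly avoids shell globbing sorting page-10 before page-2.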
That's what they're good for right now. Beginner / common tasks in areas you're not familiar with.
And a further implication, they're great for one man bands, generalist roles where simple things need to be done in lots of different areas. For specialist deep roles where you have all the tacit knowledge in mind already, not nearly as useful.
Interesting, because I still find it faster/easier to do a quick DDG search, with a click on the first relevant SO post, and scrolling straight to the top answer. Asking the same question to ChatGPT means waiting for its output, which may be objectively a few ms faster but creates the perception of slowness ( = irritation). (That is: waiting for X seconds is more disruptive than taking several actions for 1.5X seconds.)
And then it's definitely going to be much more verbose, which is worse 99% of the time. SO is nicely folded - if I want a deeper explanation, I look at the answer's comments, or I can read other answers, or I can even read the full question. ChatGPT doesn't have the same guarantee of such a clear information hierarchy in its answers.
Especially in refactoring or working with poor codebases. I do notice that with Copilot specifically, some people seem to get wildly different response times. For me it gives suggestions almost instantly, so as I'm typing I've got an ongoing suggestion I can accept at any time as it gets closer to what I intend, or ignore if not. A really useful case is when I have to set up a series of something (recently, in power-plant code, I set up parameters for a certain kind of battery): when I start writing the same thing for each other kind of battery, it suggests the whole boilerplate for every case, and I can go back and fill in the details and exceptions.
As for ChatGPT, it's more useful for interpreting existing code. I often give it a giant function and say "I want you to split off this part and put it into its own thread, get the output with a queue" or something like that. Again, the output can often need a little bit of massaging, but I find it reduces the cognitive load involved in doing such a task. The trick is to be naturally skeptical of the output but appreciate how much of your life is spent writing nonsense.
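That kind of refactor (split a blocking chunk into its own thread, collect the result through a queue) comes out roughly like this; the function names are invented for illustration:

```python
import queue
import threading

def slow_part(data):
    # stand-in for the blocking section that got split out
    return sum(x * x for x in data)

def run_in_thread(data):
    """Run slow_part in its own thread; hand the result back via a queue."""
    out = queue.Queue()
    worker = threading.Thread(target=lambda: out.put(slow_part(data)))
    worker.start()
    # ... the caller is free to do other work here ...
    worker.join()
    return out.get()
```

For example, `run_in_thread([1, 2, 3])` returns 14, same as calling `slow_part` directly, but without tying up the calling thread while it runs.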
I've also found that learning to put what I want exactly into words is a particular skill that has helped my communication: making sure you know the name and function of every tool you use. If I had said that to myself before I started using these things, I'd have been incredulous ("of course I know the name of things"), but no, it's a different thing to be talking about them often, especially when you have an idiot savant that takes everything you say too literally. You have to learn to be concise and precise.
I can't tell any more what is satire and what is not.
People want to use LLM hammers as screwdrivers and are disappointed.
Also, since the quality of the context matters a lot for completion, Copilot will be way more useful on clean code with beautiful variable names and nice structural flow than on code-golf snippets. Same goes for ChatGPT: write good prose, and it will help you better.
I can see it guiding me in the right direction if I have a more generic question, but even then, if I am not specific enough I fear I might go down a rabbit hole that ends nowhere.
It may be better than Google at finding some things, but when I manage to find something via Google the information is usually correct and not totally wrong, especially for a technical problem that someone wrote about on a forum regarding a specific device.
Finding things myself on Google lets me judge the correctness of the data. When I find answers in what is clearly blog spam, I know the info is probably wrong. With ChatGPT, I have no idea how it arrived at the conclusion it gives me.
A good example is if I want to know the max current rating for a chip. If I search google and find the manufacturers datasheet, I know the information is correct. If however the same information is on stack overflow or some forum I may not take it as a fact but more as a guidance.
ChatGPT would be 100 times more useful if it cited a source, like "The max rating of chip ... is ... according to the manufacturer's datasheet dated ... which you can find here ...". I guess Bing and Google are trying to do something like that.
So here's one that saved me a bunch of time:
https://gitlab.com/-/snippets/2535443
Basically, I knew the package could do what I wanted; I knew it was almost certainly in GPT-4's data set; I could do it myself, but it would involve searching through all the documentation and figuring out the right bits. It just did it for me.
Now there were a few minor bugs: It duplicated a buggy error message, and at some point later it called a "Subtree" method rather than a "Tree" method. But those were a lot easier to fix than writing the code from scratch.
Once I had a list of 27 book names I wanted put into "canonical order" and in the form of a golang array. I could have done it myself but it would have been tedious; I just pasted the list into GPT-4, asked it what I wanted, and out popped the result.
Here's another place it was helpful recently; I prompted:
"We're hiring a new community manager for the $PROJECT. I'd like you to help me develop some interview questions. What information to you need about the role to help you develop more useful questions?"
The questions it asked me about the role were really good; just answering the questions was quite a useful exercise, and I'm sure the resulting document will be a good intro to whomever we hire. I wouldn't say the resulting interview questions were brilliant, but they were solid, and I used a couple of them.
There are times when "the form" is there for a reason; if you want something re-written in a specific form, GPT-4 can do a good job. I wrote an email recommending something to somebody's managers in a different company in a different country; then I pasted it into GPT-4 and asked if it had any suggestions. It did a fair amount of rewording, of which I decided to take about half. In this case, the "polite form" is there to avoid offense, and it's exactly what I wanted.
I've also asked it to write some Tweets highlighting some specific aspects of an upcoming conference I'm planning. It did a good job coming up with the sort of punchy, tweet-length messages which seem to do really well.
Connecting it to the context of the article: My day job is basically arguing with people on the internet. :-). I do read and write all day every day; but I don't write messages where diplomacy is critical, nor do I write tweets. Perhaps I could get better at those, but I don't think it's worth the effort. Am I the worse off for that? Probably not in the way the author thinks; I don't think being diplomatically polished would change my thinking that much -- much less being able to write punchy tweets.
If I started relying on it for the core writing, however, I'd certainly be selling myself short.
The whole thing reminds me of the blockchain hype train. Still using and loving databases here for the foreseeable—still writing things by hand for the foreseeable, and loving every moment.
Here on HN, a few days ago, there was a post about Microsoft publishing a GitHub repo that contained a "table recognizer AI". Basically, you feed it PDFs that contain horrible scanned images of finance records, and it spits out Excel spreadsheets. For some reason, Microsoft had just "thrown this over the fence" and released it to the public for free. This, despite man-years of effort developing the thing. It was working, and everything.
I made a comment wondering if Chat GPT 4 with the vision extension could solve the same problem. One of the devs that had worked on the aforementioned AI (for years!) mentioned that yes, yes it can.
Game over.
Those years of effort had just been replaced with a one-sentence English-language prompt that starts with "Please output a table from..."
If this doesn't blow your mind, then... I don't know how to help you understand just how much has changed, virtually overnight.
For now I'd treat it more like a sparring partner who can help you with your ideas and support you throughout your process, rather than a magical genie that can solve your problem for you. And in this manner I find it to be very useful indeed.
[0] https://github.com/paperswithcode/axcell
With prose it's all your manual work again to check. If your text is mundane it might be trivial but for anything a pg would write I guess an AI is pretty useless in its current state.
1. Use it when you know how to do something but don't want to.
2. Use it when you don't know something, and don't know how to find the answer.
3. Never trust. Verify.
Clearly, this means the examples that impress others will often not impress you.
Especially when folks are already out there LLM-generating SEO-spam, so even "well the answer also came up in a Google search" isn't a sign of accuracy.
Most people are terrible writers, and never read anything at all. So they have no way to judge if the wordy informationless crap gpt wrote is good or not. It's long, thus is good.
It's also somewhat structured, which puts it at above average for writing tasks.
It is not revolutionary for areas you specialize in, but if I try using it for something like setting up a web scraper with python (something I understand conceptually but have never done) it feels pretty amazing.
My one concern is that if I use it for JS I can immediately spot issues with it, but I am too ignorant of python to spot subtle problems.
I came to the sad conclusion that perhaps what I write and what I do is not as repetitive or as generic as what others do. Over the years, when I have had to do more repetition, I've handled it by writing something once, well, and then Ctrl-C / Ctrl-V.
The differentiator is, I guess, expectations: I fully expect my own code to strike a certain balance of readability / optimization / correctness. In this context all but the most trivial code suggestions, sloppy as they usually come, instantly fall out of place.
On the other hand, there is a group of people (e.g. those spamming GitHub pull requests...) who expect the output to be kind of visually similar to some code. When they realize that the result also sometimes works and gives sane-looking results, they are absolutely blown away.
Nonetheless it's better than Google at many things. I find it really good at giving paragraph-long introductions to things, instead of pages-long blogspam. It's also good at suggesting improvements when writing in a second language.
But I don't see what those AI startups see when they want to replace all sorts of skilled workers with this bullshit generator.
The trick is to use it for what it's good at.
For example, I had this vague, fuzzy memory of a talk I saw online about a decade ago. It was about a database modelling paradigm. All I remembered was that the country it was developed in was in northern Europe, the methodology was based on splitting out columns as individual objects, and that it had funny little icons such as diamonds on the edges connecting things together in a graph.
Good luck finding that on Google. I had tried and failed. It was bugging me, because I needed something like it for my work, but for the life of me I couldn't dig up the reference.
Chat GPT 4 found it in just three prompts: https://www.anchormodeling.com/
That's insane.
It only needs to save 10 minutes of my time a month for it to be worth the $20 - I would estimate it saves hours.
My go-to example is if conditions. When I start with the first case, it almost always figures out the rest of the cases correctly.
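A made-up illustration of the pattern: type the first branch of a chain like this, and the completion will usually suggest the remaining, structurally identical cases.

```python
def http_status_class(code):
    # after the first branch is written, the remaining symmetric cases
    # are exactly the kind of boilerplate the completion fills in
    if 100 <= code < 200:
        return "informational"
    elif 200 <= code < 300:
        return "success"
    elif 300 <= code < 400:
        return "redirect"
    elif 400 <= code < 500:
        return "client error"
    else:
        return "server error"
```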
But I can understand why for some workflows it may be a liability rather than a help. ChatGPT sometimes hallucinates code with reserved words that don't exist in the language (but exist in others), or uses old code from a deprecated API or version of your framework.
In other words, sometimes the cleanup can take as much time as just writing the code yourself.
Couple hours work done in a few minutes.
I agree that there is currently way too much hype. I mean GPT4 is certainly cool and probably going to be useful in some limited use cases, but it's not the new electricity and it's not going to take over the world.
But in the end, it's just a tool. If it isn't working for you, it isn't working for you. For me, I prefer tools that let me think in terms of bigger building blocks.
So I prefer vim bindings, because I used up teenage time learning them, and now I don't think in terms of characters but in terms of text blocks.
I prefer writing Java in IntelliJ and Rust in CLion because the built-in refactoring tools also free me from thinking in characters. E.g. I write functions inline and then extract them, rather than writing them as functions first and then calling them.
Likewise, at the shell I think in pipelines rather than pieces. Being an experienced shell user, I can usually write a parallel pipeline with conditionals and loops correctly the first time, verbally, with no computer.
But my Copilot flow improves on even this. I talk to the computer in components:
# get all private ip addresses in Alibaba region cn-zhangjiakou
ips=...prompted text...
# ssh to each and check last long line
...prompted text...
It's faster than me, and this is great for reverse-i-search with fzf.
Overall, quite happy. But just a tool.
I’d love to see which of the following might have any correlations with reported usefulness of chatgpt/copilot:
1. How articulate a person is.
2. How much of an expert they are in what they are trying to do with the tool.
3. How successful they are at giving instructions to a human to perform the same task.
4. How much experience they have managing/coaching junior devs/interns/newbies.
5. How much experience someone has decomposing a problem into smaller parts and identifying the simple parts and the complicated parts.
There are huge consequences to either answer to the question: "Is using an AI tool effectively a coachable skill?" I'm sure someone has already looked into this or is looking into it; if it turns out to be a coachable skill, and we can identify what the component skills are, there's a lot of money to be made in bringing a course to market.
Deleted Comment
Give it a Django model and ask it to create a data visualisation. It'll perform complex ORM queries with filters, annotations, and aggregations, then load the result into a pandas dataframe, then create more columns in the dataframe, do more aggregations, then spit out matplotlib code to draw the chart. The time spent typing the prompt is time I would have needed even in the pre-GPT era, since I would have had to think about what I want to achieve anyway; now I'm typing it into ChatGPT, which makes the thinking process more structured as well!
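A minimal sketch of the pandas half of that pipeline; the model, field names, and numbers are invented, and the ORM call is left as a comment since it needs a live Django project:

```python
import pandas as pd

# in a real project this list would come from the ORM, e.g.
#   rows = list(Order.objects.values("region", "amount"))
rows = [
    {"region": "north", "amount": 120.0},
    {"region": "south", "amount": 80.0},
    {"region": "north", "amount": 40.0},
]

df = pd.DataFrame(rows)
# per-region totals, plus a derived share-of-total column
summary = df.groupby("region", as_index=False)["amount"].sum()
summary["share"] = summary["amount"] / summary["amount"].sum()

# from here the chart is one more line:
#   summary.plot.bar(x="region", y="amount")
```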
I have built more tools in the last three weeks (for work and personal) than I have in the last three years combined. All thanks to OpenAI's brain constantly pushing me into the right direction.
"Hype" can also mean marketing.
Copilot wasn’t useful for me because I want a back and forth conversation to straighten out concepts. With Copilot you don’t get that and it just sprays out little messes of code.
I use GPT4 like a rubber duck in the sense of “rubber duck debugging” and it can be surprisingly supple and creative in its solutions… many bad ones, but again, push back on those and you get to good ones.
If you ask ChatGPT to generate a document, it generates one that reads well but has terrible content: super broad, or just plain contradictory. But given a not-so-well-written piece of writing, it can clean that up fairly well.
The genius of it is that you could give it the notes of a meeting and get it to rewrite them in a different form. Nothing else can handle language like that.
I only use the free 3.5+ ChatGPT and it's written some nice mapping code and some fun poems for me. Poem subjects ranged from rheology to texas hold'em.
I use chatgpt everyday and am getting better and better at asking questions to get the answers I want.
I used copilot when it was free, and it was best at boilerplate, and sometimes not bad at auto completing the next few lines, I didn't think it was worth paying for at the time. I use Codeium now, that works in a similar way, probably not quite as good as copilot in that it doesn't understand as many languages, but still does a reasonable job with boilerplate type code.
GPT-4 Passed the Bar Exam, so I think there's something to it https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4389233
Good old Peter Drucker formalized the personas in his advice to new employees to figure out how their bosses consumed information:
1 - The lawyer. Needs walls of text.
2 - The architect. Needs diagrams, visuals.
3 - The accountant. Needs numbers, spreadsheets.
4 - The actor. Needs a conversation, verbal.
It's a generalization, of course, but I have applied this pattern successfully. The core bit is that using the wrong method with any of the 4 personas flat out does not work.
Send a long article to an accountant and they're stumped. But send them a bunch of numbers and they hear the numbers sing. Etc.
Tangent on the Amazon way of write-ups: Bezos forced his persona on the whole company. I would hate it. I am an architect; a few diagrams is all I need.
Lawyer and law professor here: No, no, NO!
Wall-of-words text is a sign of a L.O.A.D. — a Lazy Or Arrogant Drafter.
I teach contract drafting to 3Ls and stress The Two Great Rules of Readability:
1. SSSSP: Short, Simple, Single-Subject Paragraphs. https://toedtclassnotes.site44.com/Notes-on-Contract-Draftin...
2. BLUF: Bottom Line Up Front. https://toedtclassnotes.site44.com/Notes-on-Contract-Draftin...
One of my ex-bosses preferred long, well-written emails to just about any other form of communication. He was a really bright guy with a degree in English Literature who somehow stumbled his way into the software industry. When he wanted to start working with a new technology, he'd ingest a small pile of the driest textbooks on the subject over the course of a week or so, understand and retain all of it, then start running laps around the folks who had been working with the tech for years. He's still the only engineer I've ever met who preferred learning-by-reading over learning-by-doing.
Definitely agree with your approach to drafting, but I think you could be overreacting a bit here. Even the shortest contracts might seem like a "wall of text" to the lay person, and short paragraphs strung together are still a wall of text.
At any rate, the preferred format in most legal contexts is still a "wall" of text (whether a big or small wall, as opposed to diagrams, spreadsheets, or conversations), which I think was the original point.
There's too much info on the internet, and even in internal documentation: if I have to read half the thing to understand if it's relevant, I'm gonna spend most of my time on irrelevant stuff.
This is a problem with asking engineers to document without having a writing culture. I've seen it happen: once it's written down, that's it: it's documented, great work! But we should treat documentation more like code, with clear divisions of concerns, scopes, and even refactors when needed!
I totally get that walls of text are needed up to a point, and we don't have good options for producing succinct contracts. It just feels like an everybody-loses situation.
Also a quotation link would be appreciated.
LAWYERS - needs walls of text
Jeff Bezos is known for his long-term thinking and his annual letters to shareholders are famous for their extensive detail and length.
Mark Zuckerberg is known for his long-term vision and often communicates his ideas in lengthy blog posts and public statements.
ACCOUNTANTS - need numbers, spreadsheets
Sundar Pichai has an engineering background and tends to rely on data and analysis in decision-making.
Satya Nadella has an engineering background and has emphasized the importance of data-driven decision-making at Microsoft.
Warren Buffett is famous for his value investing approach, which emphasizes analyzing financial statements and data to identify undervalued companies.
ARCHITECTS - need diagrams, visuals
Tim Cook is known for his attention to design and user experience, and Apple's keynotes and product launches often feature detailed visuals and product demonstrations.
Elon Musk frequently communicates his ideas through visually compelling presentations, and is known for his ambitious plans for space exploration and sustainable energy.
ACTORS - need a conversation, verbal communication
Mark Cuban is known for his outgoing and talkative personality. He often engages in conversations with fans and customers on social media and in public appearances, and is very comfortable in front of the camera.
Richard Branson is known for his flamboyant and charismatic personality. He often appears in public events and media interviews, and is known for his storytelling and ability to captivate audiences with his words.
PRFAQs, COEs, and WBRs.
"For this invention [writing] will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise." [1]
[1] Phaedrus 275a-b, http://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%...
It turns out that writing was a greater benefit, one that to some extent makes up for losing the exceptionally well-trained memory of scholars from an oral tradition. Not that that last part isn't a loss, though, and his warning against false wisdom is more relevant than ever today, when it feels like half the people you talk to online are just googling up Wikipedia articles on the fly to base their world view on.
--edit-- sometimes me not grammar good
Is Twitter > books? Is HN > podcasts? Every medium has its tendencies... positive, negative and complicated. We don't actually choose between them. It's useful to keep an eye on losses, not just gains.
Oral traditions, for example, naturally evolve and diversify. A fairy tale can literally spread over the whole world, and develop thousands of versions. Each one adapted and localised. Written stories also spread, but remain identical... even more-so after printing.
This is, largely, the story of the monotheist religions. Whether or not they had sacred scrolls in ancient times, the religions were mostly based in folk traditions, temple authority, prophets, judges... people. As they moved from oral tradition to authoritative canon, the religions became extremely stiff and unyielding.
Just because you memorized 10000 random articles on Wikipedia, doesn't mean you now have the wisdom to apply that in a particular circumstance.
Very much like early AI models.
Well you're lucky if they do; often they just make stuff up on the fly depending on what they saw on some dubious media or social network.
The whole point of his writing it all down, the very literal foundations of ~2500 years of philosophy and the bedrock of Western (and other) civilizations, was so that he could have an effect on social issues over time. That was the whole point! He is super clear about this in his writings and throughout his life and travels.
> What he didn't consider was the effect the written word had on a social scale in terms of being able to exchange ideas over physical and temporal distances.
Why would his words be less potent when applied to billions of people, as opposed to only a few? Are those billions not just as susceptible to fallacy as the elite few?
However, writing for thinking and writing for storage are two different things, and they activate different parts of the brain.
I'm an avid pen-and-paper user, and using a real pen on paper allows me to think much more deeply. It regulates my thinking speed, so things don't escape me and my focus doesn't get derailed. It also changes how the brain works so it can think better.
Also, blogging and writing documentation at work made me a much better thinker, because converting abstract concepts into meaningful sentences, with context and an easy-to-understand structure, also affects how clearly you communicate in other parts of your life.
Offloading this really robs you of the joys and power of meaningful communication on many mediums.
No, the brain does not "replace written information with a pointer". I cannot tell you with complete confidence whether writing stuff down improves recall or diminishes it, but I can tell you that you can construct computer analogies in support of either case.
I could posit that writing actually improves your memory, because the minute movements of your arm require more of your brain to compute, therefore creating more neural connections and possibilities of recall.
This is likely just as wrong, but demonstrates that thinking up some vague explanation is insufficient to make a point about a system as complex as the human brain.
And, unfortunately, as you get older and memory starts to fill up, the pointers get overwritten.
I'd just like to point out that Plato very rarely wrote in his own voice so it's very hard to say if it's his views or not that are being expressed.
In this case however, this is almost certainly an expression of Socrates' views, not Plato's. Not only because it's in the voice of Socrates but also by what's transparent in their actions: Socrates didn't leave anything in writing and Plato left us arguably the most important written corpus of classic Greek philosophy.
Maybe he felt ambivalent about it, but he certainly thought there was a value in the writing.
We measured intelligence by the things you knew. We still have shows with questions based on this concept. This concept no longer makes sense in the modern world. But it used to.
Writing is a crutch, the Internet even more so, and AI even more so. Eventually you can build the entire thinker out of those crutches and you need no humans anymore.
"What is the cause of people forgetting texts they had learned before?" I can't remember the precise answer but that was one such line in the texts.
Today if you say that you sound kinda crazy.
But like Socrates, there are still a few wandering, mendicant, homeless-like orders out there (the Theravada forest tradition being an example).
The overstimulation could be part of it as well, couldn't it? There aren't a lot of other information-rich inputs in a monastery.
Holding a summer school in a forest or on an isolated resort seems like exploiting a similar idea, and it does work quite well (for a month or so, before isolation sets in).
I tried asking ChatGPT what it thought and it came back with:
>... it's also important to note that technology is a tool, and its impact largely depends on how it's used. For instance, AI writing tools ... can be used to augment human creativity and productivity, rather than replace it. These tools can help writers brainstorm ideas, overcome writer's block, write more quickly, and even learn to write better by providing examples and suggestions.
>Moreover, the use of AI tools might also stimulate new forms of thinking. Just as calculators didn't eliminate the need for mathematical understanding but rather allowed for more complex problems to be tackled, AI writing tools could help individuals refine and expand their thinking, enabling them to tackle more complex writing tasks or express ideas more effectively.
Which is kind of interesting - I didn't think of writer's block on my own, for example. Also, lazy thinkers at the moment probably tend to just go along with what they see in the media. AI-aided thinking could improve on that.
* learning to ask good questions
* learning to fact check
* learning to edit the writing of others
Then they have truly learned the Socratic method. Which is a much more valuable tool than writing from scratch solipsistically: being able to actively engage with the writing of others.
If you replace intense IRL dialogue and discussion, with writing, this might be the case.
On the other hand, if you replace just letting your thoughts fade into the void with writing them down as coherent, thought-out ideas, I would argue that the writing has actually both triggered retention from memory (thus exercising the memory) and helped you critically assess and clarify your own thinking.
Thus, I think writing is an extremely helpful tool for processing your thoughts for most knowledge workers who are not already involved in constant dialogue with others (which is most of us I guess).
You know, Phaedrus, that is the strange thing about writing, which makes it truly correspond to painting. The painter’s products stand before us as though they were alive. But if you question them, they maintain a most majestic silence. It is the same with written words. They seem to talk to you as though they were intelligent, but if you ask them anything about what they say from a desire to be instructed they go on telling just the same thing forever.
Because that is no longer the case: an AI trained on a corpus of many books can in fact say something new.
That and Meno's paradox!
Consider also Derrida's thoughts on these matters: https://en.wikipedia.org/wiki/Pharmakon_(philosophy)
[1] https://en.wikipedia.org/wiki/Diff%C3%A9rance
[2] https://en.wikipedia.org/wiki/Aporia
[3] https://en.wikipedia.org/wiki/Xanthippe
[4] https://en.wikipedia.org/wiki/Daimon#Socrates
Especially for this quote, I like attributing it to Socrates as Plato did. We obviously don't know the details and nuances. However, I feel it captures the (perhaps inexact) tension between Socrates' old, mostly oral tradition of philosophy and Plato's newer tradition of written philosophy.
It's ancient Marshall McLuhan. Oral philosophy and written philosophy create different philosophies, not just a different medium for the same philosophy.
It embeds what is remembered in a current context.
Did Plato take this into account?
That is correct. I mean, some of that is exactly why I write at least 5000 words a day of journal notes while working. Because I want to offload everything from memory to a written form that is indexed.
> and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant...
This part is where we veer into predicting how it will alter people, and it is both right and wrong. I think the education system has shown us it's possible to have a great memory, to remember many facts, and still be ignorant. At the same time, there's so much material that can be consumed that people have surface knowledge of many things, and once you dig into a conversation with them on a topic, the ignorance pops up and it turns out they don't know much about it. I know this. This has been me on many occasions and is still me on some occasions.
>... and hard to get along with, since they are not wise, but only appear wise.
Again, there's some truth to this. I think about the times where I've jumped into a conversation about politics with some half-baked knowledge that I read somewhere but failed to understand or recall correctly, and I think I must have looked a fool at those times.
But then again, I've course corrected here and there and I've spent time writing and sharing thoughts with others where I've used previous writings to pull up facts and my own synthesis, and I feel like I've been a better participant at times like that. In that way, writing has helped me steer away from ignorance.
I can't comment further though on the quote because I've never read the rest of the context so I may be missing out on some stuff.
I think Paul is on to something here though. I can't count how many thoughts I've had that I thought were clever but when I started to write the thoughts down and organize it on my own, I discovered for myself that either I didn't have enough evidence to prove myself entirely or that I was just flat out wrong. Writing is a form of thinking in my opinion. And if we don't write, we do lose that side of thinking too. I do appreciate though that Paul doesn't go further into predicting the consequences as Plato's warning does here. But I'll be willing to bet that Paul is right that there will be some negative consequences for sure.
Classic oral literary traditions are based on rhyme, meter, repetition, simile, metaphor, and references to cultural touchstones and themes, for the most part, all of which strongly assist with memorisation. These are not only necessary for a literature which isn't written down and instead is passed on through generations orally, but also quite likely represent a survivorship bias, in that works which didn't exhibit these patterns didn't survive. And of course the versions we know are the end of a long pipeline of transmission (or the world's largest and oldest game of telephone), captured in writing and then passed on to us as (among) the first written traditions.
I've committed a few poems and homilies to memory, one example for nearly thirty years now. I do refresh that recollection from time to time (and realised I was dropping a stanza consistently). It's an interesting counterexample.
Then again, I turned up a piece earlier today I'd written myself about eight years ago and had all but no recollection of.
Not sure if LLMs will enhance our current "self" by themselves, but once we get a chip or two inside our brains, once we start controlling the cells to regenerate or grow specific limbs [3], we will surely consider our present "selves" as tiny as we consider the "self" of one of our cousins, the chimpanzee, in a zoo today. The journey and expansion of the "self" is merely starting.
[1] "the trace left in the environment by an individual action stimulates the performance of a succeeding action by the same or different agent", https://en.wikipedia.org/wiki/Stigmergy
[2] https://en.wiktionary.org/wiki/%CE%B3%CF%81%CE%AC%CF%86%CF%8...
[3] "HCN2 Rescues brain defects by enforcing endogenous voltage pre-patterns", https://www.nature.com/articles/s41467-018-03334-5
Of course I have no way to check whether it's done correctly, but from what I hear a lot of people have done this.
I'm not going to deny it's a change, but it's too early to decide if it's a bad change. Every generation has its own big shift in the past 100-200 years, and every generation complains about the change.
I homeschooled. My sons had trouble learning to write and one really hated writing.
I spent several years having him verbally explain material to me, having him learn grammar and vocabulary and other essentials from entertaining videos and graphic books and such.
Thinking and writing don't have to be so closely intertwined as they are in Paul Graham's mind. He's a writer and writing helps him think.
Some people think while taking long walks, working out at the gym, drawing their ideas, etc.
The kind of thinking Paul Graham does may become less common. I have no reason to believe that's some kind of holy grail and thinking via other means is inherently less valuable.
I say this as a blogger who may be even more guilty than Paul Graham of using writing to help me think.
In my experience writing is a very good way to do that, but if verbally works better for some then that's fine too.
In that sense I do agree with Graham that "I have some thoughts about X and I'll let ChatGPT take care of the details" is perhaps not entirely ideal.
All of this also applies to code by the way, in more or less the same way.
I've lost count of how many times I've started writing on a topic, then scrapped it altogether in favor of a better idea on a tangentially related thing.
In the end, those graded by weight had nicer pieces, as it involved more practice at producing something.
Imagine the value a professional teacher would have provided for them...
But he didn't learn to write by writing because he had dysgraphia and emotional baggage about writing from public school when I pulled him out.
His blog writing gets more traffic than mine. He's a writer by trade, though he doesn't yet make much money at it.
I fucking love writing. Most people I know also fucking love writing, and I do mean love. Some do it for pleasure, some do it as a career, others do it as a means to an end in their career, and they haven't found value in LLMs.
There's writing and writing. If all you're doing is rewording what someone else has already done like a high school essay or a competing product description for what's essentially the same product? An LLM will probably suffice. Original and novel? Human beings all the way.
LLMs highlight that a lot of "knowledge work" is intrinsically bullshit, i.e., a high school essay doesn't actually assess somebody's understanding of a topic, just their ability to hit specific keywords and points, hence even essay mills can churn out an "original" essay for a kid if they're too xyz to do it themselves.
Whilst I don't care for LLMs, I do support a huge shakeup of the existing educational assessment paradigm as it is self-evidently not fit for purpose, IMO.
One can tell from the opening of your comment. Composition, cadence, vocabulary, repetition. It ties together.
I prompted an LLM to produce an enthusiastic expression around writing, but it came up with drivel like
> Writing fills my heart with pure delight, igniting my soul and inspiring my mind with boundless creativity.
I found myself tempted to instruct it on how to write well but that sort of defeats the point.
And maybe I suck at prompting but the appropriate combination of profanity, brevity, yet seriousness was hard for me to get it to do.
"Indeed, I find an intoxicating exuberance in writing, a passion that's as profound as it is poignant. It's akin to dancing with words, a ballet of ideas and sentiments, pirouetting across the stage of one's imagination. There's a rhythm, a cadence to it, a harmonious symphony of phrases and thoughts that, when assembled with care and cunning, can stir the soul and ignite the mind. I don't just love writing, I am fervently, ardently, unabashedly in love with it. The artistry of it, the texture and tone of each word, each sentence; the way they flow, ebb and surge like the ocean's tide. There's a divine repetition in the crafting, a reiteration of thoughts, feelings, experiences, each time subtly different, each time uniquely enlightening. Writing is my symphony, my ballet, my ocean, my universe. It is in the weaving of words, the crafting of tales, the elucidation of the esoteric, that I truly come alive. It's not merely that I love writing; writing, my dear friend, is the very essence of my love."
Prompt: Using profanity and in the style of Allen Ginsberg, write an introductory paragraph in narrative style about why you love writing.
ChatGPT: Holy shit, I fucking love this mad dance of letters, this wild waltz of words that paint the goddamn cosmos on a blank page! The rhapsodic rhythm, the sonorous symphony of syntax -- it's like jazz on paper, like Kerouac and Ginsberg on a bender in the bowels of the Beat Generation, spewing poetic profanity into the yawning abyss of the universe. It's the goddamn primal scream of the soul, the raw, unfiltered fuck-you to silence and nothingness. The alphabetic alchemy, where ink and imagination bleed together to birth whole new worlds, where lines live and breathe and fuck and fight and love. It's a heady high, an intoxicating inebriation, a goddamn narcotic nectar. It's the literary lust that throbs in the very marrow of my being, the fevered flame that flickers in the dark and dances to the tune of the typewriter's tap-tap-tapping. And oh, what a tune it is -- as chaotic as Coltrane, as radical as Rimbaud, as transcendent as Thoreau. It's why I fucking love writing, this goddamn glorious game of god-creation and galaxy-making. It's the ultimate trip, man, the ultimate fucking trip.
> I fucking love writing. Most people I know also fucking love writing, and I do mean love.
Well, well, well. If it isn't the unabashed, unfiltered bard of the 21st century. Look at the sheer depth of passion here, folks! We've got 'fucking love' not once, but twice, because why not? Repetition is the spice of life, after all. And such eloquent use of the F-word, too. Truly, Shakespeare would be jealous.
Also, let's take a moment to appreciate the vague and incredibly relatable 'most people I know.' Because who doesn't know a handful of people who also 'fucking love' writing? Clearly, this is the universal experience we've all been waiting for.
And then, the pièce de résistance, 'and I do mean love.' As if the preceding profanity didn't quite hammer home the sentiment. Thanks for the clarification. We were worried you only 'kinda liked' writing. A masterpiece of modern literature, truly.
Great. Now show us what initial sentence you'd have used, same style. Just one brief impactful sentence with the same level of familiarity.
"Writing is my jam, and I swear, everybody I know is just as stoked about it – and I mean stoked."
ish
https://www.reddit.com/r/slatestarcodex/comments/9rvroo/most...
(Also, a lot of people's terse and incoherent interactions and tendency to simply repost on Facebook and Twitter suggest that writing is a regrettably necessary means rather than an end.)
But if anyone just throws me a random topic and ask me to write 2000 words on it, I'd tell them to go ___ yourself. (Yes, I hated writing essays in school...)
Use tools, of course, but don't let them do the hard part for you. It'd be like buying a robot to go to the gym and move weights up and down — sorta misses the point.
(Before I get the usual sorts of replies: I'm not talking about writing boilerplate or marketing copy or canned responses to e-mails or just generating reams of prose so you can slap ads on it and make money or whatever. I'm talking about writing with the purpose of getting your personal thoughts and ideas out into the world.)
I don't think marketing copy deserves to be grouped with 'boilerplate' and 'canned responses'.
It takes a lot of brain power to create marketing copy that isn't boilerplate / canned / generic.
Good marketing copy persuades people to take a specific action, and figuring out what this takes just as much critical thinking, iteration, and getting your reps in as any other compelling content.
Which is why most marketing copy feels bland / generic / boilerplate / canned.
It's not easy.
As a lazy writer who never skips legs day, I appreciate this comment. It inspires me to write more than snarky snippets online.
It only takes one or two manic episodes to realize that they're not always super correlated.
Our thoughts are made up of visuals (shapes, lights), feelings, and sounds, not words. However, words are convenient shortcuts for communication, so we use them a lot. This is part of why we keep trying to understand animals. It's also why multilingual people will have these jarring stops in speaking or writing, struggling to connect those visuals/feelings/sounds in their mind to the expression (speaking/writing).
It's literally the reason that "love" is so elusive to us. Despite numerous people trying their best over centuries to express it in writing, music, art, etc. it's still this profound feeling we feel has endless potential. We know it's just brain signals, and yet it continues to be deeply embedded in our thoughts.
> If men learn this, it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks. What you have discovered is a recipe not for memory, but for reminder. And it is no true wisdom that you offer your disciples, but only its semblance, for by telling them of many things without teaching them you will make them seem to know much, while for the most part they know nothing, and as men filled, not with wisdom, but with the conceit of wisdom, they will be a burden to their fellows.
- Plato
Although this is different, because it is still written, just not by oneself, which probably makes the effect of relying on it and forgetting even worse.
I think it's fairly well established that when you think about a topic well enough to generate ideas and put them on the page that you also cement those ideas in your mind. If you want to remember something you read (or see or do), write about it. In your own words.
----
Righting. verb.
1. writing to change the way the world is oriented
> I resolve to continue righting until the wrongs go away.
2. writing to change the way oneself is oriented
> I resolve to continue righting until I'm not wrong anymore.
----
Ambiguity in usage being more or less the point.
"Use tools, of course, but don't let them do the hard part for you."
I didn't say I was opposed to using the AI tools. I'm opposed to using them to think for you — to do the "hard part." If they help with a turn-of-phrase or vocabulary or structure or whatever then they're just a more advanced version of tools people already use for those things.
"Tools are the subtlest of traps. We become reliant upon them and in their absence we are vulnerable, weak, defenseless..."
I'd much rather type on a computer than write longhand in notebooks.
The Middle Ages gave us crucial inventions that built European civilization (mills, windmills, agricultural inventions like the three-field system, shipbuilding technology, and so on). We also got well-developed philosophy, especially scientific methods of reasoning and logic, which were the cornerstone of all later scientific development.
How was that possible without writing?
Life in the Middle Ages was simply not writing- or visually-oriented; it was based on speaking and listening, and in no way did people lose the ability to think because of that. For instance, a PhD thesis was presented as a lecture followed by a discussion.
When people wanted to "sign" something, they exchanged or gave some symbolic item (like a ring), and that was all. That's why various saints' relics were so popular then; they symbolized people's affection, etc. Or people simply signed their names, as most could write their names even though they could not write as we understand it today. Despite many beliefs, people in the Middle Ages were quite educated, including the "Third Estate", that is, the peasantry. What's more, they typically went to the same schools as rich people's kids; the only difference was that the rich had to pay for the school.
I really recommend "Lumière du Moyen Age" ("Those Terrible Middle Ages!: Debunking the Myths") by Régine Pernoud, it nicely describes a really rich, open and dynamic culture that didn't really need writing.
The problem is geography. France, Britain, Germany and such were never part of the ancient world. Rome had colonies there. But, these were (relatively) short lived outposts of civilisation. Civilisation was never a dominant culture in non-Mediterranean "Europe."
In the East, civilisation existed for thousands of years before Rome. They always had roads, kingdoms, bureaucracy, restaurants, banks and such. Even in dark ages, after collapse... these were collapsed civilisations. They didn't revert to tribal, oral societies.
The European Renaissance (excluding Italy) was never a renaissance. It was the actual birth of civilisation in Europe, and the first time writing became a major thing.
My other comment almost perfectly fits here too, so I won't repeat it in full[1], but only by doing the work yourself will you spot flaws and errors in your arguments; ChatGPT can't do that for you.
[1]: https://news.ycombinator.com/item?id=35913419
The other thing that I have noticed is that a great deal of the art from the time is wonky and out of proportion. I was recently at the Musée Cluny in Paris and it was really striking how much of the art in there is pretty badly made, as if by very inept creators. La Dame à la licorne stands out in part because it's not that badly made (though mainly because it's an epic multi-piece tapestry that you'll never see the like of anywhere else). I also remember a statue of a saint whose creator was named (I don't remember the name but it was something like "Le maitre de St. Joseph" or perhaps he had the name of a city) and that was obviously head and shoulders above everything else around it in terms of craftsmanship, which had an almost modern quality.
Then I went to the Louvre and I noticed again how the art before and after the middle ages very clearly lacks this shoddy and haphazard quality of European middle ages art. The Greeks in particular practically worshipped proportion, and the Italian masters would not be caught dead painting an unbalanced figure, by the looks of it. But also Egyptians and Assyro-Babylonians, and all the earlier cultures seemed to take well-crafted art pretty seriously (the bad artists were probably flayed alive early on ... brrr).
So something definitely happened in the Middle Ages that did art and culture no good. Whether it had any impact on everyday life (the ability of people to produce food, build houses, etc.), I don't know. There's certainly plenty of architecture that's survived from that time, so clearly not everything deteriorated.
Btw, I think it was the same in the Eastern middle ages too. You can see slap-dash ecclesiastical icons in Byzantine churches of the time. And when you visit Ravenna, and see the mosaics in St Vitale (which are from the very early middle ages), it is clear that something was lost in the ensuing years.
Perhaps we're about to sink into a new "dark age" (regardless of how "dark" was the original "dark ages"). Maybe we'll get lucky this time too, and the light will not be forever extinguished.
However, people who don't write weren't going to be writing themselves anyway. It's similar with math. Everyone talks about how important it is for children to learn math in school. And everyone recoils with primal terror if I so much as mention the most minor of mathematical constructs.
Be joyous that you are good at something. But I don't think you should look down upon those who find a way to survive by not being good at your thing. Even if things would objectively be better if everyone was a mathematician, scribe, and poet, this is probably not a realistic goal.
In my own life, I was terrible at arithmetic, and I grew up at a time when memorizing the multiplication tables was a key life skill. I couldn't, to save my life. I strongly believed I was bad at math and struggled through on sheer willpower and mediocre grades, until I couldn't make it through calc 1 and dropped out. When I went back to college a decade later, they allowed calculators, graphing ones at that. I got top marks in every math class from then on and graduated summa cum laude from a top engineering and computer science school.
How many people have been marginalized because they couldn’t express their thoughts as clearly and compellingly as a better writer? Maybe a leveling of the fields away from mechanical abilities and towards completeness of thought isn’t a bad thing? Will people lean on it and lose what has been a core skill? Some will no doubt. But will it hurt them? Only as much as it hurts to use a calculator to do your menial math.
I think you can say this about people who haven’t learned to write well. ChatGPT seems like a great thing because they can get it to excrete a content-like substance that reads well, and call it writing.
Prompting ChatGPT to produce a text lets you move faster when you have an idea and want to cut to the finish.
The problem is that part of writing is starting with an idea, thinking about how to express it, and then realizing that your initial idea might be flawed or need work.
Sure - ChatGPT seems fine for spitting out web copy or something where the role of the writer isn’t to examine an idea, but just to go from start to done as quickly as possible.
But I think he’s talking about the types of writing that involve wrestling with an idea and, for example, trying to persuade others. ChatGPT will happily spit out copy in that form. But it won’t trip over a paragraph and say “hey, shit, now that I write this out, I think it’s wrong or not well-expressed.”
People bemoan co-workers who coast by on people skills and bullshit over people who have expertise and do the work. ChatGPT is likely to be similar - helping people who can’t write (express or work with ideas effectively) sound authoritative and generate more content that they don’t really even understand.