I’m tired of that line. I remember first seeing it as “the best minds of our generation being employed to sell you ads”. Making a computer go brrr doesn’t qualify anyone as a “best mind”.
I’d hope a “best mind” would be, above all, empathetic. Concerned about the well being of their fellow humans. Philosophical about the state of the world. Patient. Curious. Wise and not just smart.
That we keep putting greedy assholes on a “best minds” pedestal due to their ability to exploit others for personal profit is part of the problem.
I feel like this is offensive on more than one level. The waste of our best minds... sure. Fine. But - "to sell you ads."
Marketing has the dubious distinction of being one area of human endeavor where technological advances serve mostly to make your life worse, your mind less focused, and your wallet emptier.
The spherical-cow "ideal" 100% efficient perfect marketing campaign/tactic literally hypnotizes you into dropping all your money on an arbitrary good or service. It is isomorphic to somebody mugging you. What does it look like if we achieve 10% efficiency? 1%? How is this infringement on your agency and financial well-being a positive social good? What if we could achieve profitable returns even by flooding the zone at 0.001%? How many ads do you want to be subjected to per thing that somebody in your neighborhood buys?
> The spherical-cow "ideal" 100% efficient perfect marketing campaign/tactic literally hypnotizes you into dropping all your money on an arbitrary good or service. It is isomorphic to somebody mugging you.
I like the quote and despise advertising of any kind for mostly the same reason, but you could apply this logic to any kind of business and it starts to fall apart.
> The spherical-cow "ideal" 100% efficient perfect pharmaceutical is physically addictive. It is isomorphic to narcotics trafficking.
> The spherical-cow "ideal" 100% efficient perfect medical system keeps you sick.
There are arguments to be made that all of these things are true, but what we’re talking about is not essential to the activities themselves but the profit motive that underlies them. Profit motive in and of itself is not sufficient to cause this kind of behavior; it requires a disregard for the consequences that your actions will have upon others. A lot of marketers fall into this category, but I’m not convinced that marketing in and of itself can be reduced to this spherical cow in a vacuum.
I disagree. The spherical cow of marketing is a system that connects consumers with EXACTLY what they are looking for. What you are describing is the capitalist spherical cow.
Ginsberg stole it from Yeats — “the best lack all conviction…” / “the best minds of my generation…” — many similar verses, e.g., “what rough beast…” / “what sphinx of cement…”
Having been in IT for close to 15 years now, I can say a lot of good minds work in IT, and a lot of good minds don't.
But I've encountered a lot of stupid (for lack of a better word) people in IT who were convinced they were good at _everything_ just because they grokked algorithms and data structures. I'm not sure it's a phenomenon unique to IT, but what DOGE is doing is exactly what I mean.
Can confirm, this isn't just an IT thing. Physicians are a prime example—people tend to put doctors on a pedestal, and some doctors start believing they know everything about everything, even when it's clearly outside their wheelhouse. Being smart in one area doesn’t automatically make you an expert in another, but it’s easy for everyone involved to forget that.
I mean, there are probably only a few places on the internet with more people who believe their insights into other disciplines are profound simply because they understand how computers work than the HN comment section. So we should all be able to relate.
> I’m tired of that line. I remember first seeing it on “the best minds of our generation being employed to sell you ads”. Making a computer go brrr doesn’t qualify anyone for a “best mind”.
It's the other way around. It's not just, or primarily, about run-of-the-mill software devs. It's also about the would-be top mathematicians and physicists and psychologists and others - best minds in various domains - who in a better world would be busy solving real problems, but due to a quirk of the economy end up ruining the lives of other people, at scale, because adtech pays well while almost all useful work pays a pittance.
I remember this quote not as judging or categorizing people by smartness, but as lamenting a world which mismanages humanity's potential so badly, by literally directing our best problem solvers to work on creating problems for everyone.
I think the argument is more about whether someone involved in hostile behavior deserves to be called a "best mind" compared to someone worse at math but better at empathy.
Fwiw I kinda agree with both of you. Don't really know how to square it
From context, we can infer that "best minds" means "smart people who make new things." I feel it's fair to lament that, while these people could put their intelligence towards the betterment of everybody, they are often instead working on shit like ad tech.
The best tech stack I have ever worked on was in an adtech company. The code was beautiful and the utility functions to interface with various AWS services were really really neat. I built a near real time estimator using Theta Sketches.
While job searching, I have tried to use techs like Sketches in my filters but it mostly draws a blank. Would love to work on genuinely interesting stuff like that again.
It honestly did leave a bad taste in my mouth whenever I thought what the end goal of it all was.
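For anyone curious about the Theta Sketches mentioned above: the core distinct-count idea can be shown in a few lines. This is a toy pure-Python k-minimum-values (KMV) sketch, a simplification of what libraries like Apache DataSketches implement; the function name and parameters here are illustrative, not from any library, and a real sketch would keep only the k smallest hashes while streaming rather than the full set.

```python
import hashlib

def kmv_estimate(items, k=256):
    """Toy k-minimum-values (KMV) distinct-count sketch, the idea behind
    Theta sketches: hash each item to a pseudo-uniform value in [0, 1),
    keep the k smallest, and estimate the distinct count from how tightly
    packed those minima are: roughly (k - 1) / (k-th smallest value)."""
    hashes = set()
    for item in items:
        digest = hashlib.md5(str(item).encode()).hexdigest()
        hashes.add(int(digest, 16) / 2**128)  # normalize 128-bit hash to [0, 1)
    if len(hashes) <= k:
        return len(hashes)  # exact count when there are few distinct values
    kth_min = sorted(hashes)[k - 1]
    return round((k - 1) / kth_min)

# ~10,000 distinct users in a stream of 50,000 events
events = (f"user-{i % 10000}" for i in range(50000))
print(kmv_estimate(events))  # close to 10000 (typical error ~1/sqrt(k))
```

The appeal for near-real-time estimation is that the sketch is tiny (k values) regardless of stream size, and two sketches can be merged by taking the union of their minima, which is what makes them work across distributed ad-event pipelines.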
I think it's a reference to Allen Ginsberg's Howl, which chronicles Ginsberg watching brilliant people of his generation die in war or become drug casualties, their potential squandered either by evil men or navel-gazing.
I think "best" in this case stands for highest performance and not necessarily moral values - with this definition, unfortunately, making a computer go brrr does qualify for a best mind, if the brrr is especially impressive.
This reminds me of Bret Devereaux's series on Alexander III of Macedon and the meaning of "greatness" [0]:
> Only once we’ve stripped away the mythology and found the man can we then ask that key question: was Alexander truly great and if so, what does that say not about Alexander, but about our own conceptions of greatness?
"Great" and "best" are both just words, words whose meaning is defined by those of us who use them.
It is curious to me that so many revered as “great” by culture often end up treating those close to them in abhorrent ways. Many worship Moloch not overtly but in the lack of accountability they require of high status individuals. It seems that once someone gets past a certain thermocline of status then accusations of character become less likely to stick. I’ve come to believe this is seen as a feature and not a bug by many people. Not everyone, but many.
Ultimately we get what we incentivize and reward as a society. Recent events are a painful reminder of that.
I treat this expression as Ayenbite of Inwit. When I use it (because I'm super pretentious) I don't mean the book itself or the saner English version of the title "Remorse of Conscience". I mean it as a parody of the latter. Like, when someone comes up with a contrived blame for wrongdoing that either didn't happen or was so minor that it doesn't warrant the discussion.
I don't think author means "best minds" literally. I think they mean it ironically, more like "those who are most eager to perform the task assigned to them".
But maybe I'm saying the obvious. Irony is admittedly hard to translate into writing.
I've always considered references to "the best minds" to mean those glorified because they make others large sums of money. "The best minds" are never the ones profiting from their ideas.
"would be, above all, empathetic. Concerned about the well being of their fellow humans."
I'm tired of this line. We are animals first and foremost. Complicated animals, but animals nonetheless. We will never consistently be this. The best of us will try but most of us will animalistically react to the incentives in front of us in selfish ways.
We are better off thinking about systems and the incentives they create than hopelessly waiting for us to become not-animals. Aligning good goals with personal profit is the name of the game.
I don't think that's how it's meant to be read, if you consider it in the context of the original quote, it's about good and empathetic minds going to waste because of the demands of society. We put so much social and economic capital into tech that it's better to push five lines of code for Facebook than be a doctor, which are the "good minds going to waste".
You reminded me of one of my favorite pieces of media in gaming (HZD). It feels especially relevant in the current times, with how war and AI are progressing in tandem with geopolitical unrest.
It's an old recording that the main character listens to in the end cutscene while visiting Elisabet's grave, and (spoilers) the main character was created as a clone of Elisabet. It's very hard-hitting after the whole experience.
> GAIA: Query: What did she say?
Elisabet Sobeck: She said I had to care. She said, "Elisabet, being smart will count for nothing if you don't make the world better. You have to use your smarts to count for something, to serve life, not death."
GAIA: You often tell stories of your mother. But you are childless.
Elisabet Sobeck: I never had time. I guess it was for the best.
GAIA: If you had had a child, Elisabet, what would you have wished for him or her?
Elisabet Sobeck: I guess... I would have wanted her to be... curious. And willful - unstoppable, even... but with enough compassion to... heal the world... just a little bit.
> I’d hope a “best mind” would be, above all, empathetic. Concerned about the well being of their fellow humans. Philosophical about the state of the world. Patient. Curious. Wise and not just smart.
A person may be all of those things, but they still need to pay bills and eat. That requires a job, and jobs depend on the bourgeoisie, which is none of those things.
The best song that starts off with that quote: https://youtu.be/UryTypo2qeU?si=CdTPe0ufnktLB_6i
“I Should Be Allowed to Think” by They Might Be Giants
Change a couple words here and there and it’s talking about social media
I don't think best minds ever implied empathy or even should.
Best, to me, means people who are at the top of their specialty, whether it be mathematics, astrophysics, rocketry, economics, business, politics, pedagogy, archeology, etc. People with unwavering dedication to the pursuit of knowledge that advances whatever their specialty is. It could be the marketing most of us loathe, but it can also be any of the above and more, and few of those pursuits involve empathy as more than a tangent.
The point of that quote is not at all that those minds are so smart, but rather that their industry is sad and detrimental.
That is: to fan the fire that is burning the world so that a select few can get more comfortable.
I'm not talking about a flyer here or a storefront sign there, but there is definitely a point where ads don't benefit us anymore. Can I buy the sky and project ads on it? Can I buy the ocean and project ads on it?
There is one trait common to nearly everyone participating in the computing industry: overweening pride in one's own intelligence.
People in the industry like to reduce 'intelligence' to a single dimension. You can see this phenomenon directly in the current AI wave in which "intelligence" has been reduced to "does well at 'knowledge worker' tasks and passes arbitrary benchmarks we have defined".
Compared to the population at large, how does their definition do? Compared to philosophers and neuroscientists, it is lacking, no doubt. But that's the top 1% of the population at being able to define intelligence. So where does this view of intelligence rank in a more global sense? It seems better than those who just go with an "I know it when I see it" gut check (especially given how often that gut check now lets newer models pass as long as it doesn't know it's an AI model). Or the "humans, because humans are clearly better" view that assigns mythical status to the human brain. Is it in the top 5%? Top 2%?
For a group to come up with a good enough definition that still ranks among the top and which is suited for the specific tasks at hand, seems like a show of intelligence. It isn't perfect, but to what extent is that avoiding premature optimization? Once the definition has issues, it'll be refined more. No need to waste time refining it if we never build tools that hit the limits of the current definition.
I came here to say exactly this. I can't tell if it's because I'm not a traditional "tech" person, but 90% of the best minds I've seen are nowhere near tech in general.
Having a brilliant mind, going to an "elite" school (because someone told you it's elite), joining a FAANG company, building software that you deep down know is killing society, but doing it because the "problems are fun and money is great" is antithetical to what a "best mind of a generation" would truly be.
It's wasted talent with few redeeming qualities for society and a lack of innovation/creativity around using your talents to improve the world.
A "best mind of the generation" would find a way to be successful without riding such a lazy conveyor belt of life.
Agree - but don't think there is a "best mind(s)". We're creatures of repetition and thus specialization, and so our minds can get really good at very specific things, but ultimately we're all dumb apes trying to survive as best we can.
Thank you. They act as if the “best minds” need not read or reason beyond logic and math. Having a “best mind” requires a lifelong dedication to understanding other people, ideas, and history above all else.
Why is being empathetic a trait of a "great mind"? Wouldn't it also be possible that "great minds" don't consider things like empathy useful? Looking back at human history, it's usually people who aren't empathetic who end up being successful. At least in a way that people consider "successful". Humanity has long outlived the usefulness of empathy.
Because we are inherently social creatures. It’s arguably what allowed us to move to the top of the animal hierarchy.
Consider this: if you were to ask a parent “would you rather your child grow up to be wildly intelligent but have sociopathy to the point of being utterly alone, or frankly average (or even dull) intellectually but have a rich social life with meaningful relationships, which would you choose?” I think you’d be hard pressed to find someone willing to take the former.
I've come up with a set of rules that describe our reactions to technologies:
1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
2. Anything that's invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
3. Anything invented after you're thirty-five is against the natural order of things.
The younger generations do seem to be embracing AI more, but mainly because it can do their homework for them without requiring them to learn anything. For now at least, until curriculums have time to adjust to this new reality.
I initially picked up programming because I wanted to create things (well, initially break things, but moved on from that), and the programming was one way of doing that. I only learned how to structure my programs, because they became hard to change. I only learned testing and refactoring, because I noticed I was faster when the code was better and more tested, even if the upfront cost was slightly higher.
If I was 14-15 around this time, when I first picked up programming, but had an LLM on my side, I'm not sure what the outcome would be, to be honest. I'd use them, that's for sure, but once I got a working application out of them, would I be curious enough to understand as much as I understand now, if it wasn't required? Or would I have been able to learn even more and faster, since I wouldn't have been all alone banging my head against some trivial problem for weeks?
I don't know if it's "embracing" it, it's just a fact of life.
I remember an interview in which Marc Andreessen spoke about introducing his 8-year-old to ChatGPT. He described the moment as monumental, likening it to "bringing fire down from the mountains." However, his son was unimpressed by the technology, responding, "It's a computer. Of course you ask it questions and it gives you answers. What else is it for?"
I am old. But when I was young I could not comprehend why they wouldn't let us use calculators for stupid calculation tasks that humans have a hard time computing.
It's creating devastating effects in higher education here. I'm a bit older but did a masters after working a few years, and I've now decided to quit because most, if not all, students just upload the sheets to an LLM and copy the output. Group projects used to be really intense and interesting here; now my partners in the group projects ask me to explain their code to them. It's not an Ivy League university, but it used to be that I had a lot of fun working hard with other students on the projects, and we learned a lot doing it. That's completely gone. It's 100% transactional: how can I get through this as fast as possible? As a result, people fail the exams at unseen rates, like 50-80% in classes that can be passed by studying for a few days and doing the exercises yourself.
I've been suffering from quite a few ADHD symptoms for the past 20 years. I already got a diagnosis and I can survive, even if it's a shitty thing to have, but it feels like now everyone around me shares the same fate, and people seem to have forgotten how to work or study, or worse, never learned it. I used to be an outlier, sharing my fate with 2 or 3 other people in the class back during the bachelor's; we failed in spectacular ways in some areas while outshining everyone else in others, but it was an honest struggle. I'm okay with that; I'm not made to be a researcher or to write a PhD in computer science or math. But I can work in my area of expertise. However, what is happening with all these graduates?
Is this the fault of AI? Not really, but society isn't prepared for what is happening now. The people grading the exercises tell me it's impossible to prove LLM usage and that 90% of the submissions are just ChatGPT. Funnily enough, this was a machine learning 101 class.
Another thing I've noticed, often when looking at other students' LLM prompts and how they apply the results, is that these don't really help them learn and improve. You are stuck at your level of knowledge, and so are your prompts, the quality of the questions you ask, and the way you handle the answers, which produces very weird effects. You talk with your group member about loading some binary-serialized arrays for a computer vision project and using numpy to do some calculations; next meeting, you get code that does something, but it uses another dataset and completely different code, runs 100x slower, and solves a slightly different problem. All you get is a shrug. I'm better off watching YouTube or working than staying in university. It's not the teachers' fault, but I came back for the human interaction and because I don't want to learn alone, and that's almost gone here.
All of this, even if it sucks, would be somehow okay, but the thing I'm scared of most lately is the blatant dishonesty and lying I've been seeing from other students about their usage of AI. It's creating a kind of person who only pretends to understand what they're doing but reliably fails to actually understand what the LLM tells them. I'm not made to deal with this, and I'm getting angry. Tell me you've used an LLM and you're not sure about the results; we can talk about it, work through it, and improve upon it. Then it's actually a great thing to have LLMs. But I'm not seeing that.
I received my PhD in Computer Science focused on NLP and creative text generation last year and I think the hype around LLMs is ridiculous (academics are no better than industry on chasing hype). They're trained to predict the next token given a context, and that's exactly what they're good at.
A transistor is just a way of connecting some wires, and that's exactly what it's good for. It's reducing a phenomenon into some core essence and pretending like there's not a bigger picture.
Doesn’t that feel a little bit like saying that the hype around transistor-based logic gates is ridiculous because they’re designed to execute Boolean logic, and that’s exactly what they’re good at? The simple mechanism isn’t what’s exciting. The exciting part is composing that into a symphony of functionality, running fast and cheap, to better our lives.
I was in my 20s when crypto was "it" and I was definitely in the last group about it, so it's definitely not just about age, even though there's probably some correlation.
> "is new and exciting and revolutionary and you can probably get a career in it."
Does this not explain why you just got your PhD in this? ("This" being broad, but "NLP and creative text generation" sounds like it's in the same ballpark as LLMs.)
Thanks. Sometimes I feel like I'm going insane attempting to reason with people who think the opposite: that these are oracles imbued with human-level intellect and creativity.
Now, sure, these models can be impressive, but it's a warped lens on humanity's own impressive (selected) corpora.
I'm in my mid-forties and I think the LLM revolution is amazing.
It reminds me of the dotcom era in many ways: a genuinely transformative technology which is currently no more than maybe 20% of the way into realising its potential; a technology for which expectations have been hyped up to maybe 200% of potential; and a technology around which a stockmarket bubble has formed.
I'll leave the rest of the LLM story to the reader's imagination, but to this slightly fragmented and ossified mind it's extremely obvious what happens next, and then what happens after that, and then after that (which is when we get to the really good bit). So no, I'm not bored, and I'm not tired. I'm as happy to be working in technology now as when I was a younger man. Happier in some ways, even.
I'm a big fan of Douglas Adams, but there is a reason he was (best known as) a comedy writer and not, for example, a sociologist. Trotting this out adds nothing to the conversation and just comes across as vaguely ageist.
Call it ageist, but this aligns with the conversation about "it" at my job. The cutoff is around 42, but there is a significant split by age group of engineers on the value of "it".
I found the parent comment humorous, as it cites a lighthearted quote from Douglas Adams. It is relevant to the conversation in a similar way an xkcd is, when called relevant.
This can be taken as an instance of the Shifting Baseline phenomenon [1]. The fact that we can only perceive certain changes over large timescales doesn't mean we can safely ignore them. It's harmful to ignore experienced perspectives.
I do think this is true. I'm 46 and I find myself wondering when things are going to "return to normal". But I can't really define what that is besides saying "2019". I'm not even sure what I'm referring to other than I hate short form video. I don't know how I feel about AI. It does seem like something that has a lot of promise though if we can figure out the context issues.
This was written at a time when people had lived through inventions like the internet, personal computers, refrigerators, microwave ovens, jet liners, vaccines, television, etc. While all of those inventions had negative externalities, their primary function overall improved people's lives.
Well, if you have ADHD, you are always fifteen. Novelty-seeking is built in, part of physiology, cannot be ignored. I am 42, and LLMs still provide a lot of excitement to me.
As much as I like Douglas Adams' work, I think that quote describes what, for lack of a better term, and in no way trying to be derogatory, I'd call a "normal person"'s reaction to technologies.
I mean, obviously the age boundaries may change a bit, but otherwise he's spot on. Also, I'd say it's not just technology, as I've seen a similar attitude towards other things (e.g., how a dress code is not that common any more, or how walking up to a store or even going to a hospital and being greeted by someone with tattoos and green hair "is against the natural order of things").
And to stress again that I'm not being derogatory, I've got close people who have those reactions and I love them even if I disagree with them.
But for people not in that "normal" group my experience is nothing like that.
I've seen:
- People being amazed at how exciting something that existed when they were born is, to the point of building emulators and even physical replicas of it.
- People way after 35 who go deep in on very new things that drastically change the way they work (the example I'm thinking of here is the old musicians I know who've embraced modelers and DAWs, things that only got good enough to be used by professionals in, what, the last 20 years?)
- In that same vein though, I know young kids who use portastudios and hate DAWs. Some release cassettes too. One has to guess their target audience never wasted part of their life untangling tape and rewinding it with a ballpoint pen...
Still, I think your last line is most likely spot on as far as the author goes. Probably going through some mid-life crisis (lest this come across as ageist, I'm way past half my country's life expectancy).
Also, "Every pub conversation winding up talking about it."
That's just the author going to the wrong pubs! Like the Jonathan Richman song says:
"Well the first bar things were alright
But in this bar, things were Friday night."
> Someone said something cool once hence it's valid ad vitam aeternam and can be used as an argument in a discussion.
Mass automatised eugenics robots? Well if you don't like it you must be a boomer.
Brain implants that are controlled by your employer and can literally kill you at the push of a button if you don't follow the rules? What? You don't like it? You dumb luddite.
Between that and the "TeChNoLoGy iS JuSt a ToOl, a HaMmEr Is NeItHeR gOoD nOr BaD" people...
How is the moral dilemma of employer-controlled brain implants or eugenics equal to AI? The reason the quote can be applied is because it is a genuinely useful technology to lots of people. That's not the case for eugenics robots.
Did you grow up watching Data (Star Trek), C3PO (Star Wars), KITT (Knight Rider) thinking "Who comes up with these violent sadistic ideas"?
Well, I don't know who in the world Douglas is, probably some geriatric weirdo, but he is completely wrong.
I'm in the second group and your comment just feels like ragebait. Sure, there are enough "hip and trendy" teenagers-to-thirty-somethings who like influence more than technology and are on their "vibe code" grind and, idk, fucking their AI every night with their Claude-wrapped fleshlight, but I for one am not one of them.
Remember, being terminally online is a choice. There's nothing to be bored of if you don't choose to be constantly confronted by it. The current thing is only the current thing if you choose to surround yourself with people who deeply care about the current thing.
It's hard to take refuge from it when you are working in the tech industry, I hear something about "let's try to use AI for this" at least twice a week for the past year at my work.
I do use LLMs for some specific tasks, they can be quite good at some stuff but the general hype of it by non-technical folks trying to fit it into every single use-case under the sun is absolutely tiring... Having to explain for the n-th time why what we are trying to do is not a good fit for AI™ is exhausting, not because I have to explain it again but because I know I will have to do it again next week, at least another couple of times.
AI is being viewed in this hype as almost literal magic, it can do anything, we just have to wish for AI to do it (whatever the fuck AI means by now, it's just an umbrella for magical thinking).
Yes, but people (like this writer) want a better community. A person can abstain from the internet entirely, but they still have to live in a terminally online world.
This is how I've felt about politics as of late. It's Logan Paul-KSI tier nonsense, but made worse by the fact that I can ignore Logan Paul and influencers. I can't ignore it when my government is run by an influencer.
Normally I would agree that you can just choose not to engage with the [current thing], but AI is so pervasive that you will be confronted by the consequences of this technology whether you like it or not. These annoying hype cycles don't usually raise the internet's noise floor permanently, or DDoS random sites while trying to strip-mine their data, or break core assumptions about being able to trust what you see and hear.
My software working group has spent much of the quarter discussing whether or not it’s made us more productive. They still can’t decide, but I bet we’ve racked up 1000 coder-hours debating it.
I don’t use it because our products have the potential to harm other people and I’m not personally comfortable assuming that risk. Nobody else seems particularly moved by that argument, however.
This is insane. We have created the greatest tool in human history and people are complaining. I can use it to help me code, fix modeling issues as I learn CAD, help me troubleshoot the issues in my two-stroke leafblower engine and can consistently walk me through complex leetcode algorithms. It literally knows everything and people still complain.
It isn’t even close to being the greatest tool in human history. This type of misunderstanding and hyperbole is exactly why people are tired/bored/frustrated of it.
The uncomfortable truth is that AI is the world’s greatest con man. The tools and hype around them have created an environment where AI is incredibly effective at fooling people into thinking it is knowledgeable and helpful, even when it isn’t. And the people it is fooling aren’t knowledgeable enough in the topics being described to realize they’re being conned, and even when they realize they’ve been conned, they’re too proud to admit it.
This is exactly why you see people that are deeply knowledgeable in certain areas pointing out that AI is fallible, meanwhile you have people like CEOs that lack the actual technical depth in topics praising AI. They know just enough to think they know what “good” looks like, but not enough to realize when the “good” output is just lipstick on a pig.
What is the greatest tool in human history in your opinion?
I think it's too early to call whether AI is the answer to that question, but I think it could be. Yes, LLMs are terrible in all kinds of ways, but there's clearly something there that's of great value. I use it all day every day as a staff-level engineer, and it's making me much better and faster. I can see glimmers of intelligence there, and if we're on a road that delivers human-level intelligence in the next decade, it's difficult to see what else would qualify as the greatest tool humanity has ever invented.
It’s not hype when it’s released and used for concrete tasks. Some are hyping future potential sure. But GP is hyped about how he can use it NOW. Which I agree is very cool.
The human still needs to think, of course. But, I can get to my answer or my primary source using a tool faster than a typical search engine. That's a super power, when used right!
People want to remain valuable and this tool takes that away. As long as you still find meaningful ways to contribute, all is good. But this says nothing about all the skills mastered that have been rendered effectively useless. And in time, as this tool gets better, it could rob you of the agency to change your environment.
Maybe the tool knows nothing.
But it allows me to learn niche things often much faster than via a web browser. So it has value for me.
I think there are a lot of dangers and problems with it, and frankly I'd probably be happier if it had never been invented. But even then I can still see the value it has.
A tool that constantly generates incorrect information, lacks any real awareness or internal state, and doesn't even recognize its own mistakes, even when you explicitly point them out, is, frankly, pretty useless.
Ever had this conversation with ChatGPT?
- ChatGPT: Here's my solution!
- You: This is wrong, you need to do X.
- ChatGPT: You're right! My solution was wrong because [repeats what you said]. Here’s my revised answer!
- You: This is still wrong. I said do X.
- ChatGPT: Understood! This clarifies: [still gets it wrong].
Or worse, you can trick it:
- ChatGPT: X + Y = Z (which is actually correct)
- You: No, X + Y = Q (which is false)
- ChatGPT: You're right, X + Y = Q is correct because...
I guess it's useful for generating boilerplate code or text, but even then, it often makes mistakes.
This. Precise text auto-completer. Without reasoning or cognitive processes whatsoever, just a very marketable illusion of it. Despite the lies, a great tool.
As with any tool, it takes knowledge and responsibility. Just like the Unix chainsaw.
So would the greatest tool in human history in your mind be something that is used to plagiarise most content in the world and then output correct-30%-of-the-time slop? Or is there another definition you would use?
I'm struggling to think how this could even be in the top 10 tools in human history.
As a counterpoint, if I were to be teleported naked onto an abandoned island 10000 years ago and could bring one "tool" with me, a solar powered terminal with an LLM would be my #1 pick. An able-bodied and resourceful individual equipped with an LLM could accomplish far far more than with any other tool I can think of.
The Internet was the greatest tool in human history and it has led to all sorts of issues. Misinformation at amazing scale that has undone lots of the social progress that was made in the 20th century and 2000s. It has driven the greatest wealth disparities ever seen. It has become a harmful addiction for millions of people.
That's not to say the Internet hasn't caused good things to happen, but to ignore the bad things is counterproductive. Maybe it's ok to slow down and take a step back to make sure we're not doing more bad than good.
And it's cheap. Imagine I told you that you could have direct access to every PhD in the world and they would respond to all of your questions instantly... for $20/mo. Mind-blowing stuff and people still complain.
It’s not actually cheap, just subsidized. Becoming reliant on it now virtually guarantees you will have a tough decision to make later when profitability is actually important.
AI is my full time job and I generally agree. Love answering questions about the nitty gritty specifics of how it _works_, bored to tears of “do you think it will”
Recently I had a convo starting with: "Maybe we can use AI to infer whether we are dealing with an experiment or a control based on the metadata of these public studies."
These data are tables; people call "controls" anything from "control" to "ctrl" to "ctr" to "t0", either in the file name, a random column, etc. It worked well and I'm glad we tried it. In time I think we will derive value from deciding to use it. I'm glad nobody tuned out.
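For comparison, the rule-based baseline for this kind of label matching is easy to sketch. The token list below only covers the variants mentioned above; real metadata has more spellings, which is exactly where an LLM can earn its keep. A rough Python sketch:

```python
import re

# Known "control" spellings from the studies described above; real
# metadata will contain more variants than this short list.
CONTROL_TOKENS = {"control", "ctrl", "ctr", "t0"}

def looks_like_control(text: str) -> bool:
    """Split metadata text on non-alphanumeric characters and check
    whether any resulting token is a known control label."""
    tokens = re.split(r"[^a-z0-9]+", text.lower())
    return any(t in CONTROL_TOKENS for t in tokens)

def classify_sample(filename: str, columns: list[str]) -> str:
    """Label a sample 'control' if any metadata field matches,
    otherwise 'experiment'."""
    if any(looks_like_control(f) for f in [filename, *columns]):
        return "control"
    return "experiment"
```

Anything these rules miss (free-text descriptions, ambiguous abbreviations) is the part worth handing to a model.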
I've been doing a thought experiment on what it would take to refuse to ingest any non-public-domain or Creative Commons-licensed content. If one wanted to opt out of commercial entertainment, how hard would that be?
This gets complicated pretty quickly, because so much IP is implicitly granted, and poorly labeled.
Hyped as it is, it is important to be here to discuss its uses, misuses and implications. Some of it is fascinating, and other parts are fascinatingly bad.
I understand the fatigue with it. But whether it is used right (or at all) is a conversation worth having.
Here's some of what I think is my personal best advice:
Learn to live in the gray areas. Don't be dogmatic. The world isn't black and white. Take some parts of the black and white. And, don't be afraid to change your mind about some things.
This may sound obvious to some of you, and sure, in theory this is simple. But in practice? Definitely not, at least in my experience. It requires a change in mindset and worldview, which generally becomes harder as you age ("because you want to conserve the way of life you enjoy").
The thing is, engaging with a poem only as a literal screed on the general topic is a very black-and-white way to engage with it. The author is negative on LLMs, sure, but I'm sure the feeling this piece evokes clicks with many people, including people who use LLMs as power users (like myself). I don't have to fully or always agree with it; it's something that should be said. And there are times when I want to take off my technological wizard hat, put on my simple humanity hat, and enjoy a poem like this. And then sigh, double-check my impulse to look at my phone, double-check my impulse to talk about money-making, sex-pursuing schemes, look at my friend in the bar, realize they won't be around forever, and say, genuinely, 'how's it going, bud'.
I’m tired of that line. I remember first seeing it on “the best minds of our generation being employed to sell you ads”. Making a computer go brrr doesn’t qualify anyone for a “best mind”.
I’d hope a “best mind” would be, above all, empathetic. Concerned about the well being of their fellow humans. Philosophical about the state of the world. Patient. Curious. Wise and not just smart.
That we keep putting greedy assholes on a “best minds” pedestal due to their ability to exploit others for personal profit is part of the problem.
I feel like this is offensive on more than one level. The waste of our best minds... sure. Fine. But - "to sell you ads."
Marketing has the dubious distinction of being one area of human endeavor where technological advances serve mostly to make your life worse, your mind less focused, and your wallet emptier.
The spherical-cow "ideal" 100% efficient perfect marketing campaign/tactic literally hypnotizes you into dropping all your money on an arbitrary good or service. It is isomorphic to somebody mugging you. What does it look like if we achieve 10% efficiency? 1%? How is this infringement on your agency and financial well-being a positive social good? What if we could achieve profitable returns even by flooding the zone at 0.001%? How many ads do you want to be subjected to per thing that somebody in your neighborhood buys?
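The arithmetic behind that last question is just the reciprocal of the conversion rate, sketched here with the made-up efficiencies from the paragraph above:

```python
def ads_per_purchase(conversion_rate: float) -> float:
    """Average number of ad impressions needed to drive one purchase,
    assuming each impression converts independently at the given rate."""
    return 1.0 / conversion_rate

# Rounded, for the efficiencies asked about above:
#   10%     -> about 10 ads per sale
#   1%      -> about 100 ads per sale
#   0.001%  -> about 100,000 ads per sale
```

So "flooding the zone at 0.001%" means roughly a hundred thousand impressions inflicted on the neighborhood per thing somebody buys.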
I like the quote and despise advertising of any kind for mostly the same reason, but you could apply this logic to any kind of business and it starts to fall apart.
> The spherical-cow "ideal" 100% efficient perfect pharmaceutical is physically addictive. It is isomorphic to narcotics trafficking.
> The spherical-cow "ideal" 100% efficient perfect medical system keeps you sick.
> The spherical-cow "ideal" 100% efficient perfect food product induces constant cravings.
There are arguments to be made that all of these things are true, but what we’re talking about is not essential to the activities themselves but the profit motive that underlies them. Profit motive in and of itself is not sufficient to cause this kind of behavior; it requires a disregard for the consequences that your actions will have upon others. A lot of marketers fall into this category, but I’m not convinced that marketing in and of itself can be reduced to this spherical cow in a vacuum.
(Or perhaps computing.)
https://www.poetryfoundation.org/poems/43290/the-second-comi...
But I've encountered a lot of stupid (for lack of a better word) people in IT who were convinced they were good at _everything_ just because they grokked algorithms and data structures. Not sure it's a phenomenon unique to IT, but what DOGE is doing is exactly what I mean.
You can regularly find it on this very site :)
It's the other way around. It's not just, or primarily, about run-of-the-mill software devs. It's also about the would-be top mathematicians and physicists and psychologists and others, best minds in various domains, who in a better world would be busy solving real problems but, due to a quirk of the economy, end up ruining the lives of other people, at scale, because adtech pays well while almost all useful work pays a pittance.
I remember this quote not as judging or categorizing people by smartness, but as lamenting a world which mismanages humanity's potential so badly, by literally directing our best problem solvers to work on creating problems for everyone.
Fwiw I kinda agree with both of you. Don't really know how to square it
While job searching, I have tried to use techs like Sketches in my filters but it mostly draws up a blank. Would love to work on genuinely interesting stuff like that again.
It honestly did leave a bad taste in my mouth whenever I thought what the end goal of it all was.
I'm not sure what Ginsberg meant when he used the term but I imagine it wasn't the same type of mind.
https://www.poetryfoundation.org/poems/49303/howl
> Only once we’ve stripped away the mythology and found the man can we then ask that key question: was Alexander truly great and if so, what does that say not about Alexander, but about our own conceptions of greatness?
"Great" and "best" are both just words, words whose definition is defined by us who use them.
[0] https://acoup.blog/2024/05/17/collections-on-the-reign-of-al...
Ultimately we get what we incentivize and reward as a society. Recent events are a painful reminder of that.
I don't think author means "best minds" literally. I think they mean it ironically, more like "those who are most eager to perform the task assigned to them".
But maybe I'm saying the obvious. Irony is admittedly hard to translate into writing.
I'm tired of this line. We are animals first and foremost. Complicated animals, but animals nonetheless. We will never consistently be this. The best of us will try but most of us will animalistically react to the incentives in front of us in selfish ways.
We are better off thinking of systems and the incentives they create than hopelessly waiting for us to become not animals. Aligning good goals with personal profit is the name of the game.
It's an old recording that the main character listens to in the end cutscene while visiting Elisabet's grave, and (spoilers) the main character was created as a clone of Elisabet. It's very hard-hitting after the whole experience.
> GAIA: Query: What did she say?
Elisabet Sobeck: She said I had to care. She said, "Elisabet, being smart will count for nothing if you don't make the world better. You have to use your smarts to count for something, to serve life, not death."
GAIA: You often tell stories of your mother. But you are childless.
Elisabet Sobeck: I never had time. I guess it was for the best.
GAIA: If you had had a child, Elisabet, what would you have wished for him or her?
Elisabet Sobeck: I guess... I would have wanted her to be... curious. And willful - unstoppable, even... but with enough compassion to... heal the world... just a little bit.
A person may be all of those things, but they still need to pay bills and eat. That requires a job, and jobs depend on the bourgeoisie, which is none of those things.
The more subtle point is that many empathetic people have bullshit jobs: they too work in service of it simply for their livelihood.
Deleted Comment
Best, to me, means people who are at the top of their specialty, whether it be mathematics, astrophysics, rocketry, economics, business, politics, pedagogy, archeology, etc. People with unwavering dedication to the pursuit of knowledge that advances their specialty. It could be the marketing most of us loathe, but it can also be any of the above and more, and only a few of those pursuits even tangentially imply empathy.
That is: to fan the fire that is burning the world so that a select few can get more comfortable.
I'm not talking about a flyer here or a store face there, but there is definitely a point where ads don't benefit us anymore. Can I buy the sky and project ads on it? Can I buy the ocean and project ads on it?
People in the industry like to reduce 'intelligence' to a single dimension. You can see this phenomenon directly in the current AI wave in which "intelligence" has been reduced to "does well at 'knowledge worker' tasks and passes arbitrary benchmarks we have defined".
For a group to come up with a good enough definition that still ranks among the top and which is suited for the specific tasks at hand, seems like a show of intelligence. It isn't perfect, but to what extent is that avoiding premature optimization? Once the definition has issues, it'll be refined more. No need to waste time refining it if we never build tools that hit the limits of the current definition.
(Not that I'd automatically accept Ginsberg's evaluation of best minds.)
Having a brilliant mind, going to an "elite" school (because someone told you it's elite), joining a FAANG company, building software that you deep down know is killing society, but doing it because the "problems are fun and money is great" is antithetical to what a "best mind of a generation" would truly be.
It's wasted talent with few redeeming qualities for society and a lack of innovation/creativity around using your talents to improve the world.
A "best mind of the generation" would find a way to be successful without riding such a lazy conveyor belt of life.
Deleted Comment
Deleted Comment
Deleted Comment
Dead Comment
Dead Comment
There seems to be no logical connection from "shitty people are successful" to "empathy is not useful to society."
And """success""" is not a value. Taken in those ways, it is the mark of the psychopath.
Of course they did, because it is easier to achieve goals if you cut corners. In a gaming framework, it amounts to cheating.
Deleted Comment
Consider this: if you were to ask a parent “would you rather your child grow up to be wildly intelligent but have sociopathy to the point of being utterly alone, or frankly average (or even dull) intellectually but have a rich social life with meaningful relationships, which would you choose?” I think you’d be hard pressed to find someone willing to take the former.
Let me guess your age.
If I was 14-15 around this time, when I first picked up programming, but had an LLM on my side, I'm not sure what the outcome would be, to be honest. I'd use them, that's for sure, but once I got a working application out of them, would I be curious enough to understand as much as I understand now, if it wasn't required? Or would I have been able to learn even more and faster, since I wouldn't have been all alone banging my head against some trivial problem for weeks?
I remember in an interview with Marc Andreessen he spoke about introducing his 8-year-old to ChatGPT. He described the moment as monumental, likening it to "bringing fire down from the mountains." However, his son was unimpressed by the technology, responding, "It's a computer. Of course you ask it questions and it gives you answers. What else is it for?"
I'm suffering from quite a few ADHD symptoms, and have been for the past 20 years. I already got a diagnosis, and I can survive even if it's a shitty thing to have, but it feels like now everyone around me shares the same fate, and people seem to have forgotten how to work or study, or worse, never learned it. I used to be an outlier, sharing my fate with 2 or 3 other people in the class back when doing my bachelor's. We failed in spectacular ways in some areas while outshining everyone else in others, but it was an honest struggle. I'm okay with that; I'm not made to be a researcher or to write a PhD in computer science or math. But I can work in my area of expertise. However, what is happening with all these graduates?
Is this the fault of AI? Not really, but society isn't prepared for what is happening now. People correcting the exercises tell me it's impossible to prove LLM usage and 90% of the results are just ChatGPT. Funnily enough, this was a machine learning 101 class.
Another thing I've noticed: often, when looking at other students' LLM prompts and how they apply the results, the prompts don't really help them learn and improve. You are stuck at your level of knowledge, and so are your prompts, the quality of the questions you ask, and the way you handle the answers, which results in very weird effects. So you talk with your group member about loading some binary serialized arrays for a computer vision project and using numpy to do some calculations. Next meeting you have some code that does something, but it uses another dataset and completely different code, runs 100x slower, and solves a slightly different problem. All you get is a shrug. I'm better off watching YouTube or working than staying in university. It's not the fault of the teachers, but I came back for the human interaction and because I don't want to learn alone. That is almost gone here.
All of this, even if it sucks, would be somehow okay, but the thing I'm scared of the most lately is the blatant dishonesty and lying I've been seeing in other students about their usage of AI. It's creating a kind of person that only pretends to understand what they're doing but reliably fails to actually understand what LLMs tell them. I'm not made to deal with this and I'm getting angry. Tell me you've used an LLM and you're not sure about the results; we can talk about it, work through it, and improve upon it. Then it's actually a great thing to have LLMs. But I'm not seeing it.
This will be interesting - not in the good way.
I received my PhD in Computer Science focused on NLP and creative text generation last year and I think the hype around LLMs is ridiculous (academics are no better than industry on chasing hype). They're trained to predict the next token given a context, and that's exactly what they're good at.
How old do you think I am?
Does this not explain why you just got your PhD in this? ("This" being broad, but "NLP and creative text generation" sounds like it's in the same ballpark as LLMs.)
Now, sure, these models can be impressive, but it's a warped lens of humanity's own impressive (selected) corpora.
It reminds me of the dotcom era in many ways: a genuinely transformative technology which is currently no more than maybe 20% of the way into realising its potential; a technology for which expectations have been hyped up to maybe 200% of potential; and a technology around which a stockmarket bubble has formed.
I'll leave the rest of the LLM story to the reader's imagination, but to see this slightly fragmented and ossified mind it's extremely obvious what happens next, and then what happens after that, and then after that (which is when we get to the really good bit). So no, I'm not bored, and I'm not tired. I'm as happy to be working in technology now as when I was a younger man. Happier in some ways, even.
Dead Comment
This is not about age really: "NFTs"/"web3.0"/"Blockchain technologies" for instance were hyped by every age group.
Terry Pratchett
[1]: https://en.wikipedia.org/wiki/Shifting_baseline
Deleted Comment
What's my age?
Generative AI is all negative externalities.
I mean obviously the age boundaries may change a bit but otherwise he's spot on. Also, I'd say it's not just technology, as I've seen a similar attitude towards other things (e.g., how a dress code is not that common any more, or how walking up to a store or even going to a hospital and being greeted by someone with tattoos and green hair "is against the natural order of things").
And to stress again that I'm not being derogatory, I've got close people who have those reactions and I love them even if I disagree with them.
But for people not in that "normal" group my experience is nothing like that. I've seen:
- People being amazed at how exciting something that existed when they were born is, to the point of building emulators and even physical replicas of it.
- People way past 35 who go deep in on very new things that drastically change the way they work (the example I'm thinking of here is the old musicians I know who've embraced modelers and DAWs, things that only got good enough to be used by professionals in, what, the last 20 years?).
- In that same line, though, I know young kids who use portastudios and hate DAWs. Some release cassettes too. One has to guess their target audience never wasted life untangling tape and rewinding with a ball pen...
Still, I think your last line is most likely spot on as far as the author goes. Probably going through some mid-life crisis (lest this come across as ageist, I'm way past half my country's life expectancy).
Also, "Every pub conversation winding up talking about it."
That's just the author going to the wrong pubs! Like the Jonathan Richman song says:
"Well the first bar things were alright But in this bar, things were Friday night."
Mass automated eugenics robots? Well, if you don't like it you must be a boomer.
Brain implants that are controlled by your employer and can literally kill you at the push of a button if you don't follow the rules? What? You don't like it? You dumb Luddite.
Between that and the "TeChNoLoGy iS JuSt a ToOl, a HaMmEr Is NeItHeR gOoD nOr BaD" people...
Did you grow up watching Data (Star Trek), C3PO (Star Wars), KITT (Knight Rider) thinking "Who comes up with these violent sadistic ideas"?
I hate AI.
I'm in the second group and your comment just feels like ragebait. Sure, there are enough "hip and trendy" teenagers-to-thirty-somethings who like influence more than technology and are on their "vibe code" grind and, idk, fucking their AI every night with their Claude-wrapped fleshlight, but I for one am not one of them.
I do use LLMs for some specific tasks, they can be quite good at some stuff but the general hype of it by non-technical folks trying to fit it into every single use-case under the sun is absolutely tiring... Having to explain for the n-th time why what we are trying to do is not a good fit for AI™ is exhausting, not because I have to explain it again but because I know I will have to do it again next week, at least another couple of times.
AI is being viewed in this hype as almost literal magic, it can do anything, we just have to wish for AI to do it (whatever the fuck AI means by now, it's just an umbrella for magical thinking).
I'm tired, and definitely bored.
Deleted Comment
I don’t use it because our products have the potential to harm other people and I’m not personally comfortable assuming that risk. Nobody else seems particularly moved by that argument, however.
It is incapable of knowledge.
I’m bored of it.
Not about the tool, though, but about people (who, granted, have some connection to the tool, even if indirectly).
We've had people for hundreds of thousands of years, so fair to say that they have become quite boring.