ec664 · 3 years ago
tpowell · 3 years ago
Yesterday, I randomly watched his full interview from a month ago with CBS Mornings, and found the discussion much more nuanced than today's headlines. https://www.youtube.com/watch?v=qpoRO378qRY&t=16s

The next video in my recommendations was more dire, but equally interesting: https://www.youtube.com/watch?v=xoVJKj8lcNQ&t=2847s

WinLychee · 3 years ago
Watching that interview, I got the impression that Geoff is a very curious person, driven by his sense of wonder. At the same time I couldn't help but feel that he comes across as very naive or perhaps innocent in his thinking. While he wouldn't personally use his creations for morally gray or evil things, I think it's clear we're already living in a world where ML and AI are in the hands of people with less than pure intentions.
adamwk · 3 years ago
Why is it surprising that a full interview is more nuanced than a headline?
rowls66 · 3 years ago
This 'On The Media' interview from a few months back is also very good: https://www.wnycstudios.org/podcasts/otm/segments/how-neural...
yeahwhatever10 · 3 years ago
I don't understand the "safety" concerns from the example in the second video.
kragen · 3 years ago
don't forget cade metz was the guy who doxed scott alexander
newswasboring · 3 years ago
I can't access this page. Can anyone else? I can open Twitter, but this page just shows a "something went wrong" error page.
CartyBoston · 3 years ago
somebody has a non-disparagement clause
hn_throwaway_99 · 3 years ago
Trying to be diplomatic, but this is such an unnecessarily snarky, useless response. Google obviously did go slow with their rollout of AI, to the point where most of the world criticized them to no end for "being caught flat-footed" on AI (myself included, so mea culpa).

I don't necessarily think they did it "right", and I think the way they set up their "Ethical AI" team was doomed to fail, but at least they did clearly think about the dangers of AI from the start. I can't really say that about any other player.

chongli · 3 years ago
Cade Metz is the same muckraker who forced Scott Alexander to preemptively dox himself. I don’t know Hinton apart from the fact that he’s a famous AI researcher but he has given no indication that he’s untrustworthy.

I’ll take his word over Metz’s any day of the week!

hnarn · 3 years ago
That’s not how a non-disparagement clause works.

It puts restrictions on what you’re allowed to say. It doesn’t require you to correct what other people say.

If your badly thought through assumption was correct, the logical response from him would be to simply say nothing.

AdmiralAsshat · 3 years ago
I've always thought about leaving a little text file buried somewhere on my website that says "Here are all of the things that Future Me really means when he issues a press statement after his product/company/IP is bought by a billion-dollar company."

But then I remember I'm not that important.

ttul · 3 years ago
More like HR said, “Well, there is option A where you leave and are free to do what you wish. And then there is option B (points at bag of cash) where you pretend none of this ever happened…”
nmstoker · 3 years ago
orzig · 3 years ago
Saving a click, because this basically invalidates the NYT headline:

> In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.

d23 · 3 years ago
This seems roughly in line with the article. He left to talk about the dangers.
werlrndkis · 3 years ago
Nah, it’s just circular semantic wank. “Criticize” does not need to be interpreted through negative emotions.

He left Google because he would not be allowed to work there while pooping on the roadmap they’re putting together to counter OpenAI.

STEM-minded folks need to eat their own science; the emotional response to certain language is not evenly distributed. It’s thought policing af to take your reaction to “criticize” as a universal one.

Deleted Comment

dlkf · 3 years ago
Cade Metz is the same hack who tried to smear Scott Alexander. This guy is the personification of journalistic malpractice.
jglamine · 3 years ago
Yeah, I was confused because I felt like the article didn't do a good job of clearly stating Hinton's beliefs - it was meandering around. Felt off.

Then I saw the Cade Metz byline at the end and became instantly sceptical of everything I had just read.

Metz is more interested in pushing a narrative than reporting the truth. He doesn't outright lie, just heavily implies things and frames his articles in a misleading way.

tivert · 3 years ago
> Cade Metz is the same hack who tried to smear Scott Alexander. This guy is the personification of journalistic malpractice.

He didn't "smear" Scott Alexander. That's just the hit-job framing pushed by Alexander's fans, who were mad that he didn't write a puff piece and that they couldn't just make up rules about stuff on their websites (e.g. about using people's self-disclosed real names) and have the rest of the world be obligated to follow them.

alphabetting · 3 years ago
I have no clue, but it could be more a problem of his assignments and the framing from NYT editors. His book on the history of AI was very good.
whimsicalism · 3 years ago
Scott Alexander needs no help in digging his own holes.
neatze · 3 years ago
“The idea that this stuff could actually get smarter than people — a few people believed that,” said Hinton to the NYT. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Calculators are smarter than humans at calculating; what does he mean by that?

JeremyNT · 3 years ago
This quote is the first thing I've seen that really makes me worried.

I don't think of ChatGPT as being "smart" at all, and comparing it to a human seems nonsensical to me. Yet here is a preeminent, Turing Award-winning expert in the field telling me that AI smarter than humans is less (implied: much less) than 30 years away, and quitting his job due to the ramifications.

Version467 · 3 years ago
He is far from the only one.

If you're interested in exploring this further I can really recommend taking a look at some of the papers that explore GPT-4's capabilities. Most prominent among them are the "Sparks of AGI" paper from Microsoft, as well as the technical report from OpenAI. Both of them are obviously to be taken with a grain of salt, but they serve as a pretty good jumping-off point.

There are some pretty good videos on YouTube exploring these papers if you don't want to read them yourself.

Also take a look at the stuff that Rob Miles has published over on Computerphile, as well as his own channel. He's an alignment researcher with a knack for explaining. He covers not just the theoretical dangers, but also real examples of misaligned AI that alignment researchers predicted would occur as capabilities grow.

Also, I think it's important to mention that just a short while ago virtually no one thought that shoving more layers into an LLM would be enough to reach AGI. It's still unclear whether it will get us all the way there, but recent developments have made a lot of AI researchers rethink that possibility, with many of them significantly shortening their own estimates of when and how we will get there. It's very unusual that the people who are better informed and closer to the research are more worried than the rest of the world, and it's worth keeping this in mind as you explore the topic.

maxdoop · 3 years ago
Every single retort of “these machines aren’t smart or intelligent” requires answering the question, “what is intelligence”?

I struggle to see how GPT-4 is not intelligent by any definition that applies to a human.

AndrewKemendo · 3 years ago
I had lunch with Yoshua Bengio at the AGI 2014 conference in Laval, CA. This was just before his talk on pathways to AGI via neural networks.

Everyone at that conference, myself included, assumed we would eventually create smarter-than-human computers and beyond.

So it’s not a new position for people who have been in AI for a long time, though generally it was seen as an outsider position until recently.

There’s a ton of really great work done prior to all of this around these questions and technical approaches - I think my mentor Ben Goertzel was the pioneer here holistically, but others were doing good technical work then too.

e12e · 3 years ago
I think the question about LLMs being AGI or not (or "actually" intelligent or not) is interesting, but also somewhat beside the point.

We have LLMs that can perform "read and respond", we have systems that can interpret images and sound/speech - and we have plugins that can connect generated output to API calls - that feed back in.

Essentially this means that we could already go from "You are an automated home security system. From the front door camera you see someone trying to break in. What do you do?" - to actually building such a system.

Maybe it will just place a 911 call, maybe it will deploy a Taser. Maybe the burglar is just a kid in a Halloween costume.

The point is that just because you can chain a series of AI/autonomous systems today - with the known, gaping holes - you probably shouldn't.

Ed: Crucially the technology is here (in "Lego parts") to construct systems with (for all intents and purposes) real "agency" - that interact both with the real world, and our data (think: purchase a flight based off an email sent to your inbox).

I don't think it really matters if these simulacra embody AGI - as long as they already demonstrate agency. Ed2: Or demonstrate behavior so complex that it is indistinguishable from agency for us.
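To make the "Lego parts" concrete, here is a minimal sketch of such an action loop in Python. call_llm() is a hypothetical stand-in for any chat-completion API, and the action names are made up for illustration; a real system would need far more safeguards than this:

    # Minimal sketch of an LLM-driven "agency" loop.
    # call_llm() is a hypothetical stand-in for any chat-completion API.
    ALLOWED_ACTIONS = {"call_911", "sound_alarm", "do_nothing"}

    def security_agent(camera_description: str) -> str:
        prompt = (
            "You are an automated home security system. "
            "The front door camera shows: " + camera_description + ". "
            "Reply with exactly one of: " + ", ".join(sorted(ALLOWED_ACTIONS)) + "."
        )
        action = call_llm(prompt).strip()
        # The gaping hole: nothing guarantees the model chose sensibly.
        # The "burglar" may be a kid in a Halloween costume.
        if action not in ALLOWED_ACTIONS:
            action = "do_nothing"  # fail closed rather than deploy a Taser
        return action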

ssnistfajen · 3 years ago
By saying "I no longer think that", it's not necessarily that he thinks ChatGPT is smarter than humans. Google Search has been far more capable at indexing and retrieving information than humans for over two decades now. He's talking about AGI no longer being 30-50 years away; instead it may arrive far sooner than society is ready to deal with.
TechBro8615 · 3 years ago
The guy is well past retirement age, so is "quitting his job" evidence of taking an unusually meaningful stance?
drcode · 3 years ago
I think GPT4 can converse on any subject at all as well as a (let's say) 80 IQ human. On some subjects it can converse much better.

That feels fundamentally different than a calculator.

titzer · 3 years ago
I feel like what's lost in this conversation is that ChatGPT is incredibly good at writing English. It basically never makes grammatical mistakes, it doesn't spew gibberish, and for the most part it gives extremely well-structured replies. The replies might be bullshit or hallucinations, but they're not gibberish.

It's kind of breathtaking that we forgot about that being hard.

The goalposts are moving again.

BTW, it has passed many standardized tests under the same circumstances as a human.

skepticATX · 3 years ago
GPT-4 is absolutely more generally knowledgeable than any individual person. Individual humans can still easily beat it when it comes to knowledge of individual subjects.

Let’s not conflate knowledge with intelligence though. GPT-4 simply isn’t intelligent.

staticman2 · 3 years ago
Do you frequently talk to people who you know to have an 80 IQ about a range of subjects?
SanderNL · 3 years ago
I'm curious if you actually ever interacted with IQ 80 humans. They are definitely not on this scale.
byyy · 3 years ago
Of course it does. He knows it. Some people just can't bring themselves to stare at the reality of it all.
mitthrowaway2 · 3 years ago
> Calculators are smarter then humans in calculating, what does he mean by that?

My understanding of what he means by that is a computer that is smarter than humans in everything, or nearly everything.

chrsjxn · 3 years ago
That statement seems like such science fiction that it's kind of baffling an AI expert said it.

What does it even mean for the AI to be smarter than people? I certainly can't see a way for LLMs to generate "smarter" text than what's in their training data.

And even the best case interactions I've seen online still rely on human intelligence to guide the AI to good outcomes instead of bad ones.

Writing is a harder task to automate than calculation, but the calculator example seems pretty apt.

Al-Khwarizmi · 3 years ago
> I certainly can't see a way for LLMs to generate "smarter" text than what's in their training data.

Their training data contains much more knowledge than any single human has ever had, though. If they had linguistic, comprehension, and reasoning abilities equivalent to a human's, but with so much stored knowledge, and considering that they also win in processing speed and never get tired, that would already make them much "smarter" than humans.

Not to mention that LLMs are just the current state of the art. We don't know if there will be another breakthrough which will counter the limitation you are mentioning. We do know that AI breakthroughs are relatively common lately.

ssnistfajen · 3 years ago
It's not just about LLMs. AGI will be the result of many more iterations in this field of research, of which LLMs are a part. Estimates of how long those iterations will take are now being drastically revised down. If AGI is the space shuttle, then LLMs are 19th-century gliders. They may appear vastly different, but the knowledge that created both is connected in many ways. The space shuttle exist(ed) as a culmination of knowledge acquired over many iterations of aviation/rocketry.

Edit: changed metaphor to a more commonly known one

janalsncm · 3 years ago
Totally agreed that words like “smart” and “intelligent” are loaded and poorly defined. Competence is a better term since it implies some sort of metric has been used to compare to humans.

However, even at human levels of competence a tool can be superior by being faster or more scalable than humans.

Deleted Comment

Izkata · 3 years ago
> I certainly can't see a way for LLMs to generate "smarter" text than what's in their training data.

By combining contexts from different fields. People are already using it with non-English languages and it responds in that language with something they couldn't previously find in that language.

gdiamos · 3 years ago
These results are predicted by LLM Scaling Laws and the GPT authors knew it before they started.
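For reference, the canonical fits are from Kaplan et al. (2020), "Scaling Laws for Neural Language Models": held-out test loss falls as a smooth power law in non-embedding parameter count N and dataset size D. Roughly (a sketch; the exact constants depend on the fit):

    L(N) \approx (N_c / N)^{\alpha_N}, \quad \alpha_N \approx 0.076
    L(D) \approx (D_c / D)^{\alpha_D}, \quad \alpha_D \approx 0.095

This is reportedly how the GPT-4 authors could predict final loss from smaller runs before committing to the full training budget.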
hipjiveguy · 3 years ago
where are these laws?
theptip · 3 years ago
Calculators are not smarter than humans. Don’t be obtuse. He means the same thing anyone means when they say something like “Alice is smarter than Bob”.
byyy · 3 years ago
It's quite obvious that these LLMs are approaching and encroaching on human intelligence. It's so strange to see people continuously be in denial. They clearly aren't fully there yet but two things must be noted:

   1. At times and in certain instances LLMs do produce superior output to humans. 

   2. There is a clear trendline of improvement in AI for the past decade, from voice recognition in Alexa to DALL-E to ChatGPT. The logical projection of this trendline points to an inescapable and likely possibility: if AI is not superior now, it will be in the future.
There is a huge irrational denial of the above logical deduction. I think it's because ChatGPT hit us in a way that was too sudden. It's like if I saw a flying saucer and told you about it: your first reaction would be disbelief even if I produced logical evidence for it.

I mean the GP you replied to knows what the guy is talking about, but he just doesn't want to admit it.

renewiltord · 3 years ago
Sibling comment is correct to prompt you to at least try an LLM first. It's unfortunately the equivalent of lmgtfy.com but it's true.
neatze · 3 years ago
What makes you think I did not try? I simply fail to see why/how inconsistent natural-language comprehension in any way equates to human or any other animal behavior, and I don't believe/see (subjectively) that any amount of prompt hacking with massive datasets will build a consistent anticipatory system (planning and some aspect of learning).

As an analogy: the more I look at it, the more it looks like a geocentric model of the solar system.

Mike_12345 · 3 years ago
> Calculators are smarter than humans at calculating; what does he mean by that?

He means AGI.

Dead Comment

mFixman · 3 years ago
> His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”

Isn't this the case already? I expect every post I see on large social media sites from somebody I don't personally know to be non-organic feedback from a social media expert.

People are doomsaying over a scenario that's identical to the present world.

felipeerias · 3 years ago
The difference is a matter of scale. In the not-too-distant future, the digital output of LLMs could dwarf the output of humans while being basically indistinguishable from it.

At that point, social media will probably split into hyper-local services for people who know each other personally, and an enormous amount of AI-powered rabbit holes for unwary (or depressed, lonely, etc.) users to fall into.

Double_a_92 · 3 years ago
So like private group chats and discord servers, and the rest of the internet? ._.
baby · 3 years ago
I think this quote is exactly what people are afraid of with the advances of ML, and I think anyone with a bit of mileage browsing the web should be scared as well. It’s a legitimate downside of the tech. It’ll reach a point where you won’t know if the picture you’re looking at, or the voice you’re listening to, or the book you’re reading, or the video you’re watching, is real or generated by AI.
kevincox · 3 years ago
Yeah, it seems that driving this fact home may in fact be beneficial. Right now a lot of people still assume that everyone on the internet is truthful with good intentions. Making it very clear that this isn't true may be helpful to reset this frame of mind.
newswasboring · 3 years ago
Does anyone else hate the fact that we are actively encouraging a low-trust society? That can't lead to good things...
nologic01 · 3 years ago
> the average person will not be able to know what is true anymore

We barely held things together as a society without AI unleashing cognitive noise at industrial scale.

Somehow we must find ways to re-channel the potential of digital technology for the betterment of society, not its annihilation.

revelio · 3 years ago
Society will be fine; actually, AI will make things much better, just as the internet did. People have been making these kinds of extreme predictions for decades and they have always been wrong. The only people still upset about better communications tech are the people who pine for the days when all that was expected of respectable people was automatically trusting anyone working for the government, a university or a newspaper that claimed to be trustworthy.

What have we got now? ChatGPT is trained to give all sides of the issue and not express strong opinions, which is better than 90% of journalists and academics manage. Their collective freakout about the "dangers" of AI is really just a part of the ongoing freakout over losing control over information flows. It's also just a kind of clickbait, packaged in a form that the credentialed class don't recognize as such. It's in vogue with AI researchers because they tend to be immersed in a culture of purity spirals in which career advancement and prestige come from claiming to be more concerned about the fate of the world than other people.

Meanwhile, OpenAI control their purity spirals, get the work done and ship products. The sky does not fall. That's why they're winning right now.

fatherzine · 3 years ago
"AI will make things much better, just as the Internet did." We must be living in very different worlds. I sometimes wonder if the numbers behind https://en.wikipedia.org/wiki/Disease_of_despair (roughly tripled in 20 years of Internet) are just the first steps of a hockey stick.
shadowgovt · 3 years ago
Whether society (here I'm referring to "Representative democracy with general elections;" YMMV if you're under an authoritarian or totalitarian state where someone is already filtering the truth for you) will be fine will be heavily dependent upon whether two things happen:

1. The public, in general, comes to understand, in an in-their-bones way that they currently do not, that most of what they see online is hogwash. I.e., the bozo bit has to flip all the way to "My neighbor says there's a missing dog on the block... but is that really my neighbor?"

2. Some other mechanism of truth-pedigree that has not yet been invented comes along to allow for communication of the current state of the world to work.

Without (1) we know democracies are easily led by credible, subtle propaganda, and a well-tuned network of hostile actors will drive wedges at the friction points in representative democracies and crack them into warring subcultures.

Without (2) voters will have insufficient tools at their disposal to understand country-scale issues and their ability to effect positive outcomes with their vote will collapse into noise, which is a ripe environment for authoritarians to swoop in and seize power (and a ripe environment for centralized authoritarian states to outmaneuver the representative democracies on the world stage and gain power).

UberFly · 3 years ago
"...freakout about the "dangers" of AI is really just a part of the ongoing freakout over losing control over information flows..."

Not all of the "information flows" you mention are helpful or benevolent. Most will likely be targeted and hyper-focused to manipulate individuals like they are now.

slowmovintarget · 3 years ago
Social media algorithms on "the internet" have caused wars, supported genocides, created extreme societal polarization, have led to dramatically increased suicide rates among teens, especially teen girls, and more.

But I got to share baby pics with my mom.

How will a far noisier information flow help? Generative AI will only help us do what we've been doing in far greater quantity. Just like calculators can only help you get the wrong answer faster when you don't know what you're doing. These tools will help us build societal disasters with far greater speed.

To say it's all going to be much better seems a bit Pollyanna to me.

And for the record, we know for a fact that ChatGPT is specifically constrained to give one particular side of political issues, not "all sides."

AlexandrB · 3 years ago
> What have we got now? ChatGPT is trained to give all sides of the issue and not express strong opinions, which is better than 90% of journalists and academics manage.

I think we're experiencing the "golden age" of AI at the moment. We'll see what kind of monetization OpenAI and others will land on, but I would be shocked if messing with the model's output for commercial gain is not in the cards in the future.

lancesells · 3 years ago
Ending the internet would probably do it. Noise goes way down when you only have a fixed number of news sources and outlets.

We could still have things like maps, messages, etc. that are all very beneficial.

h2odragon · 3 years ago
Yes, there was no ignorance or error before the Internet. Everyone operated with perfect information at all times.
red-iron-pine · 3 years ago
What you propose would require radical changes, practically back to the 1980s, and wouldn't even really free you from anything.

Who cares if there is no internet if your cellphone can track you? If your car runs on connected apps? If your credit card & POS systems are networked? Security cameras and facial recognition are still things.

Just cuz you're not getting spammed via website ads doesn't mean it's not tracking you constantly and jamming subtle things to change your world view. Means their attack surface is smaller; sniping instead of loudspeakers. And if their only option is sniping then they'll get really good at it.

vbezhenar · 3 years ago
I used FIDO over a telephone line. It didn't differ much from the modern Internet other than in scale.

If there are messages, there will be an Internet built on top of them. Unless there are aggressive censors hunting for every sign of "unapproved" communication.

carlosjobim · 3 years ago
Great! Then people could go back to being fed only lies through TV, so we don't have to make the effort of thinking about what is true or not.
Red_Leaves_Flyy · 3 years ago
Without the internet there’s nothing entertaining the millions of people who would otherwise be very incentivized to protest.
flippinburgers · 3 years ago
Who is to say that any news stream will be remotely truthful anymore?

I think we are doomed. It is possible that only horrifically authoritarian societies that already control the narrative will survive this.

tenebrisalietum · 3 years ago
I don't think it will be so bad.

All Internet comment sections, pictures, video, and really anything on electronic screens will be assumed false by default.

Therefore the only use of the Internet and most technology capable of generating audio and video will be entertainment.

I already distrust-by-default most of what is online that isn't hard reference material, even if not AI generated.

ben_w · 3 years ago
Three men make a tiger.

- 龐蔥 (Pang Cong), c. 350 BC

https://en.wikipedia.org/wiki/Three_men_make_a_tiger

amelius · 3 years ago
No, there will be echo-chambers where some content will resonate. This can be partly fake content.
macintux · 3 years ago
The cult of Qanon effectively killed any hope I have that people are rational actors when it comes to consuming online content.
thinkingemote · 3 years ago
There's an argument that people generally do not want the truth and that AI will never be allowed to tell it. An optimist could view this as ensuring AI will be safe forever; a pessimist might see it as AI never being authoritative.

One example of such a truth would be the topic of biological sex; others concern politics, economics, or racism. Imagine releasing an AI that told the actual truth. It's impossible that one will be released by anyone, anywhere.

It's possible to build it but it can't happen.

On the other side of inconvenient or embarrassing truths, some would argue that "truth" itself is part of the machinery of oppression because it destroys and ignores an individual's experiences and feelings.

Without objective truth, AI will always be limited, and therefore it will be tamed and made safe no matter where it comes from or who invents, runs, and releases it.

bbor · 3 years ago
Ok

A) It's not possible to build a machine that knows the absolute truth; that's fundamentally impossible. Induction is impossible, and there are hordes (well... dozens?) of epistemologists concerned with finding and defining the very small corners of knowledge that we _can_ be certain about, such as "a triangle has three sides" or "an orange is an orange".

B) If that angered/interested you, you should look into Standpoint Theory! It's a very interesting discussion of how humans operate with significant bias at all levels of thought, and pretending otherwise is a disservice to science. And this is using "bias" in a very broad sense.

C) Are we allowed to berate/report/etc. ""race realists"" on HN? I know the rules are big on positive interaction, so I hope it's not out of line to say that's some obvious scared-white-man bullshit that has no place in this community.

Lutger · 3 years ago
Between social media, Cambridge Analytica, the climate crisis, the pandemic, (mostly) Russian disinfo, etc., it is already the case that most people have a really hard time knowing what is true.

I don't claim to have much foresight, but an online world where truly and obviously nothing can be trusted might be a good thing. Because when AI generated content looks and feels the same as real content, nothing is to be trusted anymore by anyone. This makes misinfo and disinfo authored by humans even less impactful, because they are parasitic upon true and reliable information.

We will need new devices of trust, which are robust enough to protect against widespread use of generative AI, and as a byproduct disinfo won't have such an easy time to grift on our naivety.
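One conceivable "device of trust" is cryptographic provenance: publishers sign what they produce, and clients treat anything unsigned as untrusted by default. A minimal sketch in Python using the third-party cryptography package (this illustrates the general idea, not any deployed standard):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # A publisher generates a long-term keypair once and distributes the
    # public key out of band (DNS record, key registry, printed masthead).
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    article = b"Geoffrey Hinton leaves Google and warns of AI risks."
    signature = private_key.sign(article)  # shipped alongside the content

    # A reader's client verifies the signature before showing a trust badge.
    try:
        public_key.verify(signature, article)
        print("provenance verified")
    except InvalidSignature:
        print("unsigned or tampered: treat as untrusted")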

nologic01 · 3 years ago
> We will need new devices of trust...

the challenge is that the pace at which existing (imperfect) devices of trust get destroyed (e.g. the demise of ad-financed journalism) is far faster than the rate of new device invention

in fact the only positive example after many decades of "digital innovation" might be Wikipedia

ModernMech · 3 years ago
The problem is, when no one trusts anything, it makes room for men who promise everything, but can deliver nothing. We call them "dictators" and "authoritarians", but others call them "strong men" because they are envied by those who seek power. If you look around the world, you can see authoritarian movements rising, especially here in the USA.
seydor · 3 years ago
The average person never knew; they only heard. In this new world, people have to learn to get out of their apartments.
layer8 · 3 years ago
Yes, the problem isn’t so much that knowledge is diminished, but that trust is diminished.
m3kw9 · 3 years ago
Which is fine; humans will adapt to this info noise rather than going crazy. Hinton is way underestimating human intelligence.
partiallypro · 3 years ago
I think the problem is that the internet created a ton of new jobs, even while taking some away. So far, I can't think of an example of AI creating jobs... only taking them. When you have a lot of newly unemployed people, drowned in debt, unable to know what to believe (AI lies and AI-generated content will become more prominent)... I can see that becoming a massive political problem. It's not quite like robots on an assembly floor; those robots couldn't scale. Now one AI program and API could displace thousands of workers instantly. It's not crazy to be concerned.
dan-g · 3 years ago
seydor · 3 years ago
not an interview
dan-g · 3 years ago
Changed to “piece” - not sure what else to call it. Maybe a profile? But to me that connotes more of a biography or something.

Deleted Comment

Verdex · 3 years ago
Okay, so is this some grammatical style that I'm just unaware of:

> where he has worked for more than decade

I would have expected an "a" or something before decade.

Meanwhile, over at theverge they have:

> employed by Google for more than a decade

Which is what I would have thought would be the grammatically correct form.

Okay, so the overall structure of the article is "man does thing then decides he maybe should not have done the thing". It doesn't really feel like it's adding anything meaningful to the conversation. At the very least theverge has Hinton's twitter response to the nytimes article, which feels like it expands the conversation to: "man regrets choices, but thinks large corporation we're all familiar with is doing okayish". That actually feels like a bit of news.

Over the years, I've been led to believe that the NYTimes is a significant entity when it comes to news. However, I've already seen coverage and discussion of the current AI environment that's 1000x better on HN, Reddit, and YouTube.

renewiltord · 3 years ago
My experience with the NYT (I subscribed to both the NYT and the WSJ at the same time) is that most of their stuff is AI rewrite quality. But they occasionally have centerfold investigative pieces that are very good.

I imagine this is how it is: they have an army of junk journalists churning out content and then a few really good ones who do the tough stuff. It's probably not economical otherwise.

jongjong · 3 years ago
These days the internet is just a handful of corporate projects in a vast sea of spam. I suspect AI will exacerbate that. I have a feeling that eventually, we may figure out what websites to visit from our real-world interactions. Everything we know as the internet today will be seen as junk/spam. Nobody will use search engines for the same reason that nobody reads junk mail.
cubefox · 3 years ago
That's an incredibly unimportant problem compared to what Hinton is worried about.