stolenmerch · 2 years ago
I'm not a fan of this hyper-aggressive line-in-the-sand argumentation about AI that pushes it all precariously close to culture war shenanigans. If you don't like a new technology, that's perfectly fine, and you're entitled to that opinion. Please don't position it so that if I want to use AI I have to defend myself from accusations of exploiting labor and the environment. That is NOT at all clear, settled, or even correct much of the time. I'm open to that conversation and debate, but diatribes like this make it far too black-and-white, with "good" people and "bad" people.
rain_iwakura · 2 years ago
The issue is that without loud declarations like this, the money men will just soldier on, implementing a shittier future.

It's always do something first and ask for forgiveness later. But by the time you ask for it, it's too late and too many eggs have been broken. And somehow you're richer at the end of it all and thus protected from any consequences, while everyone else is, forgive my French, fucked.

Has Facebook been a net positive so far? Has Twitter? You may make a case for YouTube, but what about Netflix?

It's only been good to us (engineers) and our investor masters, but not for the 90% of the rest, which, may I remind you, is the distribution that created us in the first place. Sorry for being dramatic, but I do seriously think these things need to be reined in, especially people like Altman, who, while believing themselves to be good-willed (and I have no doubt he is a better man than Musk, for example), end up being the Robert Moseses of our generation. That is, someone with good intentions who ends up making things worse overall.

throwaway295729 · 2 years ago
Why would YouTube or Netflix be a net negative?
skeaker · 2 years ago
This sounds like a critique of the creators of the AI more so than the users of it, which is who TFA is targeting.
fennecbutt · 2 years ago
They always have and always will, and society has almost never done anything about it and isn't doing anything about it now. AI is just another tool in the toolbox, and as I'll keep repeating, the problem is never with the tool but with the tools using the tool.

How can we justify complaining about AI for these reasons? We've all sat on our asses, and now there are billionaires and soon-to-be trillionaires. We've already failed, dude.

skynet305 · 2 years ago
AI/ML will change our world.

It already does.

It's a paradigm shift, and probably the most impactful one since the internet.

From human-to-machine interfaces to medicine, research, content creation, etc.

No one cares about some dude posting some negative rant like that.

akira2501 · 2 years ago
> Please don't position it so that if I want to use AI I have to defend myself from accusations of exploiting labor and the environment.

I don't think this article even remotely attempts this claim. The closest it gets is suggesting that if these defenses are too much trouble for you, then perhaps your use case for AI wasn't great in the first place.

> but diatribes like this

How is this a diatribe? There's nothing bitter about the writing here, it's entirely couched within the realm of personal opinion, and is an unexpurgated sharing of that opinion.

Please don't position your arguments so that if I want to share my opinion I have to defend myself from accusations that I'm being exceedingly bitter or somehow interfering with what you intend to do.

You're effectively attempting to bully people out of their own opinions for the sake of your convenience.

handoflixue · 2 years ago
> it's entirely couched within the realm of personal opinion

"AI output is fundamentally derivative and exploitative"

"If you want custom art, pay an artist."

"Human recommendations will always be better."

If you can't argue against any of those stances, what stances are up for debate?

Surely the person you're responding to was just posting their own opinion, and you're as much a bully as they are?

stolenmerch · 2 years ago
> I don't think this article even remote attempts this claim.

It's in the first sentence, "AI output is fundamentally derivative and exploitative (of content, labor and the environment)."

trashymctrash · 2 years ago
> You're effectively attempting to bully people out of their own opinions for the sake of your convenience.

Maybe it's just me, but "bully" seems like a very exaggerated choice of words here.

jacoblambda · 2 years ago
No you absolutely should have to defend yourself. Like the author, I don't want to touch anything you create that is produced with generative AI.

The ONLY exception is if you can demonstrate that your model was trained solely on datasets of properly licensed works and that those licenses permit or are compatible with training/generation.

But the issue is that overwhelmingly, people who use generative AI do not care about any of that and in practice no models are trained that way so it's not even worth mentioning that exception in this day and age.

godelski · 2 years ago
I'm with you, but I think it is a bit more complicated. I think a reason for a lot of the pushback is that these systems are being oversold. A lot of tech over-promises and under-delivers. I'm not sure it's just an AI thing so much as a test of how far the amount of acceptable exaggeration can be pushed.

It definitely is frustrating that many things are presented as binary. But I think we can only resolve this if we dig a little deeper and try to understand the actual frustration being communicated. Unfortunately, a lot of communication breaks down in a global context, since we can't rely on the many implicit priors that are generally shared within smaller groups. Complaining is also the first step toward critiquing. You're right that we should encourage criticism over complaint, but I think we can, and should, try to elicit critiques from complaints too.

logicprog · 2 years ago
The idea that machine learning like large language models and image generating systems exploit labor might be up for debate, but the fact that they are disproportionately damaging to the environment compared to the alternatives is certainly true, in the same way that it's true for Bitcoin mining. And there's more than just those two aspects to consider. It's also very much worth considering how the widespread use of such technologies, and their integration into our economy, might change our political, social, and economic landscape, and whether those changes would be good or bad or worth the downsides. I think it's perfectly valid to decide that an emerging technology is not worth the negative changes it will make in society or the downsides it will bring, and to reject its use. Technological progress is not inevitable in the sense that every new technology must become widespread.
handoflixue · 2 years ago
> disproportionately damaging to the environment compared to the alternatives

This is a new one to me. Do you have any source for that? Once a model is trained, it seems pretty obvious that it takes Dall-E vastly less energy to create an image than a trained artist. I have trouble believing the training costs are really so large as to tip the equation back in favor of humans.

gremlinsinc · 2 years ago
How is it more damaging to the environment if you can replace 1k people? That's 1k people staying at home instead of commuting. Sure, that causes pain if we can't figure out UBI or a way to house and feed the masses. Also, many of the biggest AI users are working to get their energy 100 percent from solar, wind, and geothermal. AI is something we've been heading towards since the dawn of man.

Hell, ancient Rome had automatons. There's no way to stop it. Ideally we merge with the AI to become something else, rather than give it superpowers and have it decide to destroy us. I'm not sure the benevolent caregiver of humanity is something we can hope for.

It's a scary but interesting future. But we've also got major problems like cancer, global warming, etc., and AI is a killer researcher: it did 300k years' worth of human research hours in a month to find tons of materials that can possibly be used by industry.

They're doing similar work with medicine, etc. There are many pros and negatives. I'm a bit of an accelerationist, rip-the-band-aid-off kind of guy. Everyone dies someday, I guess, but not everyone can say they were killed by a Terminator. Well, at least not yet, lol. Tongue in cheek.

000ooo000 · 2 years ago
>Please don't position it so that if I want to use AI I have to defend myself from accusations of exploiting labor and the environment.

Why should you be free of accountability for the effects of your actions?

stolenmerch · 2 years ago
Because the effects of my actions in this case have yet to be demonstrated, let alone shown to cause harm. The author claims there is exploitative harm to labor, the environment, and maybe more. That is not at all obvious or provably true yet. As I said, I'm open to the discussion, but I can't defend myself in good faith when people claim some slam-dunk moral certitude. Again, don't use generative AI if it makes you feel bad, but there is absolutely nothing clear-cut yet about this radically new technology.
davidthewatson · 2 years ago
^^ THIS ^^

The middle road I've taken is that I use various consumer AI tools much the way I used the Macintosh or the Atari ST with MIDI when they showed up while I was in music school, as tools that may be used as augmentative technology to produce broader and deeper artistic output with different effort.

There's something mystical and magical about relinquishing complete control by taking a declarative approach to tools that require all manner of technique and tomfoolery to achieve transcendent results.

The jump from literate programming to metaprogramming to what we have in the autonomous spectrum is fascinating and worth the investment in time, assuming the output is artistic, creative, and philosophical.

AI is not free, but the price being paid comes at the cost of creators trying to create safe technology usable by anyone of any age.

Given the similarity to selling contraband, these AI tools need far more than just conditional guard rails to keep the kids out of the dark web... More like a surgeon general's warning with teeth.

Bard and Bing should be treated as if they were the Therac-25, because in the long run we may realize that, like social media, the outcome is worse.

cwillu · 2 years ago
Please don't do “^^ this ^^”, comments are reordered here.
worksonmine · 2 years ago
> Please don't position it so that if I want to use AI I have to defend myself from accusations of exploiting labor and the environment.

Can you please give me access to your private repositories? I'd like to see if there's anything useful there for me to sell. You shouldn't say no; at least I asked politely and used the magic words. It can only benefit humanity, right?

I'm not against crowdsourced LLMs, but copyright is copyright. I say that as someone who pirates heavily, but I'm not a hypocrite about what I do.

qgin · 2 years ago
There's a version of the future where AI actually takes larger and larger chunks of real work while humans move towards spending more and more of their time and energy on culture war activities.
boffinAudio · 2 years ago
ALL technology can be weaponized, and what you are sleep-walking into is an era where AI is easily weaponized against not just nation states or groups, but the individual.

Either have this conversation now, or face the consequences when weaponized AI is so prevalent, you will have to dig a hole in the ocean to escape it ..

StreetChief · 2 years ago
Your name is "stolenmerch"... I wonder if that colors your perspective at all.
omnimus · 2 years ago
Are We the Baddies?
ToucanLoucan · 2 years ago
> Please don't position it so that if I want to use AI I have to defend myself from accusations of exploiting labor and the environment.

You, personally, likely are not (apart from electricity use, but that's iffy). But the technology you want to use could not exist, and cannot continue to improve, without those two things. That's not unclear in the slightest; that's just fact.

> I'm open to that conversation and debate, but diatribes like this make it far too black-and-white with "good" people and "bad" people.

I get that any person's natural response to feeling attacked is to defend oneself. That's as natural as natural gets. But if shit tons of people are drawing the same line in the sand, no matter how ridiculous you might think it is, no matter how attacked you might feel, at some point, surely it's worth at least double-checking that they don't actually have a point?

If I absolutely steel-man all the pro-AI arguments I have seen, it is, at the very best:

- Using shit tons of content as training data, be it written, visual, or audio/video, for a purpose its creators never granted

- Reliant on labor in the developing world that is paid nearly nothing to categorize and filter reams upon reams of data, some of which is the unprocessed bile of some of the worst corners of the Internet imaginable

- Explicitly being created to displace other laborers in the developing and developed world for the financial advantage of people who are already rich

That is, at best, a socially corrosive if extremely cool technology. It stands to benefit people who already benefit everywhere, at the direct and measurable cost of people who are already being exploited.

I don't think you're a bad person for building whatever AI thing you are, for what it's worth. I think you're a person who probably sees cool new shit and wants to play with it, and who doesn't? That's how most of us got into this space. But as empathetic as I am to that, tons of people alongside you who are also championing this technology know exactly what they are doing, they know exactly who they are screwing over in the process, and they have said, to those people's faces, that they don't give a shit. That they will burn their ability to earn a living to the ground, to make themselves rich.

So if you're prepared to stand with them and join them in their quest to do just that, then I don't think anyone is obligated to assuage your feelings about it.

pseudo0 · 2 years ago
Your "steelman" is embarrassingly bad. Why play devil's advocate if you're going to do such a bad job of it? Here's an alternative:

- As a form of fair use, models learn styles of art or writing the same way humans do - by seeing lots of examples. It is possible to create outputs that are very similar to existing works, just as a human painter could copy a famous painting. The issue there lies in the output, not the human/model.

- Provides comfortable office jobs for people in economically underdeveloped countries, categorizing data to minimize harm for content moderators worldwide. One piece of training data for a model that filters harmful content can prevent hundreds or thousands of people from being exposed to similar harmful content in the future.

- Reduces or eliminates unpleasant low-skill jobs in call centers, data entry, etc.

- Creates new creative opportunities in music, video games, writing, and multimedia art by lowering the barriers to entry for creative works. For example, an indie video game developer on a shoestring budget could create their own assets, voice actors, etc.

- Reduces carbon emissions by replacing hours of human labor with seconds of load on a GPU.

kbelder · 2 years ago
That's not a steelman. At the very best:

- All content was viewed and learned from, which is an ethical (even good) use of all content that has ever been released to the public.

- Gave jobs to 3rd world laborers.

- Benefited us, making some of us everymen more productive and able to build and create in ways that we weren't able to before.

I suspect you don't agree with all the above, but that's more like what a steelman argument should be.

plorg · 2 years ago
This is a bad take, chief. You're not a smol bean. If someone is telling you that the technology you are using is harmful to many people and to society as a whole the least you could do is to make an argument that either those harms are not what is being claimed or that there are significant benefits that outweigh the harms. "Don't say it's bad, that makes me feel bad so we shouldn't talk about it" is both a weak and useless position.
xigoi · 2 years ago
> I'm not a fan of this hyper aggressive line-in-the-sand argumentation about fossil fuels that pushes it all precariously close to culture war shenanigans. If you don't like a new technology that is perfectly cool and your right to an opinion. Please don't position it so that if I want to use fossil fuels I have to defend myself from accusations of polluting the air and the environment.
cwillu · 2 years ago
I'm not sure what you think you did here, but juxtaposing climate change with copyright squabbles really brings out how much of a first-world-problem such squabbles really are.
satisfice · 2 years ago
You should have to defend yourself if you are going to use this unreliable, untested, irresponsible technology.

Everyone who wants to do things that completely ignore the reasonable concerns of their fellow citizens should feel some heat, at least.

RolandFlicket · 2 years ago
My boss “writes” policies etc. using ChatGPT. They’re generic, overly wordy, and contain no original thought. I don’t read them and don’t care. When he sends me a link to them in MS Teams, I always click the first auto-suggested response Teams offers, like “Looks great!” Machine v. machine.
bunderbunder · 2 years ago
There's a part of me that thinks that this is actually what's great about tools like ChatGPT.

Such a huge percentage of typical intra-office communication is neither worth writing nor worth reading. It only happens because someone who will neither write nor read it has mandated that it must happen, and it's easier not to argue. Farming that work out to GPT, though, is excellent damage control. It minimizes the cost to write, and, as long as you can trust your colleagues not to do something antisocial like hiding important and original thoughts in the middle of a GPT sandwich, almost eliminates the cost to read.

RandomLensman · 2 years ago
But with these tools you can do much more of those things. Instead of damage control, it might move it to a whole new level.
gremlinsinc · 2 years ago
Yeah, I like the concept of bullet points in, lengthy email out, just to be translated back to bullet points at the other end. Maybe eventually we'll skip all the filler shit and bullet points will become the norm. Of course, we can have AI help us brainstorm those too.
busfahrer · 2 years ago
That's an interesting thought, whereas previously technology wrapped computer protocols (I send hello in chat and the computer will wrap and unwrap it in TCP for me), in your example we have the AI wrapping the message in social protocol.
nperez · 2 years ago
That's hilarious. Someday we'll all be working QA, just scanning over AI output for issues, like manufactured goods passing by on a conveyor belt.
Gud · 2 years ago
Well at least that’s better than being QA’d by the AI.
jijijijij · 2 years ago
I hope at some point people will realize you can replace 95% of AI applications with a simple, stupid, very efficient interface. Guys, you are just cutting out human interaction; we don't need AI for that. If it's AI end to end anyway, you can skip all the talking and just transmit the information directly. Not just text, either: this extends to almost everything, from images as decoration to whole websites enabled by super-extra AI productivity. There is no point to human-facing communication anymore when everyone has an AI to parse AI.
bee_rider · 2 years ago
I wonder what he feeds the AI. Maybe some nice concise bullet points, which are what you’d probably want.

Maybe we’ll end up with dual layers of AI: one to expand out top-down requests, another to compress them to something efficient to read.

We can also have engineers prompt the AI: write me a weekly status report on what I did. Here are my git commit logs and jira points. Emphasize (some topic).

Then the owners can have the AI summarize those reports.

Whole layers of middle management might be in danger.

leetharris · 2 years ago
I read an article about this sometime last year.

Basically AI is the exact opposite of compression.

We take what should be a few bullet points, turn it into some overly wordy bullshit with AI, then the recipient uses AI to turn that wordy bullshit back into a few bullet points.

And it costs a ton of compute to do this.

Kind of insane. I hope society evolves to work smarter.
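The round trip described above can be sketched as a toy. This is purely illustrative: the `expand`/`compress` functions below are stand-ins for LLM calls, and the filler template is invented. The point is that the wrapper carries no information, so the round trip is lossless here, while real models burn compute to do roughly the same thing, lossily.

```python
# Toy illustration of the expand-then-compress round trip:
# terse bullets -> wordy corpospeak -> terse bullets again.

PREFIX = "I hope this message finds you well. Per my last note, "
SUFFIX = ". Please do not hesitate to reach out."

def expand(bullets):
    """Turn terse bullet points into wordy corpospeak paragraphs."""
    return "\n\n".join(PREFIX + b + SUFFIX for b in bullets)

def compress(text):
    """Recover the bullet points buried in the corpospeak."""
    # Strip the boilerplate wrapper to get each original point back.
    return [para[len(PREFIX):-len(SUFFIX)] for para in text.split("\n\n")]

bullets = ["ship the release Friday", "QA found two blockers"]
assert compress(expand(bullets)) == bullets  # lossless here; not so with real LLMs
```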

bonton89 · 2 years ago
A guy I know is a manager and a huge AI proponent. He was telling me he writes one sentence about someone for a performance review, then has ChatGPT blow it up into multiple paragraphs of machine-generated corpospeak. I guess if his subordinates want to survive, they'll have to use ChatGPT to summarize that back down to the one sentence.

This whole exercise reminds me of the two economists paying each other to eat piles of crap.

truculent · 2 years ago
Just pass them into ChatGPT and ask for a summary - problem solved!
starbugs · 2 years ago
> Just pass them into ChatGPT and ask for a summary - problem solved!

Ask for an overly polite answer instead. You don't need to mention that it should be wordy. GPT will take care of that naturally.

hackerlight · 2 years ago
His mistake was not instructing it to be terse. AI output doesn't have to be more annoying and less dense than human output.
hospitalJail · 2 years ago
>My boss “writes” policies etc using ChatGPT. They’re generic, overly wordy and say nothing of original thought.

Isn't this good for a workplace policy?

Wife had to do something similar and I'm happy it was overfit. I don't want some creative policy book.

nickthegreek · 2 years ago
I find that it is just real wordy by default. I commonly tell ChatGPT to be succinct.
gremlinsinc · 2 years ago
Do you normally read policies that weren't AI-generated? I mean, how many people never read the TOS or user policy?
ThrowawayR2 · 2 years ago
There's a rather good SMBC comic (from the same person who wrote "A City on Mars" recently) about that sort of thing and where it leads: https://www.smbc-comics.com/?id=3576 Seems more and more prescient by the day.
rain_iwakura · 2 years ago
I think the funny result of our ML work is that we are essentially bringing down the value of being online, and will eventually force people to interact IRL as the cost of verifying these generations becomes too high. That is, we are going back to pre-Internet-era interactions.

I agree with the author, but I also don't see lower usage of all this already meaningless human-produced content as inherently bad.

The hope is dim, but I do wish being online were restricted to strictly work-related purposes and we were forced back to human-to-human interaction as the primary modus operandi. We'd see depression and polarization rates go down significantly. These online community feedback loops are too toxic and bring little to the table.

If nothing online can be trusted, then only offline can be the way to go, up until you people (the ones screaming 'Luddite' at everyone 'normal' here) decide that we need to start augmenting our bodies to make a better future, or in other words, to make profit to line your pockets.

I work in ML btw (for a loooong time).

JohnFen · 2 years ago
Well said.

> If nothing online can be trusted

I think we're already pretty much there, except that it isn't just online. This is an all-media problem now.

(I work in ML as well)

akira2501 · 2 years ago
Were we ever in a position to "trust" media?

To me, the advantage of having the internet was that it let a range of people start publishing without prior permission, a large sum of money, or more free time than sense.

It was all seemingly meant to expand the number of voices in the "media" and reduce the requirement to put your trust into any one outlet. We took a wrong turn somewhere.

logicprog · 2 years ago
I think you're underestimating the benefits of having a social community online. I've found and made extremely close friends and even partners online, and they have been utterly life-changing for me. I'm making (and in other cases, have made) herculean efforts to be with them in person permanently, because obviously in-person interaction is better. But discovering people who are so good for me and fit me so well was made possible by the internet, because such people are rare for the kind of person I am. I would not be nearly as well-adjusted and happy in my life without the close friends I've made online. They mean a lot to me.

I think the problem with the current online landscape, including the polarization and the generation of meaningless content, has more to do with the specific form most online social spaces take, where it's an intensely public popularity contest, instead of something more like IRC, where it's generally private, ephemeral, and limited to a small number of people. That, and the rotten incentives social media companies have to take advantage of their users.
pokoblond · 2 years ago
I've been thinking about this a lot the last two years as someone who also grew up with social community online. It was life-changing for me too, but sometimes I catch myself wishing I could have had these experiences offline instead.

I read this a few months ago and I still think about it all the time. Curious about your thoughts. https://maya.land/monologues/2023/08/12/social-media-chalk-m...

gremlinsinc · 2 years ago
Disney has this floor that moves when you do, i.e. you feel like you're walking but really you're walking in place.

We're very close to holodeck technology, AI-generated scenes and whatnot. AIs right now are single agents or groups of single agents; if they become a hive mind, they could create worlds where multiple users experience the same thing from different viewpoints, essentially lucid dreaming or the metaverse. I mocked Zuckerberg for his Meta shit and hoped they'd be the last to figure it out, but the open-source models from Facebook seem to me to be how you speed up building a metaverse.

I don't know if there's 4D chess going on, but I do think we'd be progressing a lot slower if everything was closed source. I'm glad it's open, though I'm a little nervous about what terrorists and despots might do with the same technological access.

My point being: our outside might really be inside virtual worlds. We might even have real jobs there. Imagine if we could order some food dish and a real person prepared it virtually, like they would in the real world, and a replicator device actually made it for you... kinda creepy, but possible, I guess.

Ready Player One is about to be reality, I think.

laurencei · 2 years ago
> I don't want music recommendations from something that can't appreciate or understand music. Human recommendations will always be better.

I find Spotify's "Discover Weekly" list to be generally pretty good. Sure, there are some songs I don't like, but there are often 3-4 great songs each week that get added to my regular playlist.

It's all well and good to say that human recommendations are better, but I'm not paying someone $50 per week to spend 3-4 hours finding me new, good songs. I get something that is maybe 80% as good included in my subscription, and the reality is that's good enough.

I feel like one of the reasons AI is doing well is that it doesn't need to be better; it just needs to be "good enough" at a fraction of the price.

OtherShrezzing · 2 years ago
>there are often 3-4 great songs each week that get added to my regular playlist.

I'm earnestly uncertain that a system with near total access to your listening history producing 3-4 great songs per week from the corpus of all human musical endeavour can be considered a "good" effort. Particularly when the 3-4 recommendations are jammed into a 60 minute playlist of otherwise questionable quality.

nicbou · 2 years ago
It’s better than my effort at finding music, and it makes Mondays a little nicer. My goal isn’t to find the best music ever, but to find new and interesting music more easily.
NegativeLatency · 2 years ago
Spotify (and most other services) actually has a mix of human and algorithmic recommendations in things like Discover Weekly: https://www.theverge.com/2015/9/30/9416579/spotify-discover-...
000ooo000 · 2 years ago
I wish I had the citation handy but they also put sponsored content in your 'recommendations' even if you're a paying customer. Rubs me the wrong way.
unsignedint · 2 years ago
Honestly, I believe Spotify would offer me more accurate recommendations if they relied solely on AI, without human input. Their current DJ features, although supposedly based on my previous listening habits, often suggest popular songs I don't listen to, tracks supposedly reminiscent of my school days that I've never heard before, and genres that I'm not interested in.
sfpotter · 2 years ago
I recently signed up for Qobuz, which costs the same or roughly the same as Spotify. They have a significant amount of recommendations, writing, etc written by actual people. It is of vastly higher quality than anything automatically generated by Spotify. I've only occasionally found something I like through Spotify but have already found many things I like through Qobuz.
rhizome · 2 years ago
You don't have to pay someone $50/wk. You let users have friends, and then you recommend based on what their friends have been listening to.
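A minimal sketch of that friend-based recommendation idea (the data and function names here are hypothetical, just to make the mechanism concrete; a real service would rank by more than raw play counts):

```python
# Recommend tracks that a user's friends play but the user hasn't heard yet,
# ranked by how many friend plays each track has.
from collections import Counter

def recommend(user, friends_of, plays, k=3):
    """Suggest up to k unheard tracks, most-played-by-friends first."""
    heard = set(plays.get(user, []))
    counts = Counter(
        track
        for friend in friends_of.get(user, [])
        for track in plays.get(friend, [])
        if track not in heard
    )
    return [track for track, _ in counts.most_common(k)]

plays = {
    "alice": ["song_a", "song_b"],
    "bob": ["song_b", "song_c", "song_d"],
    "carol": ["song_c", "song_e"],
}
friends_of = {"alice": ["bob", "carol"]}

# song_c is played by two of alice's friends, so it ranks first.
print(recommend("alice", friends_of, plays))
```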
arghwhat · 2 years ago
"AI output is fundamentally derivative"

By that definition, so is all human output. Musicians spend years studying and practicing other people's music before writing their own, painters spend years trying to replicate techniques before mastering their own, programmers spend years reading other people's bugs before authoring their own. All expression of skill is ultimately derivative. Sometimes slightly, sometimes verbatim, sometimes outright plagiarism.

This is why we reserve "derivative" for cases where the output has similarities and an obvious connection, and why we have a hard time dealing with it in practice: it's impossible to disallow a human from using past experiences in future works.

We taught a pile of melted sand to think using principles of learning (very roughly) similar to ours, and now we get upset that it worked and that these systems apply what they learned, because apparently only we are allowed to do that.

slimrec77 · 2 years ago
People act as if we are drowning in War and Peace level masterpieces that AI is going to displace.

There is absolutely nothing going on culturally.

"I am not into all this derivative AI crap. I like the creativity of humans. Do you want to go see Spiderman part 37 or Superman part 22 this weekend?"

We are already a culture that has been displaced by completely uncreative, derivative art and useless gadget-making for cash. Maybe someone can even sell their useless gadget so they can afford to go live in some other place and culture full time.

AI is the only hope I have left for this culture.

archagon · 2 years ago
> There is absolutely nothing going on culturally.

Old man yells at cloud.

adrian_b · 2 years ago
All human output is much more derivative than the majority of "creators" dare to admit.

Nevertheless, good human output always adds something new and original to the elements that are derived from prior art.

AI output consists entirely of derived elements, which only in the best case may happen to be mixed into a combination distinct from those already existing.

Even when the combination is new, it is distinct only due to randomness, without the human ability to select which of the possible random combinations is more suitable for achieving a purpose, or more beautiful, or better according to other such criteria that cannot, at least yet, be judged by a program.

arghwhat · 2 years ago
> Nevertheless, good human output always add something new and original to the elements that are derived from prior art.

My personal opinion is that even the new and original elements are derivative. Once a creation derives from a sufficiently large number of sources - some unrelated, such as being inspired by music when painting - we consider it original and new. And once the space of experiences and current inputs you derive from grows sufficiently large and chaotic - in particular, once inputs and outputs themselves become experiences, forming a feedback loop currently lacking in machine learning - you get what we consider "free thought". That's at least the model I believe in.

> Without an ability to select like a human, which from the possible random combinations is more suitable to achieve a purpose, or ...

Untrained humans do not have this ability. It takes training to identify and categorize things. For example, my mother may appreciate a photo that most people would consider "good", but does not have the practice needed to either frame the scene herself or select the best framing to be deemed "good". At the same time, others might outright dislike the same picture - "good" is not exact in the first place.

I see no reason to believe that our models could not do this as well as, or better than, us. If an LLM generates reasonable responses, it must already have applied a standard of "best fit". It is simply not a distinct step, just as we do not manually filter our thoughts as we say them.

BirAdam · 2 years ago
Author’s point about environmental cost is a frustration I share. Most people in the tech industry are at least somewhat concerned about the environment but their use and endorsements of technologies don’t follow at all: cloud technologies running in massive over-provisioned datacenters, LLMs consuming more energy than some countries, etc. It’s totally cool to strip mine the Earth, clear cut forests, and burn a bunch of coal when it’s for fashionable things, right?
skwirl · 2 years ago
I'd rather shoot for a world where clean energy is abundant than abandon the benefits of ever-increasing computational power. Thankfully, that is the world we are trending towards, not the world of "humans are a virus" self-loathing, even if that toxic mindset is having a moment.
bee_rider · 2 years ago
What consumes more energy in AI, training or inference?

It seems to me that training ought to be a super shift-able workload, a great fit for intermittent green energy sources.

tomn · 2 years ago
The expensive bit is not the electricity, it's the cost of the GPUs, so they will train flat out.

There is also a massive amount of competition between the various players, and nobody taking part cares about the energy use (otherwise they wouldn't be doing it), so I don't see this happening.

(rough numbers: an H100 draws 700 W, which is about $613/year of electricity at $0.10/kWh, while the card itself costs an estimated $25,000-40,000)
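The arithmetic above checks out; a quick sketch (using the comment's assumed figures of 700 W, $0.10/kWh, and 24/7 operation) also shows why electricity is a rounding error next to the hardware cost:

```python
# Back-of-envelope: yearly electricity cost of one H100 running flat out.
# Assumed figures from the comment: 700 W draw, $0.10/kWh, $25k per card.
power_kw = 0.700
hours_per_year = 24 * 365                  # 8760 hours
cost_per_kwh = 0.10

electricity_per_year = power_kw * hours_per_year * cost_per_kwh
print(f"${electricity_per_year:.0f}/year in electricity")      # $613/year

# Years of electricity it would take to equal the low-end card price:
print(f"{25000 / electricity_per_year:.0f} years to match a $25k card")
```

So at these assumed prices the card costs roughly 40 years' worth of its own power bill, which is why owners run them continuously rather than chasing cheap intermittent energy.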

golol · 2 years ago
Energy is not really all that bottlenecked.
glimshe · 2 years ago
So silly. This reminds me of anti-computer rants from the 1980s. People confuse "AI can be used to generate garbage content" with "All AI-generated content is garbage".

I have 2 artist/photographer cousins, and both of them are raving about AI. I've seen the results, they make good use of AI as a tool to augment their talent, rather than to replace it. Sometimes they spend an hour going through AI-generated/altered content, but that saves them 10+ hours - the artistic input here is in operating and curating the AI-generated content in ways that an untalented individual wouldn't be able to do.

snakeyjake · 2 years ago
It is true that some people use AI as a tool to jumpstart, enhance, or refine their own works.

It is also true that the vast, overwhelming, majority of people simply take the terrible, horrible, no-good, vomit that spews out the end of the AI pipeline and pollute the world with it.

lesser_value · 2 years ago
This was the case beforehand as well though? Now they are just better at it.
SLHamlet · 2 years ago
Anytime someone sends me something generated by ChatGPT, I think about how AI expert Hilary Mason puts it: "By design, ChatGPT aspires to be the most mediocre web content you can imagine."

https://nwn.blogs.com/nwn/2023/03/chatgpt-explained-hilary-m...

ColonelPhantom · 2 years ago
This resonates with my experience using Copilot. It generates lowest-common-denominator code and displays a stunning unawareness of language and library features. Some of this is understandable due to the training cutoff (though still very frustrating), but it also refused to use Pillow functions that seem to have been around for a decade, instead crufting together some shitty pipeline by hand.
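As a hedged illustration of the kind of long-standing helper meant here (the comment doesn't name the specific function, so `ImageOps.fit`, which has been in PIL/Pillow for well over a decade, stands in as an example): it resizes and center-crops to a target size in one call, versus hand-rolling the same pipeline:

```python
# One long-standing Pillow helper vs. a hand-rolled equivalent.
from PIL import Image, ImageOps

img = Image.new("RGB", (800, 600))  # stand-in for a loaded photo

# Hand-rolled "resize, then center-crop to a 100x100 square":
scale = max(100 / img.width, 100 / img.height)
resized = img.resize((round(img.width * scale), round(img.height * scale)))
left = (resized.width - 100) // 2
top = (resized.height - 100) // 2
manual = resized.crop((left, top, left + 100, top + 100))

# The one-liner the library already provides:
thumb = ImageOps.fit(img, (100, 100))

print(manual.size, thumb.size)  # both (100, 100)
```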
notpachet · 2 years ago
> it also refused to use Pillow functions that seem to have been around for a decade, instead crufting together some shitty pipeline by hand.

Definitely passes the Turing test.

Deleted Comment

nicbou · 2 years ago
I sometimes need this. It's nice to get an average of all the mediocre content summed up as a bullet list.

Sometimes I want the average tourist guide, the average recipe or the average answer. Now I get it instantly without sifting through ad-infested shallow content. ChatGPT made it so much easier to be curious about new things. It’s a good trailhead for curiosity.

jayrot · 2 years ago
Point of note: this is all well and good until these tools inevitably become ad-infested.

I'm starting to think we need to add "advertising" to the list of certainties (death, taxes, etc.).

jayrot · 2 years ago
Benedict Evans often invokes the analogy of the concept of "infinite interns", which is pretty apt, at least in the current state.