Somewhat related, on YouTube, there's a channel filled with fake police bodycam videos. The most-viewed of these are racially inflammatory, e.g.: https://www.youtube.com/watch?v=5AkXOkXNd8w
The description of the channel on YouTube claims: "In our channel, we bring you real, unfiltered bodycam footage, offering insight into real-world situations." But then if you go to their site, https://bodycamdeclassified.com/, which is focused on threatening people who steal their IP, they say: "While actual government-produced bodycam footage may have different copyright considerations and may be subject to broader fair use provisions in some contexts, our content is NOT actual bodycam footage. Our videos represent original creative works that we script, film, edit, and produce ourselves." Pretty gross.
I've seen fewer than a handful of videos, usually Shorts, on YT purporting to be bodycam footage, but they all seem too well-framed and fairly obviously scripted / staged / fake to me, because I'm actually paying attention to the 'environment', not just the 'action'.
But I doubt most doomscrollers would notice that in their half-comatose state.
It IS real, unfiltered bodycam footage. From an actor, following a script, in front of one or many other actors, also following scripts. I think that's how they get away with it: they don't specify it's bodycam footage from actual law enforcement. Yes, gross.
The website you link (disgusting people) has apparently changed.
> For Content Thieves (Warning)
> If you are currently using Body Cam Declassified content without [...]
> You are in violation of copyright law and will be subject to legal action
[...]
> We aggressively pursue legal remedies against content theft, including statutory damages of up to $150,000 per infringement under U.S. [...]
> An additional administrative fee of $2,500 per infringing video will be assessed
> We demand all revenue generated from the unauthorized use of our content
> We maintain relationships with copyright attorneys who specialize in digital media infringement
> We recommend removing the infringing content immediately and contacting us regarding settlement options
A paragraph about the videos being fake is still there.
> While actual government-produced bodycam footage may have different copyright considerations and may be subject to broader fair use provisions in some contexts, our content is NOT actual bodycam footage.
> Our videos represent original creative works that we script, film, edit, and produce ourselves.
> As privately created content (not government-produced public records), our videos are fully protected by copyright law and are NOT subject to the same fair use allowances that might apply to actual police bodycam
> The distinction means our content receives full copyright protection as creative works, similar to any other professionally produced video content.
This reminds me of a non-AI content-mill business strategy that has been metastasizing for years: people who film homeless people and drug addicts and build whole Insta and YouTube channels monetizing it, either framed as "REAL rough footage from city XY" or even openly mocking helpless people. The latter seems to be more common on TikTok, and I'm not watching "original" videos of such shite.
There is a special place in hell for people who do such things, and in my opinion there should be laws with very harsh punishments for the people who "create" this trash and make money from it. When it comes to filming real people without their consent, we really need laws that make it possible to punish people who do this, because the victims are not likely to be able to defend themselves.
And in total, the whole strategy is to worsen societal division and tensions, and feed bad human instincts (voyeurism, superiority complex) in order to funnel money into the pockets of parasites without ethics.
I don't think it's changed. Note that my first quote, claiming it's real bodycam footage, is from the description they wrote for the YouTube channel, not from the site. The second quote, saying it's not actual bodycam footage, is the one from the site, and that's still there.
> nothing drives engagement on social media like anger and drama
There. It isn’t even “real” racism; it’s more flamebait, where the more outrageous and deranged a take is, the more likely it is to captivate attention and possibly even provoke a reaction. Most likely they primarily wanted to earn a quick buck from viewer engagement and didn’t care about the ethics of it. Maybe they also had racist agendas, maybe not - but that’s just not the core of it.
And in the same spirit, the issue is not really racism or AI videos, but perversely incentivized attention economics. It just happened to manifest this way, but it could’ve been anything else - this is merely what happened to hit some journalists’ mental filters (suggesting that “racism” headlines attract attention these days, and so does “AI”).
And the only low-harm way I can think of to put this genie back in the bottle is to make sure everyone is well aware about how their attention is the new currency in the modern age, and spend it wisely, being aware about the addictive and self-reinforcing nature of some systems.
I agree, but I believe the intent matters if we’re trying to identify why this happens.
Racism is just less legally dangerous. There would be people posting snuff or CSAM videos if that “sold”. Make social networks tough on racism and it’ll be sexism the next day. Or extremist politics. Or animal abuse. Or, really, anything, as long as people react strongly to it.
But, yeah, to avoid any misunderstanding: I didn’t mean to say racism isn’t an issue. It is racist, it’s bad, I’m not arguing otherwise. All I want to stress is that it’s not the real issue here, merely a particular manifestation.
I don't think child porn and tired racist stereotypes are the same. Even content showing murder would be ignored by most and none of us, I assume, are pro murder.
I don't assume everyone that uses a sexy female thumbnail is a gooner, just farming goons. I think the original poster has a fair point: having seen the videos, they lack the usual cherry-picked accuracy of content made by genuinely racist creators and instead go for... watermelon. My friends are about as bothered by watermelon as an Irishman is by cartoon leprechauns, but I'm not in the USA, so perhaps it's a cultural thing.
I think maybe the nuance they’re trying to capture is that yes, the content is absolutely freaking racist, but the reason it’s being spread isn’t racists laughing at it and liking it; it’s people being angry about it.
The creation of CSAM is a crime because an underage person must be harmed in its creation by definition. Making an AI video of an offensive stereotype does not harm anyone in its creation. It is textbook free speech.
Clutch your pearls as much as you want about the videos, but forcibly censoring them is going to cause you to continue to lose elections.
> make sure everyone is well aware about how their attention is the new currency in the modern age, and spend it wisely, being aware about the addictive and self-reinforcing nature of some systems.
i.e. delete your facebook, your tiktok, your youtube and return to calling people on your flip phone and writing letters (or at least emails). I say this without irony (the Sonim XP3+ is a decent device). all the social networking on smart phones has not been a net positive in most people's lives, I don't really know why we sleepwalked into it. I'm open to ideas how to make living "IRL" more palatable than cyberspace. It's like telling people to stop smoking cigarettes. I guess we just have to reach a critical mass of people who can do without it and lobby public spaces to ban it. Concert venues and schools are already playing with it by forcing everyone to put their phones in those faraday baggies so maybe it's not outlandish.
> i.e. delete your facebook, your tiktok, your youtube and return to calling people on your flip phone and writing letters
That sounds like an abstinence-type approach. Not saying it's not a valid option (and it can be the only effective option in case of a severe addiction), but it's certainly not the only way that could work. Put simply, you don't have to give up on modern technology just because it poses some dangers (but you totally can, if you want to, of course).
I can personally vouch for just remembering to ask myself "what am I currently doing, how am I feeling right now, and what do I want?" when I notice I'm mindlessly scrolling some online feeds. Just realizing that I'm so bored I'm willing to figuratively dumpster-dive in the hope of stumbling upon something interesting (and there's nothing fundamentally wrong with this, but I must be aware that this interesting thing will be very brief by design, so unless I'm just looking for inspiration and then moving somewhere else, I'm not really doing anything to alleviate my boredom) can be quite empowering. ;-)
> all the social networking on smart phones has not been a net positive in most people's lives
Why do you think so? I'm not disagreeing; I'm asking because I know plenty of individual examples, but I personally don't feel comfortable enough to make that a generalization (because it's hard), and I wonder what makes you willing to.
> It isn’t even “real” racism; it’s more flamebait
I think the harm done by circulating racist media is "real" racism regardless of whether someone is doing it because they have hateful ideology, are profiting for it, or just having a good time.
I don't even think it's flamebait, people just like being edgy on the internet so they enjoy these memes, reading the comments under these posts would probably confirm what I'm saying.
> And the only low-harm way I can think of to put this genie back in the bottle is to make sure everyone is well aware about how their attention is the new currency in the modern age, and spend it wisely, being aware about the addictive and self-reinforcing nature of some systems.
Gonna be hard to admit, but mandatory identity verification like in Korea, i.e. attaching real consequences to what happens on the internet, is a more realistic way this gets solved. We've had "critical thinking" programs for decades; they're completely pointless on an aggregate scale, primarily because the majority aren't interested in the truth. Save for their specific expertise, it's quite common for even academics to easily fall into misinformation bubbles.
> they're completely pointless on an aggregate scale, primarily because the majority aren't interested in the truth
No offense meant, but unless you know of an experiment that showed no statistically significant effect of education programs on collective behaviors, especially one that established causality the way you stated it, I would dare to suspect that this is not an accurate portrayal of things but more of an emotionally driven, not entirely factual response.
> mandatory identity verification like in Korea, i.e. attaching real consequences to what happens on the internet
I'm not sure I understand the idea. Is it about making it easier for law enforcement to identify authors of online posts, or about real-name policies and peer pressure, or, possibly, something else?
This isn't really a problem with video generation or AI in general. Sure, there is an aspect of ragebait to it, but the reality is that racism is extremely widespread. If it were not, this kind of content would not be so popular. The people at the very top of the US government right now are white supremacists. I'm sorry, that is not an exaggeration. There is another term that encompasses more of their worldviews which is not politically correct but is accurate.
Stop trying to blame technology for longstanding social problems. That's a cop out.
Granted that racism is not new, the infinite production of automated content drowning out any genuine human opinion is a harbinger of the internet to come.
It also allows automated production of positive content. The main issue here is: given a sea of good and a sea of bad content, where would the typical person go for a swim? Why do calls for empathy fall flat while inciting rage and hatred is so successful?
It's entirely appropriate to blame a technology if the answer to the question, "Does this technology make a longstanding social problem worse or better?" is "It makes it worse."
There can be a follow-on discussion about what, if any, benefits are also provided by aforesaid technology.
None of the examples shown in the video are passable hoaxes. They are all obvious burlesque-style parodies, albeit made in bad taste. They all also have clear and prominent hallmarks of AI generation. Anyone fooled by these has got bigger, prior problems than any potential belief instilled by these videos.
The problem is not that they are fooling anyone. No one thinks a woman is marrying a chimpanzee. The problem is that the videos are obviously and openly racist and being spread quite brazenly.
If I have to encounter a constant barrage of shitty racist (or sexist, or homophobic, or whatever) material just to exist online, I'm going to pretty quickly feel like garbage. (If not feel unsafe.) Especially if I'm someone who has other stressors in their life. Someone who is doing well, their life otherwise together, might encounter these and go, "Fucking idiots made a racist video, block."
But empathize with someone who is struggling? Who just worked 18 hours to make ends meet to come home and feed their kids and pay rent for a shitty apartment that doesn't fit everyone, and their kid comes up to them asking what this video means, and it just... gets past all their barriers. It wedges open so many doubts.
I really miss the time before generative images and video were a thing. We opened such a can of worms. Really seems like a "the scientists were so occupied with if they could they didn't stop to think if they should" situation. What is the actual utility of these tools again beyond putting artists out of work?
From an information theory perspective, predicting and efficiently representing data is so closely tied to generation that it is unavoidable.
If you want to use ML to do anything at all with image and video, you will usually wind up creating the capability to generate images and video one way or another.
However, building a polished consumer product is a choice, and probably a mistake. Every technology has good and bad uses, but there seem to be few and trivial good uses for image/video generation, with many severe bad uses.
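That coupling between compression and generation shows up even in a toy model. Here is a minimal, purely illustrative sketch (a character-level bigram model; the names and the training string are made up for the example, and no real system works at this scale): the same conditional-probability table yields both an ideal compression code length (-log2 of each predicted probability) and a generator (sampling from the same predictions).

```python
import math
import random
from collections import Counter, defaultdict

def train_bigram(text):
    # Count how often each character follows each other character.
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def prob(counts, a, b):
    # P(next char = b | current char = a) under the bigram model.
    total = sum(counts[a].values())
    return counts[a][b] / total if total else 0.0

def code_length_bits(counts, text):
    # Ideal (Shannon) code length of the text under the model:
    # a good predictor means fewer bits, i.e. better compression.
    return sum(-math.log2(prob(counts, a, b))
               for a, b in zip(text, text[1:])
               if prob(counts, a, b) > 0)

def generate(counts, start, n, seed=0):
    # The very same probability table, used the other way around:
    # sample the next character instead of coding it.
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        successors = counts[out[-1]]
        if not successors:
            break
        chars = list(successors.keys())
        weights = list(successors.values())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

model = train_bigram("abracadabra abracadabra")
print(code_length_bits(model, "abracadabra"))  # bits needed under the model
print(generate(model, "a", 10))                # "new" text from the same model
```

The point of the sketch: there is no separate "generator" to build. Once you have a model good enough to predict (and hence compress) the data, sampling from it is one line of code, which is why the capability comes along for free.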
I miss the time before everybody was on the Internet, when it was mostly like-minded techie types. This modern internet kinda sucks with all its AI-generated racism.
I’ve seen lots of them which I found very very amusing. That seems good enough for me. Think about it: there are channels on YouTube and on the telly that are there just to amuse you. So a system that creates amusing videos is a net positive for the world.
Some of TikTok is great. I mean, most of it is just dopamine hits, and it's potentially quite bad from a health perspective. But also, plenty of TikTok is news, or political theory, or thoughtful commentary, or explanations of how things work.
It's a bowl of fun size candy bars, with a few razors, a few drugs, a few rotten apples, etc. mixed in. You can, by and large, get the algorithm to serve you nothing but the candy, but you are still eating only candy bars at that point.
Some people can say no to infinite candy. Other people, like myself, cannot and it's a real problem.
Definitely have watched enough videos from this channel to recognize its name. :(
Generating and distributing racist materials is racist regardless of the intent, even if the person "doesn't mean it".
Simple thought experiment: If the content was CSAM, would you still excuse the perpetrators as victims of perversely incentivized attention economics?
This isn't harmless.
In our case, it’s just generative AI.