I was thinking about this because my wife was telling me a story from work, where a woman was scammed with AI-generated stuff and her colleague was a little too nonchalant about it ( 'it is on her to do her due diligence' ). And it made me annoyed.
How can you possibly do due diligence when everyone around you is incentivized to lie? We do have a concept of fraud, but advertising seems to be able to maneuver around its edges.
I do get the why. Money talks and whatnot, but we are getting to the point where trust is becoming a hot commodity and that is not good.
And all this is before we even get to how it has managed to desensitize people even further.
> You can't and the correct response is a total lack of trust by default because that's the easiest way to protect yourself.
Rather off-topic, but it's funny how this principle applies in exactly the same way to traffic, for instance. It is unfortunate it has to be like that, but not trusting any other road user and assuming they can at any point do the thing you'd least expect is just safer. Especially when cycling, this has saved me from injury or worse more than just a couple of times. And that's even in a country with relatively high numbers of cyclists.
I wouldn’t describe it as the correct response, more like “the best one we can think of right now.”
The trouble with living this way is that it's really exhausting, saps quite a lot of joy out of life, and makes people more lonely. It's far from optimal, and a more sustainable option would be to work our way back to the community trust we enjoyed just ~30 years ago.
>> You can't and the correct response is a total lack of trust by default because that's the easiest way to protect yourself.
This has slowly eaten away at the high-trust society we used to live in, which has now completely transformed into a society where you cannot trust anything, ever, in any capacity.
I felt like I had kind of gained back some control by not clicking on any links in emails and using my phone sparingly. But you're right: this new crop of AI tools makes it a lot easier to separate you from your money, and the criminals are becoming far more sophisticated and persistent in their attacks.
That's no way to live. The optimal amount to get scammed is >0, because to 100% avoid it you have to cheapen and weaken your life to an unreasonable extent.
It's like how the only 100% secure system is offline, off, unplugged, wiped, and slagged. Or how the only way to be 100% safe from communicable disease is to be the last living human.
And that's what makes our tolerance for grift and scams so obnoxious. It's one thing when dishonest people lie and cheat and steal, it's another entirely when our institutions and leaders forgive, excuse, enable, and even embrace it.
If you can be beguiled by random images from an unknown solicitor to make rash financial decisions, the root of the problem is probably sitting in front of the screen. "A fool and his money are soon parted" is not a new thing, it's just easier to accomplish than ever before thanks to the dopamine conditioned internet and AI content generation. These victims don't need new legal layers of trust, they need a social media detox.
I have met multiple cybersecurity experts (and they really were experts, and in general very intelligent and knowledgeable people) who have fallen for very obvious scams. One of them fell for one of those gift card scams, which in my eyes is possibly the most obvious one there is.
I myself have almost fallen for a well-crafted phishing attack. The only reason I never ended up putting in my card details was that I was technical enough to know that a generic URL with no query params couldn't possibly have my tracking code pre-filled on a page I'd never been to before. Had the scammers just made me enter my own tracking code, or had I not known what a query param is, I 100% would've fallen for it. (For what that query-param check looks like in practice, see the sketch just after this comment.)
My point is that no one is immune to a momentary lapse in judgement or just plain bad luck. You can be an expert, extremely intelligent, whatever; all it takes is 5 minutes of weakness to get you, whether that's because you slept badly, had something else on your mind, weren't being diligent enough (it's impossible to be diligent 24/7, after all), or any of a myriad other reasons.
It's easy to put the blame on the victims here, but with generative AI the line between reality and fabrication is getting thinner and thinner. I'm a massive AI skeptic (just check my comments here), but I'm 100% positive that sooner or later we'll hit a stage where it's quite literally impossible to discern an AI fabrication from a real event unless you witnessed it in person yourself. You won't be able to trust images, or audio or even videos of your loved ones unless you basically see them send it to you, and even then there's no guarantee the final footage isn't doctored by the phone in some way.
So sure, people need to smarten up a bit, but we also need to start thinking about these problems with AI sooner rather than later, because things are only going to get worse, and fast.
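(A minimal sketch of the query-param heuristic mentioned a couple of comments up, in Python. The URLs, domain, and length cutoff are made up for illustration; the point is only that a bare, generic link has no way of already knowing who you are.)

```python
# Minimal sketch of the heuristic above: a page can only "know" your tracking
# code on a first visit if the link itself carries it (in the query string or path).
# The example URLs and the 8-character cutoff are made up for illustration.
from urllib.parse import urlparse, parse_qs

def carries_identifier(link: str) -> bool:
    """Return True if the link contains something that could identify you or your parcel."""
    parts = urlparse(link)
    if parse_qs(parts.query):  # any query parameter at all could carry the code
        return True
    # A long alphanumeric path segment could also smuggle in a tracking number.
    return any(seg.isalnum() and len(seg) >= 8 for seg in parts.path.split("/"))

# A link that carries the code could legitimately pre-fill it on the landing page:
print(carries_identifier("https://parcel.example/track?code=RB123456789CZ"))  # True
# A bare, generic URL cannot already know it, so a pre-filled form there is a red flag:
print(carries_identifier("https://parcel.example/payment"))                   # False
```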
You should be distrustful of every emotion generated by a mass media campaign. They're all artificial, and generated for the benefit of the person running the campaign, not you.
It's still possible to function, it's just that you have to go out and seek information, and usually seek information through channels where the organization is not delivering it to you. Every non-profit files a Form 990 with the IRS, and every public company files a form 10-Q/K with the SEC. There's a wealth of information there for figuring out what the company is doing, but they usually like to obfuscate it in some extremely boring text and financial figures, because they want you to buy into the narrative they deliver to the press and not the facts they deliver to the government. They usually will not outright lie on these, though, because doing so is a crime that can put the CEO, CFO, and Board of Directors in jail.
Same for consumer stuff. Ruthlessly seek out back-channel information about products, whether it's word-of-mouth from friends, online reviews (though these are increasingly easily gamed these days), product tests from independent organizations (though again, many companies provide free products to review in exchange for favorable reviews), etc. I've found that keeping an online subscription to Consumer Reports has been well worth it because they're one of the few review sites where you pay them to review, the company doesn't pay them to get reviewed. Advertisements are worthless; treat them as such. Same goes for random cold calls; it's probably a scam, unless you can corroborate it otherwise.
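(As a concrete illustration of the point above about reading the filings instead of the press narrative, here is a short Python sketch that lists a public company's recent 10-K/10-Q filings from SEC EDGAR's public submissions endpoint. The CIK is Apple's, used purely as an example, and the JSON field names are from memory, so treat it as a starting point rather than gospel.)

```python
# Sketch: pull a company's recent 10-K / 10-Q filings (the facts delivered to the
# government) from SEC EDGAR's public submissions endpoint.
# Assumption: the endpoint and field names below match EDGAR's current layout.
import json
import urllib.request

CIK = "0000320193"  # 10-digit, zero-padded Central Index Key (Apple, as an example)
url = f"https://data.sec.gov/submissions/CIK{CIK}.json"

# EDGAR asks callers to identify themselves via the User-Agent header.
req = urllib.request.Request(url, headers={"User-Agent": "personal research you@example.com"})
with urllib.request.urlopen(req) as resp:
    data = json.load(resp)

recent = data["filings"]["recent"]  # parallel arrays describing the latest filings
for form, date, doc in zip(recent["form"], recent["filingDate"], recent["primaryDocument"]):
    if form in ("10-K", "10-Q"):
        print(date, form, doc)
```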
<< You should be distrustful of every emotion generated by a mass media campaign.
I would like you to think through this statement and then carefully apply it to what today's publicly facing technology can do. If TikTok proved anything, it is that masses of people can be influenced to feel, think, and act in accordance with desired goals.
One could argue that if it is already this bad, maybe it should be reined in a tiny wee bit?
<< They're all artificial, and generated for the benefit of the person running the campaign, not you.
Oh no man. The feelings are real. They are generated under false pretenses, but the feelings are real. Honestly, I am not sure if people running those campaigns realize that all those feelings may eventually be turned against them.
And the stuff is getting really, really good. Sometimes I've been sure something was AI, only for it to turn out to be real, and many times I'm just unsure. It's really a different world now. A lot of these frauds could have happened in the past, but only if you were a really valuable target, meaning someone would invest a lot of time and resources tailoring something just for you. These days it's getting so cheap that anybody is that target now.
> her colleague was a little too nonchalant about it ( 'it is on her to do her due diligence' ).
I’m always fascinated by victim blaming culture, which has been pervasive long before generative AI.
You see it most frequently in cases where the victim is thought to be a safe target: Someone wealthier, an office rival, a corporation. On HN it appears in every thread about someone being scammed, but it was most obvious in the recent threads where JPMorgan was defrauded by a startup they acquired. Seemingly 1/3 of the comments were from people commenting that JPMorgan was actually at fault for allowing themselves to be scammed. Some even declaring that the fraudster shouldn’t be prosecuted because JPMorgan was entirely to blame for allowing themselves to be scammed.
I don’t know what drives it. The victim blamers always seem to believe they would not fall prey to similar scams. They also seem to see the world as full of faceless scams everywhere, and to think that allowing yourself to fall victim to one is a moral failing. Many of them just like to be contrarian, snide, and judgmental, and heaping scorn on the victim checks more of those boxes than going along with the obvious consensus that the party who committed the crime is the one to blame.
This happens with every new generation of scams. The victim blamers read the news and think it would never happen to them because they’re too smart, therefore any victims deserve blame.
The problem is that we're using "blame" to mean two different things:
1. In an ideal, fair world, which parties should have to change to make the outcome not happen? Scammers shouldn't scam, murderers shouldn't murder, etc.
2. With a reasonable understanding of the world, which parties could have predicted the outcome and changed their behavior to mitigate the problem? JPMorgan not doing due diligence on a deal of that magnitude is pretty negligent. I don't carry open bags full of cash walking around the city either, and I don't comment negatively about our new glorious leader and all of his kingly power. Yes, if I had my savings stolen or were murdered on my next boat trip to the Caribbean that'd be somebody else's "fault" via definition (1), but as a practical matter my life is a hell of a lot better on average if I avoid those activities regardless.
The courts sometimes agree with point (2) to an extent as well. If JPMorgan's negligence caused harm to others, the criminals involved would still have full responsibility to JPMorgan, but the harmed parties might have a civil claim against JPMorgan. By way of analogy, what happens if your local bank's safe is found out to be an unmonitored cardboard box? The fact that somebody would eventually break in is predictable, and the bank would be liable to its customers.
It could also be a more charitable "I make significant efforts to fight against this and spent years/months of my life trying to convince others to do the same only to be ignored, so fuck them".
Like that RMS meme where the world is finally getting the pointy end of the proprietary software trap and cries for help and he just whispers "Gno".
> On HN it appears in every thread about someone being scammed, but it was most obvious in the recent threads where JPMorgan was defrauded by a startup they acquired. Seemingly 1/3 of the comments were from people commenting that JPMorgan was actually at fault for allowing themselves to be scammed.
I find that awful. JPMorgan should be held accountable, like many similar firms, for all the money that they themselves have stolen. But one crime does not justify the other. The people that scammed JPMorgan will not use the money to pay off JPMorgan's victims.
It seems that in the USA nobody believes in justice anymore, as even the Supreme Court is just another partisan agency helping the rich. Americans may justify getting money through crime because it is so normalized. Blaming victims helps them feel good about it.
The real answer is to have stronger institutions that treat everybody equally under the law, and to have better laws that punish all types of criminals, including economic ones.
Sadly, a large number of people seem to think "caveat emptor" is some kind of optimal default way to live and organize a society. Like, anyone should be able to do or say anything, and if the counterparty doesn't do their due diligence, they're gullible and deserve to lose.
> You see it most frequently in cases where the victim is thought to be a safe target: Someone wealthier, an office rival, a corporation.
That must be a representation of your own social circle because I can assure you that poor people are commonly blamed for all the bad things that happen to them.
You picked a bad example. JPMorgan Chase has itself violated the law many times: for example, they willfully violated the Servicemembers Civil Relief Act and the Foreign Corrupt Practices Act in thousands of cases. So fuck those guys for being scammers themselves. There are no clean hands here and in that instance I absolutely blame the "victim". Like if an armed robber steals cash from a drug smuggler, the smuggler shouldn't expect sympathy from anyone.
My comment is specific to JPMorgan Chase. While I know that I would not fall prey to similar scams, I am not endorsing victim blaming in general.
There are multiple services that verify and rate NGOs and nonprofits. The key is to look them up on the service website and not just Google the name. Personally I use Guidestar, but that's for U.S. orgs.
I was recently thinking about this. One of the only benefits of this AI slop is that maybe it will teach the generations growing up with it not to trust anything they see on the internet without verifying it. If you're older and not technologically savvy, I understand why it's so easy to get scammed. Hopefully younger generations will learn the lesson that you need to assume everyone on the internet is lying, and that pictures, and more importantly video, are no longer reliable without verifying the source.
Good approach, and one that was valuable and necessary even prior to AI.
I learned this lesson in my early 20s. Nearly every entity that you interact with is trying to transact with you, in a way that benefits themselves. Whether that's a legitimate transaction (money exchanged for a product/service exactly as advertised), a misrepresented transaction (the product/service is not as advertised), taking your money for nothing, or simply taking away your time and attention (advertising).
Even before AI, if you were unable to get a good sense of legitimate transactions, you'd lose all your money on misrepresented transactions to scammy used car salesmen, door-to-door salesmen, and whole life insurance salesmen. These parasites and their ilk prey on people who are trusting by nature, and people who will say "yes" to avoid disappointing a stranger. It's unfortunate that the world has come to this, but you need to be untrusting of others' motivations by nature to not be taken advantage of financially.
I honestly hate that idea. I am slowly adjusting, but I got used to the idea that you can at least trust the other party not to outright lie (although I already expected omission and other 'normal selling tactics').
I posit that what we need is heavily enforced truth in advertising laws.
We only have such a concept when it's about an individual lying to a company for profit.
When companies lie to individuals it's just business as usual, or worst-case scenario a "mistake" that the company pinky-promises will not happen again.
Empathy is a hackable interface, and those who expose it in this attention-economy civil war zone are lesser beings by default.
They were once protected by the state, but the state, as policeman, has been continuously reduced in its protector role by those who found hackable political interfaces: the ticks on the state wave through the fleas on the people. The pent-up backlash to all this results in a militant vote for anti-parasitic, exposure-limiting institutions.
Fascism promises to remove those attackers, by fire-walling away the exterior and prosecuting perceived attackers on the inside of the nation.
Every successful scam, every allowed exploit, is an advertisement for the totalitarians who promise to restore order.
I'm sure that some (few) of these NGOs do good work. However, sooner or later, they all seem to succumb to two problems: (1) excessive staff costs, and (2) a failure of incentives.
The second one is more insidious: If they solved the problem they address, they would no longer need to exist. They have no incentive to succeed. So they go around addressing individual problems, taking sad pictures, and avoid addressing systemic problems.
And if the systemic problems are insoluble? Then there is again an argument that the NGO should not exist. If the problem is truly insoluble, then likely the money could be better spent elsewhere.
I don't think #2 really applies for many/most orgs, since so many causes don't really have a hard and fast solution, but instead exist on a spectrum. Think of a group trying to end poverty or protect nature. Those problems will never be truly solved but they can go much better or worse.
The issue is that their incentive is marketing more than problem solving. A good NGO can blend actual solutions into snappy marketing campaigns. It's expensive, in time and effort, to push for legislative change on an issue, but it's "cheap" to throw a few grand at a group of downtrodden people and take some photos.
I spent some years working for a large NGO (Opportunity International) and living with people who work for NGOs.
NGOs must constantly raise money to fund their operations. The money that an NGO spends on fund-raising & administration is called "overhead". The percentage of annual revenue spent on overhead is the overhead percentage. Most NGOs publish this metric.
When a big donor stops contributing, the NGO must cut pay or lay off people and cut projects. I've never heard of an NGO "succumbing to excessive staff costs" like a startup running out of money. Financial mismanagement does occasionally happen and boards do replace CEOs. Board members are mostly donors, so they tend to donate more to help the NGO recover from mismanagement, instead of walking away.
NGOs pay less than other organizations, so they mostly attract workers who care about the NGO's mission. These are people with intrinsic motivation to make the NGO succeed in its mission. Financial incentives are a small part of their motivations. For example, my supervisor at Opportunity International refused several raises.
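(To make the overhead metric above concrete, a toy calculation with entirely made-up numbers:)

```python
# Toy illustration of the overhead percentage described above; all numbers are made up.
fundraising_costs = 1_200_000  # annual fundraising spend (hypothetical)
admin_costs = 800_000          # annual administration spend (hypothetical)
annual_revenue = 10_000_000    # annual revenue (hypothetical)

overhead_pct = 100 * (fundraising_costs + admin_costs) / annual_revenue
print(f"Overhead: {overhead_pct:.1f}% of revenue")  # -> Overhead: 20.0% of revenue
```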
> So they go around addressing individual problems, taking sad pictures, and avoid addressing systemic problems.
Work on individual problems is valuable. For example, the Carter Center has prevented many millions of people from going blind from onchocerciasis and trachoma [0].
The Carter Center is not directly addressing the systemic problems of poverty and ineffective government health programs. That would take different expertise and different kinds of donors.
The world is extremely complicated and interconnected. The Carter Center's work preventing blindness directly supports worker productivity in many poor countries. Productivity helps economic growth and reduces poverty. And with more resources, government health programs run better.
Being effective in charity work requires humility and diligence to understand what can be done now, with the available resources. And then it requires tenacity to work in dangerous and backward places. It's an extremely hard job. People burn out. And we are all better off because of the work they do.
When we ignore the value of work on individual problems, because it doesn't address systemic problems, we practice binary thinking [1]. It's good to avoid binary thinking.
[0] https://en.wikipedia.org/wiki/Carter_Center#Implementing_dis...
[1] https://en.wikipedia.org/wiki/Splitting_(psychology)
> included AI-generated testimony from a Burundian woman describing being raped by three men and left to die in 1993 […] has been taken down, as we believed it shows improper use of AI, and may pose risks regarding information integrity
It’s one thing to create an image but creating a whole fake testimony is even worse.
Notice they just felt that the tool was evolving “too fast” and it didn’t feel real enough. Once the tool is better, they’ll have no issues with fake testimonies.
That's a hot take. Any group's credibility is shot if they're fabricating testimonials, never mind that the subject matter is absolutely brutal, manipulative, and exploitative. The whole point of such a thing is to create a human connection to a real problem in the interest of motivating change. What utter bullshit.
Take AI out of this story and pretend that you have groups making up stuff to solicit donations. It's fraud. The AI part just means it was easy to commit.
I have suspected that I have seen some of these for Palestine in recent months. Not saying this has any implication on the broader reporting of it but… shit we’re really kind of screwed if media starts portraying everything via fictitious dramatization imagery. Especially on social media.
> if media starts portraying everything via fictitious dramatization
Let me make one nit-picky correction:
"does so in a way even more efficient at manipulating our brains"
because a lot of media, especially social media, already do so. Currently they just use unrelated stock photos, photos from different "incidents", etc.
E.g. even on supposedly reputable news channels it's not rare to see background images from completely unrelated incidents when they don't have any fitting "correct" images, with maybe a small note in some corner. Or not, because nobody even noticed.
Doesn't have to involve any AI. I had door-to-door campaigners the other month asking me to donate to some charity (probably the Red Cross) to help with hunger in Palestine. They weren't planning to shoot at the IDF to stop the blockade of aid, just to buy more of it, since it was in the news and they needed to top up their budget. That's just how they work.
If we're thinking of the same picture: that kid is real and has a severe condition that needs medical attention. It's not the general case but an extreme one, yet that doesn't make the problems any less urgent.
Any news organization that does this deserves to burn, but I think it makes sense for aid organizations, because they want to be able to portray the work they're doing / have done but might hesitate to show real people's faces while they're suffering. In a way a dramatization seems more "humane" and preserves the dignity of the individuals involved.
I've run photos I've taken through AI so I could post pictures of myself without opening myself to being doxxed.
I've always hated when even real images are used to promote something but the image is unrelated. Example: a horrific image of a smashed car used to show the result of drunk driving or phone use, where that wasn't the actual cause of the crash in the picture. Claiming "this is a result of xxx" is a lie. There are plenty of real cases out there, so if you're making stuff up I can't trust you - particularly for those asking for donations.
Like, I agree, but at some level I’d actually rather see a fake picture of a drunk driving crash (or no picture) and know this is what the aftermath might look like but nobody died in this particular shot.
For drunk driving in particular, the pictures typically come from the police and aren’t exposed or composed with an artistic purpose. The opposite is true for a bunch of the famous pictures of poverty; these are composed for their (often powerful) emotional effect. Whether it’s a good thing or not, I can imagine that being able to add a little of the art direction after the fact to drunk driving photos might be tempting, i.e., having some control over angle & exposure, and maybe avoid gore in the shot…
"If your'e making stuff up I can't trust you - particularly for those asking for donations" Exactly. AI doesn't have much to do with it other than it makes it easier to make stuff up. Someone using AI images to solicit donations is just grifting like any other liar asking people for money.
This tactic is often used to get attention. There was outrage about this in the late 2000s when reality TV/talk show series followed people in a particular poverty stricken area in the UK.
I can understand that there are some good arguments for using AI-generated pics. Using AI means not having to pay for photographers who'd have to spend a lot of time taking or staging pictures, and not having to pay for their travel expenses, which means that money can be spent on aid instead of marketing. Also, it can feel a bit exploitative to photograph vulnerable people, especially children.
As long as the images they end up with accurately reflect the situation on the ground and the use of AI is transparent it seems like a good idea to use AI for attention grabbing images and supplement with candid photos from aid workers who aren't professional photographers.
Take AI out of the conversation. Now revisit the argument that it's OK for people to make fake pics, testimonials, stories, and videos to solicit donations. I'm pretty sure that's just called a con or perhaps fraud in legal terms.
Would hand drawn illustrations of a starving child or a battered woman be fraud if they were used to help raise money that was going to feed actual starving children or help battered women?
The way I see it: What can I reasonably assume about "the situation on the ground" if you're giving me fake photographs? If an organization is willing to fake one thing, what aren't they willing to fake? Are they faking what they are doing with donation money, too? You have to assume yes if they're willing to fake other things.
> You can't and the correct response is a total lack of trust by default
They really are out to get you(r money).
"They only want your best... Your money"
> the root of the problem is probably sitting in front of the screen
The root of the problem has something to do with the scammer.
<< it's just that you have to
How come it is not the fraudster that 'has to'?
"If the victim somehow did something to deserve it, then it won't happen to me" (just world fallacy)
Seemed to come up a lot on this one recently too (fake job interview trying to get you to install malware): https://news.ycombinator.com/item?id=45591707
I assume everyone is telling the truth until they've given me a reason not to assume that, and I'd say that hasn't failed me yet.
Sounds like you have failed to find it, and are now just coping.
(assuming everyone's lying until proven otherwise sounds like a miserable existence)
So now they use AI-imagined stuff...
https://en.wikipedia.org/wiki/Benefits_Street
https://en.wikipedia.org/wiki/The_Jeremy_Kyle_Show
https://www.imdb.com/title/tt5455122
I remember thinking at the time that these were quite exploitative.
As early as 1986, the band Chumbawamba released their debut album "Pictures of Starving Children Sell Records":
> https://en.wikipedia.org/wiki/Chumbawamba
> https://en.wikipedia.org/wiki/Pictures_of_Starving_Children_...
> https://www.youtube.com/watch?v=gt_ztOo9Kak
> https://www.youtube.com/playlist?list=OLAK5uy_ljBD_smlWxwYsg...