However, open intermediary victims up to contributory lawsuits, and everyone will have to take security more seriously. Think twice before you connect that new piece-of-shit IoT device.
The problem with that is there are a number of ways to prevent you from holding cash as well. Bank regulations around how much money you can withdraw or access, scrutiny over how much cash you can carry through an airport, asset forfeiture without due process, etc. all allow governments to coerce you into whatever they want. Cash is not necessarily a solution either.
I'm often accused of letting my skepticism hold me back from really trying it properly, and maybe that's true. I certainly could not imagine going months without writing any code, letting the AI just generate it while I prompt.
My work is pushing these tools hard and it is taking a huge toll on me. I'm constantly hearing how life-changing this is, but I cannot replicate it no matter what I do.
I'm either just not "getting it", or I'm too much of a control freak, or everyone else is just better than I am, or something. It's been miserable. I feel like I'm either extremely unskilled or everyone else is gaslighting me, with basically nowhere in between.
I have not once had an LLM generate code that I could accept. Not one time! Every single time I try to use the LLM to speed me up, I get code I have to heavily modify to correct. Sometimes it won't even run!
The advice is to iterate, but that makes no sense to me! I would easily spend more time iterating with the LLM than just writing the code myself!
It's been extremely demoralizing. I've never been unhappier in my career. I don't know what to do; I feel like I'm falling behind and being singled out.
I probably need to change employers to get away from AI usage metrics at this point, but it feels like everyone everywhere is guzzling the AI hype. It feels hopeless.
The untrained temp workers using AI to do the entirety of their jobs aren't producing code of professional quality; it doesn't adhere to best practices or security unless you monitor that shit like a hawk. But if you're still engineering for quality, then AI is not the first train you've missed.
They will get code into production quicker and cheaper than you through brute force iteration. Nothing else matters. Best practices went the way of the rest of the social contract the instant feigned competence became cheaper.
Even my podunk employer has AI metrics. You won't escape it. AI will eventually gatekeep all expertise and the future employee becomes just a disposable meat interface (technician) running around doing whatever SHODAN tells them to.
Let's be specific: NSO Group sold Pegasus to Saudi Arabia, which used it to track Jamal Khashoggi's inner circle before his assassination. They sold to Mexico, where it was used to target journalists' families within days of their murders. To Rwanda, to hunt dissidents abroad after imprisoning their families at home. The list goes on.
This isn't cherry-picking. When Citizen Lab analyzes global spyware operations, Israeli companies dominate: NSO, Candiru, Paragon, QuaDream, and arguably Cytrox (Macedonian, but with Israeli leadership and investors). The common thread? Former Unit 8200 personnel, who've turned state cyber-warfare capabilities into a business model explicitly built on selling to authoritarians.
Your "but everyone does it" framing fundamentally misrepresents the issue. Yes, other countries have surveillance companies. But there's a massive difference between developing capabilities and systematically selling them to regimes that murder journalists. When was the last time a German or French company's tools were found on a murdered journalist's or imprisoned political dissident's phone?
The data shows Israeli companies don't just happen to have "bad PR" (or uniquely terrible luck in choosing their clients) - they actively court authoritarian clients because that's where the money is if you have no morals.
For some context: Israel has a population of less than 10 million, roughly 0.1% of the world's population. If you have a persuasive argument for why Israeli spyware is routinely found on victims' devices by organizations like Citizen Lab, why their products seem so uniquely popular and successful with fascists and authoritarians, I'd love to hear it. Because from where I'm standing, the clear and obvious explanation is that there is a deep, systemic issue in the Israeli private intelligence and cybersecurity sector: it is entirely unconcerned with how its tools will be used, or by whom, as long as the money's right. All enabled by the Israeli authorities, who need to approve these exports.
You're right that spyware companies exist elsewhere. But when researchers keep finding the same tiny country's products in the phones of murdered journalists and jailed activists, dismissing scrutiny as bias is itself a bias. The question isn't why Israeli companies get attention - it's why they keep selling to regimes that use their tools to crush dissent, and worse.
If you are paying for a VPN, the odds are good that it's owned by Kape Technologies, another Israeli company staffed by former Unit 8200 personnel. PIA and a bunch of others are now under their purview.
They'll say they don't keep logs, but only an idiot would trust that.
Cellebrite also does questionable shit with phone forensics; newer products upload phone images to "the cloud." Supposedly it's instanced, and law enforcement is just supposed to trust that yet another function the Justice Department outsources to Israel isn't backdoored by them, like Inslaw/PROMIS.
I agree with all else but I get a different impression here.
The unethical behavior really ramped up not with participation trophies per se, but around the same time we started gamifying everything. People treated each other like NPCs in GTA to abuse for their own amusement or advancement. On the internet we stopped being people and became targets to destroy. Nothing appeals to psychopathic behavior like turning any environment into a feedback loop of reward-seeking.
"Gamifying" life should give anyone pause when you witness how people act within existing game systems. How much effort is spent policing fraud, abuse, and antisocial behavior within the zero-stakes environments of games themselves? Multiplayer Minecraft is unplayable unless you band together into tribal cliques with private servers (de facto ingroup "racism" with no inclusivity clause for trolls); otherwise random people will log in, destroy your world on purpose, and leave. We see similar behavior in real life from terminally-online types committing arson and trying to bait law enforcement into killing people. It's not coincidental.
Gamifying anything drives a competitive environment in which people are compelled to win (dominate) by any means necessary. It's not ethical to force opponents into bankruptcy by ruthlessly exploiting them, but it's the literal point of Monopoly.
School itself is gamified through the reward of grades and privileges. The system you describe is the one we already have. People will always have incentive to cheat to get ahead, especially when competing for or trying to retain tenured positions.