For instance, an alleged car bombing in Donetsk right before the invasion was staged using cadavers,[1] and a video used to claim that Ukrainian troops were moving aggressively into separatist territory was filmed far from its purported location.[2]
If Russia were so sure that Ukraine was committing these atrocities, it wouldn't need to rely on fake videos to justify attacking.
[1] https://www.bellingcat.com/news/2022/02/28/exploiting-cadave...
[2] https://www.cnn.com/2022/02/22/europe/russia-videos-debunkin...
Of course, there are grey areas here - there certainly exist anti-radiation missiles that can be launched without a defined target and programmed to home in on anything whose radar emissions match known enemy systems.
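To make that grey area concrete, here's a minimal illustrative sketch of the kind of emitter-signature matching such a weapon might perform. The threat library, field names, and parameter ranges below are entirely made up for illustration, not real system data:

    from dataclasses import dataclass

    # Hypothetical threat library - the bands and PRI ranges are invented.
    @dataclass
    class EmitterSignature:
        name: str
        freq_ghz: tuple  # (low, high) carrier frequency band in GHz
        pri_us: tuple    # (low, high) pulse repetition interval in microseconds

    KNOWN_THREATS = [
        EmitterSignature("hostile-radar-A", (8.5, 9.6), (250, 400)),
        EmitterSignature("hostile-radar-B", (2.9, 3.4), (800, 1200)),
    ]

    def matches_known_threat(freq_ghz, pri_us):
        """Return the first library entry the observed emission falls
        within, or None. A weapon in such a mode engages whatever matches,
        with no human ever designating a specific target."""
        for sig in KNOWN_THREATS:
            if sig.freq_ghz[0] <= freq_ghz <= sig.freq_ghz[1] \
                    and sig.pri_us[0] <= pri_us <= sig.pri_us[1]:
                return sig.name
        return None

    # An intercepted 9.1 GHz emission with a 300 us PRI would "match":
    print(matches_known_threat(9.1, 300))  # -> hostile-radar-A

The point being that "target selection" here reduces to a lookup against a pre-loaded table, executed long after launch.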
Although submarines are more effective at navigating through storms, for non-time-sensitive cargo one could simply wait for the storm to pass and then use a regular surface cargo ship (which could be automated if desired, just like the submarine). Surface ships also have the advantage of compatibility with our already-established shipbuilding and maintenance infrastructure, whereas a submarine would require developing new skills and tooling to support it.
For time-critical cargo (where one can't wait for a hypothetical storm to pass), aircraft would likely be the better option for most shipments - granted, aircraft can't operate in severe storms either, but in those conditions the very act of loading and unloading a submarine would be hazardous as well.
I'm sure that there is a great deal of discussion about potential anticompetitive issues within Google and with their outside counsel, but in a context where legal privilege protects against disclosure.
At the same time, there isn't much of an outside market for algorithmic bias findings in the way there is for security vulnerabilities. Probably the biggest effect of this reward will be to pull some grad students who were going to study algorithmic bias anyway toward studying Twitter specifically - after all, there aren't any rewards for studying the algorithmic bias of other companies!
That's only due to selection effects. If being open were the default, they'd be diluted among all the other people. ISPs themselves, (older) reddit, and 4chan all serve as examples that the people you don't want to talk to can mostly be siloed off in some corner while you have fun in your own. Things only get problematic once you add amplification mechanisms like twitter and facebook feeds or reddit's frontpage.
> For one thing, it's easy to say 'well we'd only take down illegal content'. But in practice there isn't such a bright line, there's lots of borderline stuff, authorities could rule something posted on your site illegal after the fact - lots of these situations are up to a prosecutor's judgement call. Would you risk jail to push the boundaries?
I don't see how that's an issue? "They send a court order, you take down the content" is a perfectly reasonable default procedure. For some categories of content there already exist specific laws requiring takedown on notification without a court order - exactly which categories depends on jurisdiction, of course, but in most places that would cover at least copyright claims and child porn.
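As a minimal sketch of what that default procedure might look like on the hosting side - the notice categories and the policy table here are hypothetical and would vary by jurisdiction:

    from enum import Enum, auto

    class NoticeType(Enum):
        COURT_ORDER = auto()       # always honored
        COPYRIGHT_NOTICE = auto()  # DMCA-style: takedown on notification
        CSAM_REPORT = auto()       # takedown (and report) on notification
        COMPLAINT = auto()         # everything else: no automatic action

    # Which notice types trigger removal *without* a court order is a
    # jurisdiction-dependent policy choice; this set is illustrative.
    AUTO_TAKEDOWN = {NoticeType.COPYRIGHT_NOTICE, NoticeType.CSAM_REPORT}

    def remove_content(content_id):
        print(f"taking down {content_id}")  # stand-in for the storage layer

    def handle_notice(notice_type, content_id):
        if notice_type == NoticeType.COURT_ORDER or notice_type in AUTO_TAKEDOWN:
            remove_content(content_id)
            return "removed"
        # Borderline material stays up until a court actually rules on it.
        return "no action"

    handle_notice(NoticeType.COMPLAINT, "post-123")    # -> "no action"
    handle_notice(NoticeType.COURT_ORDER, "post-123")  # takes it down

The design choice is simply that anything short of a court order or a legally mandated notice category defaults to leaving the content up.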
> Pretty soon the FBI & CIA start contacting you about some of the actual or borderline illegal content being hosted on Free Speech Drive. Do you want to deal with that?
That's pretty much what telcos have to deal with, for example. Supposedly 4chan also gets requests from the FBI every now and then. It may be a nuisance, but it's not some insurmountable obstacle. For big players this shouldn't be an issue, and smaller ones will fly under the radar most of the time anyway.
Also, having stricter policies doesn't make those problems go away. People will still post illegal content, but now in addition to dealing with the FBI you also need to deal with moderation policies, psychiatrists for your traumatized moderators (whom you're making look at that content), and end users complaining that your policy covers X but not Y, or that it's inconsistently enforced, or whatever.
Any business with the technical ability to censor what it hosts is going to be tempted (and likely pressured by other actors) to take down content that people find objectionable. Removing these "chokepoints" where a small number of people have the ability to engage in mass censorship is key if you want to promote more diverse speech on the web. (Not everyone has this goal!)