If you only reference the SEC's own press releases, you are going to miss the nuance here.
The SEC on its own doesn't get to decide when and how existing securities law applies to cryptocurrencies. Absent a settlement, any action undertaken by the SEC must be decided by a federal court.
Importantly, the SEC has been losing in court. For instance, the SEC, which had been blocking a spot Bitcoin ETF, was told in unequivocal terms by the D.C. Circuit Court of Appeals that its reasons for not allowing the ETF to issue were completely unsound. More pertinent to the question of enforcement: another federal court recently found that exchange-traded Ripple XRP tokens are not securities, with the implication that the SEC does not have jurisdiction to regulate the trading of Ripple XRP tokens on exchanges. If you extrapolate this finding to other cryptocurrencies, the SEC cases against Kraken and Coinbase are on shaky ground.
The fact that the SEC can list so many victories on its website is more a function of how costly it is to fight the SEC in court, rather than being a function of whether the SEC is right in all of its assertions.
There have been too many cases of fraud in the crypto industry, and it's good that the SEC has pursued enforcement against them. There are cases, however, where the SEC has gone too far, and continues to go too far—especially in light of the fact that the SEC refuses to set forth clear criteria as to which crypto tokens it considers securities, and which it does not.
Now that the SEC is going after larger players, we are starting to see more cases actually go to court. If the trend continues, one or more of these cases will end up before the Supreme Court, and we will find out what the actual law is in the United States with regard to which crypto tokens are securities and which are not, and whether the SEC in fact has any jurisdiction at all over the crypto exchanges.
You should not be surprised if after everything is said and done—after we have a Supreme Court opinion—crypto is in fact a special case under US securities law, at least with regards to some tokens.
You should also not be surprised if some of the cases in the list that you reference lose their legal support once the law is clarified by the Supreme Court. In hindsight, some of these SEC enforcement actions may be seen as unfair and unjust.
So we shouldn't be surprised by anything changing, ever, given how much activism we are seeing in courts today. The real question is: how much do the people who actually decide what a law means like cryptocurrencies? I suspect the only good chance most of those companies have is to rely on the courts' dislike for government agencies, regardless of what the laws say. But as far as I am aware, the good friends of the court tend to be very involved in old banking, so they aren't fond of crypto companies either.
So maybe those companies should start lobbying Harlan Crow and his circle of friends.
Everyone is encouraged to rate anyone else in a variety of categories, as often as possible. Every rating is public. You know who is rating you, and how they did it. Those ratings are put together to get a score in every category, and can be seen by anyone. It's your Baseball Card.
The problem is that not everyone is equally 'credible' in their rating. If I am bad at underwater basketweaving, my opinions on the matter are useless. But if I become good, suddenly my opinion is very important. You can imagine how, as one accumulates ratings, the system becomes unstable: my high credibility lowers someone else's credibility, which changes the rankings again. How many iterations do we run before we consider the results stable? Maybe there are two groups of people who massively disagree on a topic. One will end up with high credibility and the other with low, and that determines the final scores. Maybe the opinion of one random employee just changes everyone else's scores massively.
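The feedback loop described above can be sketched as a toy fixed-point iteration. Everything here is hypothetical—the names, the 0–10 scale, and the weighted-average rule are my own invention, since the real system's rules aren't public—but the instability argument is the same: scores feed back into the weights that produce the scores.

```python
def iterate_scores(ratings, iters=50):
    """ratings[rater][target] = a rating in [0, 10].
    A person's score is the average of the ratings they received,
    weighted by each rater's own current score (their 'credibility')."""
    people = set(ratings) | {t for given in ratings.values() for t in given}
    scores = {p: 5.0 for p in people}  # start everyone at neutral credibility
    for _ in range(iters):
        new = {}
        for p in people:
            num = den = 0.0
            for rater, given in ratings.items():
                if p in given:
                    num += scores[rater] * given[p]  # weight by rater's score
                    den += scores[rater]
            new[p] = num / den if den else scores[p]  # unrated: keep old score
        scores = new
    return scores

# Two camps that rate themselves highly and each other poorly:
ratings = {
    "alice": {"bob": 9, "carol": 2},
    "bob":   {"alice": 9, "carol": 2},
    "carol": {"alice": 3, "bob": 3, "dave": 9},
    "dave":  {"carol": 9, "alice": 3},
}
scores = iterate_scores(ratings)
```

With these numbers the loop happens to settle, but whichever camp starts slightly ahead gets more weight each round, so the outcome is sensitive to initial conditions and to any single influential rater—exactly the instability described above.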
So the first thing is that the way we know an iteration is good depends on whether certain key people end up rated highly, because any result that, say, rates Ray as bad at critical thinking is obviously faulty. So ultimately, winners and losers on anything contentious are determined by fiat.
So then we have someone who is highly rated, and who is 100% aware of who is rating them badly. Do you really think it's safe to rate them badly? I don't. Therefore, if you don't have very significant clout, your rating of people should be simple: go look at that person's baseball card, and rate them something very similar in that category. Anything else is asking for trouble. You are supposed to be honest... but honestly, it's better to just agree with those who are credible.
So what you get is a system where, if you are smart, you mostly agree with the bosses... but not too much, so as to show that you are an independent thinker. And you'd better confine your disagreement to places that don't matter too much.
If there's anything surprising, it's that so many people involved in the initiative stayed on board that long, because it's clear that the stated goals and the actual goals are completely detached from each other. It's unsurprising that it's not a great place to work, although the pay is very good.
Either way we slice it, we'll all soon see what it is that brings people to certain publications. The brand? Long-form, heavily researched articles that simply take too much work to produce? The wokeness/anti-wokeness posturing? Is it a matter of just a few extremely talented people carrying a publication?
We all can make our guesses, but the market will say who is right.
It's increasingly clear that automating important decisions like this is causing a lot of harm while removing most forms of recourse available to those affected. Coupled with the way automated decisions are used to perform, and then launder, fraud on a massive scale, maybe we should target laws at the automation itself: require decisions made by automated systems of any kind to be auditable, and explicitly define which human is held responsible and what remedies can be applied.
As with any classification system though, 100% accuracy isn't going to happen. But there's always some customer service rep that can look at the details of the account, and see why in the world the system said what it did. But a detailed explanation of why we thought something was fraudulent could (and sometimes would!) just lead to another fun reddit post where someone describes how to hide the fraud a little better.
For any given system like this, how much harm is actually being done, vs how much is being prevented (as fraud just leads to raising prices to cover for it: financial companies are not charities)? I've read way too many CSR conversations where a blatant fraudster with world-class chutzpah would claim that we were destroying their family for no reason, when the data was damning. But this doesn't mean that everyone who isn't a fraudster really reaches out to the CSRs, and has the energy to prove there was no fraud. The actual levels of damage are just hard to measure.
We should have sensible, mandatory, available customer service access, which costs just enough to not be hammered by bots, but is fully refunded in case of error. But what is really causing this is that many companies have lowered the barrier to interaction so much that we are letting a lot of fraud through the door. Remember how getting a merchant account at a real bank was a multi-day affair? How getting hired as a delivery driver required an interview with a real person, and a manager checking in between deliveries? The price of not having to interact with a human to sign up is that fraud detection is no longer a boss you interact with every day, who makes sure you are working, and who is paid from the work you do. Companies with billions of customers and probably hundreds of millions of suppliers aren't exactly workable without automating a lot of those intermediate jobs away.
Maybe we made the wrong call across the board, and lower-productivity, but far higher trust commerce is the way to go... but a lot of that commerce is losing in the market, right now. So if we like it, we have to be willing to pay extra for it.
Agreed. There's a whole world of unreported 'sponsored' or otherwise promoted products, and streaming is a big part of it. And no one is immune.
When the cheesegrater Mac Pro came out, I watched a lot of videos on it, particularly on YouTube in the photo/video segment - I was planning to get one, and while I had other uses for it, I'd be doing a lot of photo work on it in my recreational time.
Quickly I noticed just how many of the big name streamers had launch day or very early access to the Mac Pro and Pro Display. Sure.
And then I noticed how each and every one spun it as "I just got mine", "just bought one", and so forth. All organic, they'd have you believe - not a single one said "Apple sent me this". And yet...
By "a curious coincidence", every single one had seemingly ordered the exact same spec: an 18-core CPU, 384GB of memory, the Vega II Duo GPU, an 8TB SSD, and the nano-texture Pro Display.
So what, you might think; that might simply have been the quickest-shipping configuration. It's also an $18,000+ computer, $25K with the display.
And if you're a photographer, even if you're working on medium format digital and 100MP images, you in no way, shape, or form need 384GB of memory, or that GPU. For me, Lightroom, Capture One, and Photoshop all barely broke a sweat on my 12-core, 192GB, W5700X variant.
So then Occam's Razor applies. What are the odds that, even of just the 8-10 streamers I watch, they all got exactly the same spec Mac? Or is it that that was the spec Apple was sending to high popularity streamers?
Except not a single one even implied that that might have been the case. And I don't doubt that many or all bought their own at some point. But I suspect it was mostly "got one from Apple, talked it up, and then substituted it with my own when it arrived".
Is there any big (or even medium-sized) company where this isn't true? I feel like it's just a rule of corporate culture that flashy overpromising projects get you promoted and regularly doing important but mundane and hard-to-measure things gets you PIP'd.
At another job, at a financial firm, I got a big bonus after I went live on November 28th with an upgrade that let a system handle 10x its previous maximum throughput, scaling linearly instead of being completely stuck at 1x. Median number of requests per second received on December 1st? 1.8x the old maximum... without the upgrade, the system would have failed under load, causing significant losses to the company.
Prevention is underrated, but firefighting heroics are so well regarded that sometimes it might even be worthwhile to be the arsonist.
So, it turns out this is not so stupid. I’m going to sound a little stupid, because this is not my field, but my sister is a plant molecular biologist. All her research/work is in plant DNA. Apparently it’s been recently discovered that the gene expression for the genetically modified plants was not as had been previously understood. Instead of turning on just one gene, it turns on like five other unexpected random genes. In addition there was something about “complex interactions” and “poorly tested and uncertain outcomes” or something. Any actual experts able to weigh in?
Anyway, she’s gone from “genetic modification of plants is a great thing because of improving food security” to “I’m not letting my kids eat this stuff”.
The problem is assuming that any other change that happens is dangerous and untested. There's mutation all over the place in perfectly organic plants, just like with GMOs. There are also changes in gene expression from those mutations, also just like with GMOs. An overwhelming majority of changes either do nothing or make the plant unviable altogether. The practical risk of a change that does something, doesn't harm yields, and yet somehow makes the parts of the plant that a human eats somewhat toxic is a huge stretch. It's even less likely when one considers the actual regulatory processes that happen later. It might seem crazy, but people who work on GMOs tend to be uninterested in poisoning the public.
If I were afraid of poisoning due to mutation (which I am not), I'd be more afraid of someone who has been crossing plants with some localized, ancestral wildtypes that have been planted in just one village for the last hundred years or so. Those are more likely to be untested. But it's like the risk of getting hit by lightning for the 5th time this week.
I am far more likely to be poisoned by a detergent, or someone that has let bacteria run amok in their packing facility, and is somehow selling, say, premixed salads that land in my local supermarket.
They are different from the common case of infertile hybrids, which have good reason to exist and whose infertility (IIUIC, not a botanist) is a side effect.
The idea of engineering seed so that new seed which comes from its crop will not germinate is just beyond wicked and evil.
But we've had that for something like a quarter century now.
$332M seems like peanuts.
Roundup Ready is a relatively good idea: Roundup applies easily, and after the first few generations (when control over how the GMO genes were inserted was so poor that yields were impacted), it's a very narrow change to the plant that makes farmers a lot of money. The vast majority of soybeans in the world carry Roundup Ready genes for a reason.
Terminator seeds sounded like a scary patent, but it was mostly useless. In corn, for instance, you'd never want to run it at all, as the seeds that are sold are almost perfect hybrids of 2 inbreds, which lose a whole lot of yield in the next generation without anyone really trying. There's no need for a gene when the next generation yields worse naturally.
If you want a sick situation with Bayer, forget Roundup and look at what's been happening with dicamba, the next-generation herbicide that new GMOs are protected from. It's not a new chemical, but it's very aggressive and it drifts: you spray a field, and many other fields around it are going to get hit. Supposedly Bayer is telling everyone that, in tests, the new formulations show no drift when applied properly under the right weather conditions... but reality disagrees. As a result, a whole lot of fields that aren't planted with dicamba-protected seeds are getting wrecked by not-so-close neighbors who haven't mastered the really difficult art of spraying dicamba on the calmest of days. We aren't talking about drift of 50 feet here, or 100; people relatively far away are having their crops ruined. This is happening often enough that we'll see bans, while more and more generations of Roundup Ready GMOs go out of patent.
I am pretty sure that this one is what is scaring Bayer's lawyers, not Roundup.