Perhaps a simpler explanation is just contrast? Both OLEDs and CRTs can produce much higher contrast than LCDs.
But then, here's the reality: any large company will have a need for an army of directors and VPs, compared to only a handful of ultra-senior, visionary engineers. Take Google and compare the number of Jeff Dean-type folks to the number of senior managers collecting similar paychecks.
So yeah, if your goal is to retire early without depending too much on luck or on being exceptional, management is your career growth path. For better or worse.
It's really that. He falls for the same trap many modern critics of progress do: the nostalgia for a world that never existed, when men lived meaningful lives in peaceful harmony with nature... juxtaposed with all the purported moral, societal, and environmental decay of today.
Many people find it alluring today, but the themes are evergreen. They crop up in ancient Greece, in the Middle Ages, and throughout history.
Misplaced nostalgia aside, another problem with most such ideologies is that the prescription for returning to that utopian bygone era inevitably involves force: the premise is that our minds are too corrupted to understand what's right. Whether that's blowing things up or taking away your rights is just an implementation detail.
They could probably spread things out instead, but real estate is stupidly expensive, and Google was never known for spacious accommodations (except maybe in some remote offices). I can't imagine they have any motivation to spend more if their approach worked fine for more than a decade.
I don't love it, but how many applicants walked away because of cramped open spaces? How many top performers quit for that reason? We just put up with it.
I've tried before to gaslight GPT4 into saying things that are mathematically untrue: I lied to it, told it it was malfunctioning, told it to just do it. It wouldn't.
I was recently studying linear algebra, which can be a very tricky subject. In linear algebra, the column space of a matrix is the same as the column space of its product with its own transpose: C(A) = C(AA^T). If you ask GPT4 whether "C(A) = C(AA^T)" is true, it understands what you're asking and knows it's about linear algebra, but it gets it wrong (at the time of this writing, I've tried several times).
I couldn't get GPT4 to agree it was a true statement until I told it the steps of the proof. Once it saw the proof it agreed it was a true statement. However, if you try to apply the same proof to C(A) = C((A^T)A), GPT4 cannot be tricked, and indeed, the proof is not applicable to this latter case.
So GPT4 was initially wrong yet could be persuaded by a correct proof, while a very similar proof with a subtle flaw could not trick it.
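For what it's worth, the identity itself is easy to sanity-check numerically. Here's a minimal sketch using NumPy: two matrices have the same column space exactly when stacking them side by side does not increase the rank, and rank(AA^T) = rank(A) guarantees this holds for A and AA^T.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))  # an arbitrary 4x6 matrix

def same_column_space(M, N):
    """C(M) = C(N) iff rank(M) = rank(N) = rank([M | N]):
    neither matrix contributes a direction outside the other's span."""
    r_m = np.linalg.matrix_rank(M)
    r_n = np.linalg.matrix_rank(N)
    r_both = np.linalg.matrix_rank(np.hstack([M, N]))
    return r_m == r_n == r_both

print(same_column_space(A, A @ A.T))  # True: C(A) = C(AA^T)
```

Note that the analogous check for C(A) = C(A^T A) can't even be set up here: A^T A is 6x6, so its columns live in a different space than A's, which is exactly why the proof doesn't carry over.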
Early LLMs were very malleable, so to speak: they would go with the flow of what you're saying. But this also meant you could get them to deny climate change or advocate for genocide by subtly nudging them with prompts. A lot of RLHF work focused on getting them to give brand-safe, socially acceptable answers, and this is ultimately achieved by not giving credence to what the user is saying. In effect, the models pontificate instead of conversing, and will "stand their ground" on most of the claims they're making, no matter if right or wrong.
You can still get them to do 180-degree turns or say outrageous things using indirect techniques, such as presenting external evidence. That evidence can be wrong or bogus; it just shouldn't be phrased as your opinion. You can cite made-up papers by noted experts in the field, reference invalid mathematical proofs, etc.
It's quite likely that what you observed was random: it happened to work in one case but not the other. I'd urge you to experiment by providing it with patently incorrect but plausible-sounding proofs, scientific references, etc. It will "change its mind" to say what you want it to say more often than not.
A more likely explanation is that fewer followers means higher engagement, which totally makes sense; for a larger social network account with 100k+ followers, many of those will be bots and people who follow anyone and everyone without really caring. A smaller account will only have followers who care.
I don't think there's anything specific about Mastodon here other than it being a smaller social network so you naturally have far fewer followers.
In either case, this is a single account talking about a single post, and shouldn't be used to generalize different levels of engagement across social networks.
People are jumping too quickly to deeply integrate this tech with everyday things, and while that's great for many use cases, it's not so great for others.
You won't find the likes of Yudkowsky talking about this because they'd rather go on about their fan fiction than actually work to prevent real problems we may face soon. The fact that they've tricked some really smart people into going along with their shenanigans is extremely disappointing.
But our industry already operates this way. Google will cut you off for triggering automated rules, and good luck getting human help. AI will not make this worse, but it will be used by such businesses to make their customer service appear better. It will feel like you're talking to a real person again.
Agriculture is outright wasteful of water. California agriculture consumes 80% of the state's water.
https://water.ca.gov/Programs/Water-Use-And-Efficiency/Agric...
It's the environmental equivalent of Amdahl's law: we spend so much effort making a small portion of water use efficient when we could do far less work to make agriculture more efficient. Of course, it's all because of lobbying.
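The Amdahl's-law analogy can be made concrete with back-of-the-envelope arithmetic, assuming agriculture's 80% share from the figure above and lumping everything else into the remaining 20%:

```python
ag, other = 0.80, 0.20  # shares of California water use: agriculture vs. everything else

# A heroic 50% cut in non-agricultural use saves only 10% of total water:
other_savings = other * 0.5
print(other_savings)  # 0.1

# The same 50% cut applied to agriculture saves 40% of total water:
ag_savings = ag * 0.5
print(ag_savings)  # 0.4
```

Just as in Amdahl's law, total savings are capped by the share of the dominant consumer, so optimizing the 20% slice can never save more than 20% overall.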
Some irrigated farms in the Central Valley will be withdrawing from aqueducts, but part of the reason the valley is dry is that we built those aqueducts in the first place, harming agricultural land for the benefit of SoCal cities, with the promise that the farmers would be able to use that water. So I'm not sure it's fair for us to claim the moral high ground.
Much of the California water crisis is manufactured too. There's no shortage of freshwater for the foreseeable future, but we're not building new dams, aqueducts, etc. We're essentially relying on infrastructure built in the 1960s and before, for a population only a fraction of what we have right now. Climate change plays a role, but the bulk of the pain is self-inflicted and has little to do with growing rice or watering our lawns.