flockonus commented on CEO pay and stock buybacks have soared at the largest low-wage corporations   ips-dc.org/report-executi... · Posted by u/hhs
nostrademons · 3 days ago
It’s entirely possible that this is causal and deliberate, i.e. the reason why boards of these companies have approved large CEO pay packages is so that the CEO will align themselves with the shareholders paying them rather than the workers working for them, and cut wages so the money can be returned to shareholders as buybacks.
flockonus · 3 days ago
Google "CEO fiduciary duty" - that's very much within the definition of the CEO role.
flockonus commented on GPT-5   openai.com/gpt-5/... · Posted by u/rd
highfrequency · 17 days ago
It is frequently suggested that once one of the AI companies reaches an AGI threshold, they will take off ahead of the rest. It's interesting to note that, at least so far, the trend has been the opposite: as time goes on and the models get better, the performance of the different companies clusters closer together. Right now GPT-5, Claude Opus, Grok 4, and Gemini 2.5 Pro all seem quite good across the board (i.e. they can all basically solve moderately challenging math and coding problems).

As a user, it feels like the race has never been as close as it is now. Perhaps dumb to extrapolate, but it makes me lean more skeptical about the hard take-off / winner-take-all mental model that has been pushed.

Would be curious to hear the take of a researcher at one of these firms - do you expect the AI offerings across competitors to become more competitive and clustered over the next few years, or less so?

flockonus · 17 days ago
If we're focusing on the fast take-off scenario, this isn't a good trend to look at.

An SGI would be self-improving along some function whose shape is close to linear in time and resources. That is almost exclusively dependent on the software design, since transformers have so far shown they hit a wall, progressing roughly logarithmically with resources.

In other words, no, it has little to do with the commercial race.
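
A toy sketch of the contrast being drawn here (illustrative Python with made-up growth functions and constants, not a model of any real scaling law): doubling resources adds only a constant to a logarithmic capability curve, while a linear curve keeps pace with resources, so the gap between the two regimes widens without bound.

    import math

    def capability_log(resources: float, k: float = 1.0) -> float:
        # Toy "scaling wall": capability grows only logarithmically
        # in resources, roughly the shape attributed to transformers above.
        return k * math.log(1 + resources)

    def capability_linear(resources: float, k: float = 0.1) -> float:
        # Toy fast-takeoff regime: capability grows linearly in
        # resources, as claimed for a self-improving SGI above.
        return k * resources

    # Doubling resources barely moves the log curve but doubles
    # the linear one, so the two regimes diverge quickly.
    for r in (10, 100, 1_000, 10_000):
        print(f"resources={r:>6}  log={capability_log(r):6.2f}  "
              f"linear={capability_linear(r):8.1f}")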

flockonus commented on Purple Earth hypothesis   en.wikipedia.org/wiki/Pur... · Posted by u/colinprince
geokon · a month ago
sure but there are probably ecological niches that are light starved. for instance deeper in the water column or in dark areas like caves or polar regions
flockonus · a month ago
Of course; for example, several seaweeds come to mind in dark green to brown tones, which makes sense: they can disperse heat very quickly since they are immersed in cold water - liquid cooling, effectively.
flockonus commented on Purple Earth hypothesis   en.wikipedia.org/wiki/Pur... · Posted by u/colinprince
sampo · a month ago
> Yes, but why?

Scientific writing style is not always very good at highlighting the unknowns. "We don't know this" doesn't make for very convincing-looking text, so people tend to avoid admitting it up front.

But you are, of course, correct to ask.

Like other comments said, this is an open question.

One theory is that while the algae floating in the water were absorbing the broad spectrum, the algae growing attached to the bottom evolved chlorophyll to capture whatever was left at the edges of the spectrum. Later, land-based plants would have evolved from the water plants that were already attaching themselves to the bottom. But then why are the current ocean-floating algae also green now?

http://hyperphysics.phy-astr.gsu.edu/hbase/Biology/imgbio/pl...

Another theory is that a perfectly-absorbing leaf would somehow absorb too much energy and get overheated, and that it was better to absorb only part of the available light.

None of these theories are fully convincing, so the question remains open.

flockonus · a month ago
> Another theory is that a perfectly-absorbing leaf would somehow absorb too much energy and get overheated

If having both pigments means the plant would be close to black, overheating is an absolutely valid hypothesis imo. Plants, just like animals, have an optimal metabolic temperature; getting too hot is often deadly, while being under the optimal temperature is tolerable.

flockonus commented on Jujutsu for busy devs   maddie.wtf/posts/2025-07-... · Posted by u/Bogdanp
flockonus · a month ago
I legit ask myself how many folks avoid GitHub Desktop for some dogmatic reasoning equivalent to "having a UI makes it worse", when it handles the core of common flows extremely easily and clearly.

To be clear where it ties to this post: it makes git far more convenient with nearly 0 learning curve.

flockonus commented on MARS.EXE → COM (2021)   chaos.if.uj.edu.pl/~wojte... · Posted by u/reconnecting
flockonus · a month ago
Would love to see this running on the web via WASM or similar :)
flockonus commented on Linda Yaccarino is leaving X   nytimes.com/2025/07/09/te... · Posted by u/donohoe
phendrenad2 · a month ago
She stepped in and did a job, nothing more, nothing less. I don't see this as a failure; post-Elon Twitter is not a company that operates based on traditional characteristics, and I don't know what a CEO even does for such a company. It's obvious that Elon put her in charge to appease advertisers, but that gimmick only works for so long.

Anyway, I wouldn't have made it as long as she did. Being in charge of a cesspool of racist, misogynistic, antisemitic content like that is a fate worse than unemployment.

flockonus · a month ago
X was gobbled up by xAI, another of Elon's companies, no doubt to reduce some of the mess. So yes, a CEO there effectively does nothing.

https://www.reuters.com/markets/deals/musks-xai-buys-social-...

flockonus commented on Mercury: Ultra-fast language models based on diffusion   arxiv.org/abs/2506.17298... · Posted by u/PaulHoule
flockonus · 2 months ago
If anyone else is curious about the claim "Copilot Arena, where the model currently ranks second on quality"

This seems to be the link - mind-blowing results if that's indeed the case: https://lmarena.ai/leaderboard/copilot

flockonus commented on KernelLLM – Meta's new 8B SotA model   huggingface.co/facebook/K... · Posted by u/flockonus
flockonus · 3 months ago
> On KernelBench-Triton Level 1, our 8B parameter model exceeds models such as GPT-4o and DeepSeek V3 in single-shot performance. With multiple inferences, KernelLLM's performance outperforms DeepSeek R1. This is all from a model with two orders of magnitude fewer parameters than its competitors.

u/flockonus · Karma: 662 · Cake day: October 27, 2012