Readit News
lumost commented on The Walt Disney Company and OpenAI Partner on Sora   openai.com/index/disney-s... · Posted by u/inesranzo
philistine · 3 days ago
Yeah, basically Disney invested a billion dollars in Pregnant Elsa Spider-Man Beach Castle.
lumost · 3 days ago
They may be viewing this as an inevitable outcome with open models, fly-by-night providers, and providers in more liberal copyright jurisdictions.

They can either invest in mass classification and enforcement operations or gain some revenue share from it.

lumost commented on Microsoft Scales Back AI Goals Because Almost Nobody Is Using Copilot   extremetech.com/computing... · Posted by u/mtdewcmu
Festro · 4 days ago
Copilot is basically just whitelabelled ChatGPT. It's a big ask for people to use it over the source system.

ChatGPT gets the headlines, is seen as an innovator, and costs less.

Copilot offers what? A physical button on a Windows keyboard, OS integration when we're in our browsers 24/7 and Atlas exists?

My main gripe is that if Copilot has any value, MS does a piss-poor job of promoting it. I can see AI functions in MS365 being useful, and I can see MS-related headaches being solvable quicker with an AI buddy nudging me along to a resolution. But their press releases and demos, if they exist, do not compete with OpenAI's or Google's; hell, DeepSeek gets more coverage.

MS might as well give up and pursue integration and compatibility with the rest of the ecosystem. I know they won't though, they'll cut costs and still shove Copilot down our throats with feature creep and useless opt-out bulk.

lumost · 4 days ago
IMO the wrapper products all suffer from the same problem. The LLM is trained to do a specific set of tasks such as chat, coding, image understanding, image/video generation, and tool use in support of the above. If you suddenly ask the LLM to do something it was not trained for, such as producing PowerPoints, you get a few surprisingly successful results followed by a large set of crap. There is no reason for customers or your own team to expect the underlying model to improve unless token usage is so massive that it motivates training investment in this area.

LLMs are a facsimile of general intelligence on tasks that are similar to their training set and can be solved in a finite context length. If you are outside of the training set, you will have poor results. Likewise, if you are in the training set, then the foundation model vendor will already have a great product to sell you (Claude Code/ChatGPT, etc.).

lumost commented on Apple's slow AI pace becomes a strength as market grows weary of spending   finance.yahoo.com/news/ap... · Posted by u/bgwalter
lumost · 4 days ago
Companies with strong distribution have the option to be the "last" player in a market and simply force their way in. If Apple makes a "default" LLM which is as good as or better than all premium LLM options... then you would obviously choose to use that over paying for a ChatGPT subscription. Apple could probably upcharge the phone by $200 for this privilege. Alternatively, they may do what they did with search and simply get paid not to add LLM chat functionality.
lumost commented on Impacts of working from home on mental health tracked in study of Australians   abc.net.au/news/2025-12-0... · Posted by u/anotherevan
mierz00 · 9 days ago
I don’t do well with 100% working from home.

My preference is 3 days in the office, I find anything less than that and I struggle mentally. My home starts to feel like a prison and I lose connection with people.

I really value human connection and I just don’t get the same thing online.

lumost · 6 days ago
Why not use a co-working space? I found it the best of all worlds: your "coworkers" have no relationship with your "boss".
lumost commented on OMSCS Open Courseware   sites.gatech.edu/omscsope... · Posted by u/kerim-ca
rahimnathwani · 8 days ago
OMSCS requires ten courses to graduate. I completed one course (with an A grade) before realizing that, even at a pace of one course per semester, it was not a high enough priority for me to devote the time required to do each course well.

That course was great, though, and I definitely learned some things I'm glad to have learned!

IMO the instructional materials are a small part of the value. The things that stood out to me were:

- the assignments

- the autograding of programming assignments

- giving and receiving peer feedback about written assignments

- learning some LaTeX for those assignments

- having an artificial reason (course grade) to persist in improving my algorithm and code [on the problems taught in that course, I wouldn't have been self-motivated enough if they were just things I came across during a random weekend]

lumost · 8 days ago
The ability of OMSCS to scale paper writing, review, and grading with real human TAs is nothing short of astounding. While it's a ton of work (I'm just completing class #5) it's a great resource for both learning the material - and how to communicate it effectively.
lumost commented on The AI Backlash Is Here: Why Public Patience with Tech Giants Is Running Out   newsweek.com/ai-backlash-... · Posted by u/zerosizedweasle
cess11 · 9 days ago
I find it kind of common, used as a riff off of patterns in advertising and post-politics.
lumost · 9 days ago
It’s not universal - but it’s a compelling rhetorical device /s

It just sounds like slop because it’s everywhere now. The pattern invites questions about the authenticity of the writer, and whether they’ve fallen victim to AI hallucinations and sycophancy. I can quickly become offended when someone asks me to read their ChatGPT output without disclosing that it was GPT output.

Now when AI learns how to use parallelism I will be forced to learn a new style of writing to maintain credibility with the reader /s

lumost commented on The AI Backlash Is Here: Why Public Patience with Tech Giants Is Running Out   newsweek.com/ai-backlash-... · Posted by u/zerosizedweasle
ceroxylon · 9 days ago
> The friction isn’t just about quality—it’s about what the ubiquity of these tools signals.

Unless they are being ironic, using an AI accent with a statement like that for an article talking about the backlash to lazy AI use is an interesting choice.

It could have been human written (I have noticed that people that use them all the time start to talk like them), but the "its not just x — its y" format is the hallmark of mediocre articles being written / edited by AI.

lumost · 9 days ago
I’m quickly becoming convinced that humans adapt to AI content and quickly find it boring. It’s the same effect as walking through the Renaissance section of an art museum or watching 10 action movies. You quickly become accustomed to the flavor of the content and move on. With human-generated content, the process and limitations can be interesting - but there is no such depth to AI content.

This is fine for topics that don’t need to be exciting, like back-office automation, data analysis, programming, etc., but it leads me to believe most content made for human consumption will still need to be human-generated.

I’ve ceased using AI for writing assistance beyond spell check, accuracy checks, and automated review. The core prose has to be human-written to not sound like slop.

lumost commented on Why are 38 percent of Stanford students saying they're disabled?   reason.com/2025/12/04/why... · Posted by u/delichon
ultrarunner · 10 days ago
> Adderall and other amphetamines only have problems with long term usage.

My research was done a long time ago. I understood Ritalin to have mild neurotoxic effects, but Adderall et al to be essentially harmless. Do you have a source for the benefits giving way to problems long-term?

Regardless, your overall point is interesting. Presumably, these drugs are (ridiculously tightly) controlled to prevent society-wide harm. If that ostensible harm isn't reflected in reality, and there is a net benefit in having a certain age group accelerate (and, presumably, deepen) their education, perhaps this type of overwhelming regulatory control is a mistake. In that sense, it's a shame that these policies are imposed federally, as comparative data would be helpful.

lumost · 10 days ago
I went to university at a time when Adderall was commonplace, and am now old enough to see how it turned out for the individuals. At college, it was common for students to illicitly purchase Adderall to use as a stimulant to cram for a test/paper etc. It was likewise common for students to abuse these drugs by taking pills at a faster-than-prescribed pace to work for 48 hours straight, amongst other habits.

In the workplace, I saw the same folks struggle to work consistently without abusive dosages of such drugs. A close friend eventually went into in-patient care for psychosis due to his interaction with Adderall.

Like any drug, the effect wears off - Cognitive Behavioral Therapy matches prescription drugs at treating ADHD after 5 years. As I recall, the standard dosages of Adderall cease to be effective after 7-10 years due to changes in tolerance. Individuals trying to maintain the same therapeutic effect will either escalate their usage beyond "safe" levels or revert to their unmedicated habits.

lumost commented on Why are 38 percent of Stanford students saying they're disabled?   reason.com/2025/12/04/why... · Posted by u/delichon
shetaye · 10 days ago
Regarding Stanford specifically, I did not see the number broken down by academic or residential disability (in the underlying Atlantic article). This is relevant, because

> Some students get approved for housing accommodations, including single rooms and emotional-support animals.

buries the lede, at least for Stanford. It is incredibly commonplace for students to "get an OAE" (Office of Accessible Education) exclusively to get a single room. Moreover, residential accommodations allow you to be placed in housing prior to the general population and thus grant larger (& better) housing selection.

I would not be surprised if a majority of the cited Stanford accommodations were not used for test taking but instead used exclusively for housing (there are different processes internally for each).

edit: there is even a practice of "stacking" where certain disabilities are used to strategically reduce the subset of dorms in which you can live, to the point where the only intersection between your requirements is a comfy single, forcing Admin to put you there. It is well known, for example, that a particularly popular dorm is the nearest to the campus clinic. If you can get an accommodation requiring proximity to the clinic, you have narrowed your choices to that dorm or another. One more accommodation and you are guaranteed the good dorm.

lumost · 10 days ago
They lead with the headline that most of these students have a mental health disability - particularly ADHD. Is it surprising that legalized amphetamines drive teenagers to higher performance for a short period in their lives? Adderall and other amphetamines only have problems with long-term usage.

It should be expected that some portion of the teenage population sees a net-benefit from Amphetamines for the duration of late high school/college. It's unlikely that that net-benefit holds for the rest of their lives.

lumost commented on I ignore the spotlight as a staff engineer   lalitm.com/software-engin... · Posted by u/todsacerdoti
postit · 10 days ago
One thing I’ve learned in my 25+ year career is that if you don't own your narrative and your work, someone else will claim it - especially in corporate America.

I have lost count of the brilliant engineers who were passed over for credit simply because someone less technically capable, but extremely popular, pulled the strings to steal the spotlight.

You don't necessarily need to be in the spotlight, but you do need to leave a paper trail. Claim your work and inventions both internally and externally. You don't need to be a 'LinkedIn thought leader' to do this, just submit talks to conferences and find peers at other companies who understand the difference between those who build and those who only talk about building.

lumost · 10 days ago
This is the biggest benefit of 1:1s, in my opinion.

Often, individuals can claim credit simply by being first and loudest. For example, an individual can highlight a problem area that someone on the team is already working on, loudly talk about the flaws in the current approach, and say how they will solve it. The individual need not actually solve the task; if the first person finishes, the success is subconsciously attributed to the thought leadership/approach of the new individual.

Good managers/leadership teams have mechanisms to limit this type of strategy, but it requires them to talk to everyone on the team, listen for unsaid feedback, and look at hard artifacts. Otherwise you quickly have a team of people who are great at nothing more than talking about problems and dreaming of solutions.
