Readit News
blueblisters commented on LLMs tell bad jokes because they avoid surprises   danfabulich.medium.com/ll... · Posted by u/dfabulich
IshKebab · 7 days ago
This sounds really convincing but I'm not sure it's actually correct. The author is conflating the surprise of punchlines with their likelihood.

To put it another way: ask a professional comedian to complete a joke with a punchline. It's very likely that they'll give you a funny, surprising answer.

I think the real explanation is that good jokes are actually extremely difficult. I have young children (4 and 6). Even 6-year-olds don't understand humour at all. Much like LLMs, they know the shape of a joke from hearing jokes before, but they aren't funny, in the same way LLM jokes aren't funny.

My 4 year old's favourite joke, that she is very proud of creating is "Why did the sun climb a tree? To get to the sky!" (Still makes me laugh of course.)
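The likelihood-vs-surprise distinction drawn above can be made concrete: in information theory, a token's surprisal is -log2 of its probability, so a decoder that maximizes likelihood systematically avoids the surprising completion. A toy sketch (token names and probabilities are invented for illustration):

```python
import math

# Hypothetical next-token distribution after a joke setup.
# Surprisal is -log2(p): the less likely a token, the more "surprising" it is.
next_token_probs = {
    "obvious_ending": 0.70,   # safe, predictable completion
    "cliche_pun": 0.25,       # seen many times in training data
    "novel_punchline": 0.05,  # the genuinely surprising twist
}

surprisal = {tok: -math.log2(p) for tok, p in next_token_probs.items()}

# Greedy (likelihood-maximizing) decoding picks the highest-probability token,
# which is exactly the least surprising, least funny option.
greedy_choice = max(next_token_probs, key=next_token_probs.get)
print(greedy_choice)                           # obvious_ending
print(round(surprisal["novel_punchline"], 2))  # 4.32
```

The point is that "surprising" and "likely" pull in opposite directions by construction, which is the conflation the comment objects to.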

blueblisters · 7 days ago
Also, the pretrained LLM (the one trained to predict the next token of raw text) is not the one that most people use

A lot of clever LLM post-training seems to steer the model towards becoming an excellent improv artist, which can lead to “surprise” if prompted well
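One way to read the point about post-trained models producing "surprise": inference-time sampling settings also control how surprising outputs can be. A minimal sketch of temperature sampling (the logit values are made up for illustration):

```python
import math

def softmax(logits, temperature):
    """Softmax with temperature. Higher temperature flattens the
    distribution, giving low-probability 'surprising' tokens a
    better chance of being sampled."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits: obvious ending, cliche pun, novel punchline
logits = [4.0, 2.0, 0.0]

cold = softmax(logits, temperature=0.5)
hot = softmax(logits, temperature=2.0)

# The rare token's probability grows as temperature rises.
print(round(cold[2], 4), round(hot[2], 4))  # 0.0003 0.09
```

This is orthogonal to post-training itself, but it shows why the deployed sampling configuration, not just the base model, shapes how "surprising" the output feels.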

blueblisters commented on Cursor CLI   cursor.com/cli... · Posted by u/gonzalovargas
TechDebtDevin · 16 days ago
You're allowing them to train on your code?
blueblisters · 16 days ago
The code isn’t the valuable part. Cursor knows all the most common workflows and failure modes, allowing them to create better environments for training agentic models
blueblisters commented on Cursor CLI   cursor.com/cli... · Posted by u/gonzalovargas
ribeyes · 16 days ago
i'm betting on cursor being the long-term best toolset.

1. with tight integration between cli, background agent, ide, github apps (e.g. bugbot), cursor will accommodate the end-to-end developer experience.

2. as frontier models internalize task routing, there won't be much that feels special about claude code anymore.

3. we should always promote low switching costs between model providers (by supporting independent companies), keeping incentives toward improving the models not ui/data/network lock-in.

blueblisters · 16 days ago
> we should always promote low switching costs between model providers (by supporting independent companies), keeping incentives toward improving the models not ui/data/network lock-in

You’re underestimating the dollars at play here. With cursor routing all your tokens, they will become a foundation model play sooner than you may think

blueblisters commented on Gemini 2.5 Deep Think   blog.google/products/gemi... · Posted by u/meetpateltech
NitpickLawyer · 23 days ago
Saw one today from gpt5 (via some api trick someone found) that was better than this, let me see if I can find it.

Pelican:

https://www.reddit.com/media?url=https%3A%2F%2Fpreview.redd....

Longer thread re gpt5:

https://old.reddit.com/r/OpenAI/comments/1mettre/gpt5_is_alr...

blueblisters · 22 days ago
Uh, that doesn't look better. It has more texture, but the composition is bad/incomplete
blueblisters commented on East Asian aerosol cleanup has likely contributed to global warming   nature.com/articles/s4324... · Posted by u/defrost
matthewdgreen · a month ago
I don't see this as a truly organic reaction. When I see the same laws popping up in multiple states, my suspicion is that it's driven centrally by right-wing think tanks, probably to benefit the fossil fuel lobbies. You don't need aerosol injection if there's no climate change, so we need to make it illegal (just as we need to defund Earth sciences, fire climate scientists, etc.) Similarly, if we need aerosol injection, then climate change is real. It's all one big package.
blueblisters · a month ago
Possibly. Seems like a mix of conspiracy-theory induced paranoia and right-wing influencers pushing a coordinated narrative.

Ironically, aerosol injection will probably benefit fossil fuel companies, with less pressure to meet aggressive emissions targets.

blueblisters commented on East Asian aerosol cleanup has likely contributed to global warming   nature.com/articles/s4324... · Posted by u/defrost
matthewdgreen · a month ago
The red states have begun banning geoengineering and even small-scale tests. It seems to be spreading across these states, which suggests that we'll soon see similar laws being proposed at the Federal level.
blueblisters · a month ago
the uproar over minor, localized cloud-seeding (which had nothing to do with the Texas floods) is probably a death knell for aerosol injection.

we are going to see countries going to war over unilateral solar radiation management efforts

blueblisters commented on East Asian aerosol cleanup has likely contributed to global warming   nature.com/articles/s4324... · Posted by u/defrost
cubefox · a month ago
The short term effects are known though (bad indoor ventilation causes decreased intelligence due to increased CO2 concentration), and a permanent short term effect would arguably be a long term effect.
blueblisters · a month ago
we are nowhere close to the levels of CO2 concentration that would affect cognitive performance.

skimming through a couple of studies, measurable impact starts around 1000 ppm. with current policy intervention, we will likely reach 550 ppm by 2100

blueblisters commented on Sam Altman Slams Meta’s AI Talent Poaching: 'Missionaries Will Beat Mercenaries'   wired.com/story/sam-altma... · Posted by u/spenvo
spenvo · 2 months ago
OpenAI's tight spot:

1) They are far from profitability.

2) Meta is aggressively making their top talent more expensive, and outright draining it.

3) Deepseek/Baidu/etc are dramatically undercutting them.

4) Anthropic and (to a lesser extent?) Google appear to be beating them (or, charitably, matching them) on AI's best use case so far: coding.

5) Altman is becoming less like-able with every unnecessary episode of drama; and OpenAI has most of the stink from the initial (valid) grievance of "AI companies are stealing from artists". The endless hype and FUD cycles, going back to 2022, have worn industry people out, as has the flip-flop on "please regulate us".

6) Its original, core strategic alliance with Microsoft is extremely strained.

7) Related to #6, its corporate structure is extremely unorthodox and likely needs to change in order to attract more investment, which it must (to train new frontier models). Microsoft would need to sign off on the new structure.

8) Musk is sniping at its heels, especially through legal actions.

Barring a major breakthrough with GPT-5, which I don't see happening, how do they prevail through all of this and become a sustainable frontier AI lab and company? Maybe the answer is they drop the frontier model aspect of their business? If we are really far from AGI and are instead in a plateau of diminishing returns that may not be a huge deal, because having a 5% better model likely doesn't matter that much to their primary bright spot:

Brand loyalty from the average person to ChatGPT is their best bright spot, along with OpenAI successfully eating Google's search market. Their numbers there have been truly massive from the beginning, and are, I think, the most defensible. Google AI Overviews continue to be completely awful in comparison.

blueblisters · 2 months ago
If they can turn ChatGPT into a free cash flow machine, they will be in a much more comfortable position. They have the lever to do so (ads) but haven't shown much interest there yet.

I can't imagine how they will compete if they need to keep burning cash and raising capital until 2030.

blueblisters commented on Sam Altman Slams Meta’s AI Talent Poaching: 'Missionaries Will Beat Mercenaries'   wired.com/story/sam-altma... · Posted by u/spenvo
andsoitis · 2 months ago
> OpenAI is the only answer for those looking to build artificial general intelligence

Let’s assume for a moment that OpenAI is the only company that can build AGI (specious claim), then the question I would have for Sam Altman: what is OpenAI’s plan once that milestone is reached, given his other argument:

> “And maybe more importantly than that, we actually care about building AGI in a good way,” he added. “Other companies care more about this as an instrumental goal to some other mission. But this is our top thing, and always will be.”

If building AGI is OpenAI’s only goal (unlike other companies), will OpenAI cease to exist once the mission is accomplished, or will a new mission be devised?

blueblisters · 2 months ago
Nope, AGI is not the end goal - https://blog.samaltman.com/the-gentle-singularity

> OpenAI is a lot of things now, but before anything else, we are a superintelligence research company.

IMO, AGI is already a very nebulous term. Superintelligence seems even more hand-wavy. It might be useful to define and understand the limits of "intelligence" first.

blueblisters commented on OpenAI dropped the price of o3 by 80%   twitter.com/sama/status/1... · Posted by u/mfiguiere
Davidzheng · 2 months ago
Gemini is close (if not better), so it just makes sense, no? o3-pro might be ahead of the pack tho
blueblisters · 2 months ago
o3 does better, especially if you use the API (not ChatGPT)

u/blueblisters

Karma: 1224 · Cake day: September 19, 2019