Readit News
keithwhor commented on Global Intelligence Crisis   citriniresearch.com/p/202... · Posted by u/tin7in
dvfjsdhgfv · 21 days ago
> I personally see consumer and entertainment spending, and people employed lucratively in these sectors, growing dramatically.

You may be right. OTOH, one could say the last decade had the best conditions ever to create the best movies, and yet for some reason I feel that the newer the movie is, the less soul it has.

keithwhor · 21 days ago
The economics of production cost, investment and distribution have created a lopsided industry where only guaranteed hits get funded. Less soul = pandering to more people.

With new tools we can reduce the production costs of great movies considerably. More budget, if it exists, can go to marketing and distribution. I expect this will lead to more experimental films and a lot more "soul." There will be a TON of slop, too, but that's fine! It's all part of experimentation with a new medium.

keithwhor commented on Global Intelligence Crisis   citriniresearch.com/p/202... · Posted by u/tin7in
keithwhor · 21 days ago
My guess is that intelligence gets commodified to the point where LLMs and diffusion models are sold on chips and we seamlessly integrate them into the HW + SW stack. Then they’re just another abstraction; we talk to our computers to get things built. At essentially zero cost, truly too cheap to meter.

In parallel there’s an explosion of creative output; Marvel movies turn around in 1 year instead of 4, solely blocked on availability of actors. Some actors license their likeness to unblock their calendar from reshoots so they can earn more. We don’t replace them wholesale because people idolize celebrity.

And demand for movies? Skyrockets, with new mediums to pursue. Classics like Goodfellas resurrected in high-fidelity 3D on the Vision Pro. A combination of diffusion models and Gaussian splatting means every movie can be upscaled to immersive 3D.

Video games enter a second renaissance, with indie developers having the advantage. For large studios, nostalgia is the moneymaker. The remake of Final Fantasy VII, spread across three games costing hundreds of millions of dollars and taking a decade? Final Fantasy VIII gets rebuilt from scratch by a team of 30, and the rest of the money and team that would have gone to that project expand into other, more ambitious ones.

This is just the tip of the iceberg. Mars? Why stop at Mars? Let’s start megaprojects to explore the galaxy. Mine asteroids for resources. What’s stopping us? Humans yearn for the unknown. When we exhaust resources or a modality of existence, we dream bigger, not smaller.

I personally see consumer and entertainment spending, and people employed lucratively in these sectors, growing dramatically. Maybe SaaS and a lot of businesses that have traditionally employed white-collar workers fade. And a bunch of boring "financistas" no longer know how to make a buck betting in the casino, because boring old businesses and things nobody really wanted to do anyway aren't lucrative anymore.

But, personally, the whole reason I got into software was to build cool stuff. Starting with video games! The type and scale of cool stuff I can build is only getting better, at an insanely fast rate. My bet is we thrive.

keithwhor commented on OpenAI should build Slack   latent.space/p/ainews-why... · Posted by u/swyx
keithwhor · a month ago
A friend and I are working on something like this. It’s more Slack-adjacent; the problem we’re tackling is, “what does a future where agents seamlessly integrate with day to day communication look like?” We’re a little more focused on the developer platform.

We’re embarrassingly early and haven’t “launched” yet but I guess there’s some value in sharing with an audience who might be interested!

We call it “Superuser” [0], the social hub for agent tools. There’s more of a focus on the developer platform, but warning: major WIP! We are shipping huge changes and our docs are out of date...

[0] https://superuser.app

keithwhor commented on NASA finds Titan's lakes may be creating vesicles with primitive cell walls   sciencedaily.com/releases... · Posted by u/Gaishan
Scarblac · 6 months ago
What kind of instrument could conclusively eliminate presence of life?
keithwhor · 6 months ago
One that goes boom.
keithwhor commented on Genie 3: A new frontier for world models   deepmind.google/discover/... · Posted by u/bradleyg223
echelon · 7 months ago
> you could 100x the efficiency of game development.

> It would blow up the game industry, but also spawn a million independent one or two person studios producing some really imaginative niche experiences that could be much, much more expansive (like a AAA title) than the typical indie-studio product.

All video games become Minecraft / Roblox / VRChat. You don't need AAA studios. People can make and share their own games with friends.

Scary realization: YouTube becomes YouGame and Google wins the Internet forever.

keithwhor · 7 months ago
You’ve just described what Roblox is already doing.
keithwhor commented on LLM Inevitabilism   tomrenner.com/posts/llm-i... · Posted by u/SwoopsFromAbove
keithwhor · 8 months ago
It’s also possible for LLMs to be inevitable, generate massive amounts of wealth and still be mostly fluff in terms of objective human progress.

The major change from my perspective is new consumer behavior: people simply enjoy talking to and building with LLMs. This fact alone is generating a lot of (1) new spend and (2) content to consume.

The most disappointing outcome of the LLM era would be increasing the amount of fake, meaningless busywork humans have to do just to sift through LLM-generated noise and find signal. And indeed there are probably great products to be built that help you do just that; and there is probably a lot of great signal to be found! But the motion-to-progress ratio concerns me.

For example, I love Cursor. Especially for boilerplating. But SOTA models, even with tons of guidance, still cannot reliably implement features in my larger codebases within the timeframe it would take me to do it myself. Test-time compute and reasoning make things even slower.

keithwhor commented on Accumulation of cognitive debt when using an AI assistant for essay writing task   arxiv.org/abs/2506.08872... · Posted by u/stephen_g
keithwhor · 9 months ago
I think it's likely we learn to develop healthier relationships with these technologies. The timeframe? I'm not sure. May take generations. May happen quicker than we think.

It's clear to me that language models are a net accelerant. But if they make the average person more "loquacious" (first word that came to mind, but also lol) then the signal for raw intellect will change over time.

Nobody wants to be in a relationship with a language model. But language models may be able to help people who aren't otherwise equipped to handle major life changes and setbacks! So it's a tool - if you know how to use it.

Let's use a real-life example: relationship advice. Over time I would imagine that "ChatGPT-guided relationships" will fall into two categories: "copy-and-pasters," who are just adding a layer of complexity to communication that was subpar to begin with ("I just copied what ChatGPT said"), and "accelerators," who use ChatGPT to analyze their own and their partner's motivations to find better solutions to common problems.

It still requires a brain and empathy to make the correct decisions about the latter. The former will always end in heartbreak. I have faith that people will figure this out.

keithwhor commented on LLM function calls don't scale; code orchestration is simpler, more effective   jngiam.bearblog.dev/mcp-l... · Posted by u/jngiam1
deadbabe · 10 months ago
I’m confused as to why no one is just having LLMs dynamically produce and expose new tools on the fly as combinations of many small tools or even write new functions from scratch, to handle cases where there isn’t an ideal tool to process some input with one efficient tool call.
keithwhor · 10 months ago
I am building a company in this space, so can hopefully give some insight [0].

The issue right now is that both (1) function calling and (2) codegen just aren't very good yet. The hype train far exceeds capabilities. Great demos like fetching some Stripe customers, generating an email, or getting the weather work flawlessly. But anything more sophisticated goes off the rails very quickly. It's difficult to get models to reliably call functions with the right parameters, to set up multi-step workflows, and more.

Add codegen into the mix and it's hairier. You need a deployment and testing apparatus to make sure the code actually works... and then what is it doing? Does it need secret keys to make web requests to other services? Should we rely on functions for those?

The price / performance curve is a consideration, too. Good models are slow and expensive. That means their utility has to be high enough to justify what you charge the customer to cover the costs, but they also take a lot longer to respond to requests, which reduces perceived value. Codegen is even slower in this case. So there's a lot of alpha in finding the right "mixture of models" that can plan and execute functions quickly and accurately.

For example, OpenAI's GPT-4.1-nano is the fastest function calling model on the market. But it routinely tries to execute the same function twice in parallel. So if you combine it with another fast model, like Gemini Flash, you can reduce error rates - e.g. 4.1-nano does planning, Flash executes. But this is non-obvious to anybody building these systems until they've tried and failed countless times.
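A minimal sketch of that planner/executor split, with the model calls stubbed out (the function names `plan_with_nano` and `execute_with_flash` are hypothetical stand-ins, not a real API; in a real system each would be a call to the respective model). The key detail is deduplicating identical parallel calls before execution, which papers over the duplicate-call failure mode described above:

```python
# Sketch of a "mixture of models" pipeline: a fast planner model proposes
# tool calls, and an executor layer dedupes and runs them. The two
# functions below are stand-ins for real model API calls.

from dataclasses import dataclass

@dataclass(frozen=True)
class ToolCall:
    name: str
    args: tuple  # frozen + tuple args so duplicate calls compare equal

def plan_with_nano(task: str) -> list[ToolCall]:
    # Stand-in for the planner (e.g. a fast model like GPT-4.1-nano).
    # Fast models sometimes emit the same call twice in parallel;
    # that failure mode is reproduced here deliberately.
    return [
        ToolCall("get_weather", (("city", "Toronto"),)),
        ToolCall("get_weather", (("city", "Toronto"),)),  # duplicate
        ToolCall("send_email", (("to", "team@example.com"),)),
    ]

def dedupe(calls: list[ToolCall]) -> list[ToolCall]:
    # Drop exact-duplicate calls while preserving order.
    seen, out = set(), []
    for call in calls:
        if call not in seen:
            seen.add(call)
            out.append(call)
    return out

def execute_with_flash(calls: list[ToolCall]) -> list[str]:
    # Stand-in for the executor model/runtime (e.g. Gemini Flash).
    return [f"ran {c.name}({dict(c.args)})" for c in calls]

results = execute_with_flash(dedupe(plan_with_nano("check weather, email team")))
print(results)  # two calls survive: one get_weather, one send_email
```

The deduplication step is deliberately in the orchestration layer, not the model: it is cheap, deterministic, and works regardless of which planner model is swapped in.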

I hope to see capabilities improve and costs and latency trend downwards, but what you're suggesting isn't quite feasible yet. That said, I (and many others) am interested in making it happen!

[0] https://instant.bot

keithwhor commented on A critical look at MCP   raz.sh/blog/2025-05-02_a_... · Posted by u/ablekh
shivawu · 10 months ago
Obviously the article is making valid points. But a recent epiphany I had is, things by default are just mediocre but work. Of course the first shot at this problem is not going to be very good, much like how the first version of JavaScript was a shitshow and it took years to pay down the technical debt. Forcing a beautiful creation takes significant effort and willpower. So I'd say I'm not surprised at all; this is just how the world works, in most cases.
keithwhor · 10 months ago
I think this is a cop-out. OpenAI literally published a better integration spec two years ago, served at `/.well-known/ai-plugin.json`. It just gave a summary of an OpenAPI spec, which ChatGPT could consume and then run your functions.

It was simple and elegant, the timing was just off. So the first shot at this problem actually looked quite good, and we're currently in a regression.
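For reference, the manifest was a small JSON file; a rough reconstruction from memory (field names approximate, URLs hypothetical):

```json
{
  "schema_version": "v1",
  "name_for_human": "Example Plugin",
  "name_for_model": "example_plugin",
  "description_for_human": "Does example things.",
  "description_for_model": "Call this plugin to do example things for the user.",
  "auth": { "type": "none" },
  "api": {
    "type": "openapi",
    "url": "https://example.com/openapi.yaml"
  },
  "logo_url": "https://example.com/logo.png",
  "contact_email": "support@example.com",
  "legal_info_url": "https://example.com/legal"
}
```

Everything the model needed was in the referenced OpenAPI spec plus the `description_for_model` hint; no extra protocol layer required.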

u/keithwhor

Karma: 3884 · Cake day: March 14, 2013
About
hacker. canadian in sf. oss = <3. learning to be a CEO, one mistake at a time.

building Superuser, come join :)

https://superuser.app

https://x.com/keithwhor
