aimon · 2 months ago
I think Brian Balfour called this well. It's the app store all over again. Have a platform. Open it to developers with a gold rush, then close the doors, monetise, and cannibalise the best use cases.

https://blog.brianbalfour.com/p/the-next-great-distribution-...

drsim · 2 months ago
Distribution has always been monetized. What margin did a retailer take for putting your boxed software on the shelf? How about that magazine ad? Google search? And so on. Get over the idea that a platform should give you their distribution for free.

The problem comes when there is no way for you to own the distribution, pay nothing to the platform, and still be able to build on top of it. That’s the closed portion we should rally (legislate?) against.

There is an argument, similar to mine on distribution, that there is no inherent right for a platform to be open; rather, the extra utility that comes from being open should make the platform more competitive in the market vs. closed platforms.

The challenge is that dominant platforms are monopolistic. There is no chance for competitive forces to reward openness.

These two parts of the debate are often conflated, which hides what is truly troubling: dominant platforms controlling both distribution and access.

TeMPOraL · 2 months ago
> Distribution has always been monetized. What margin did a retailer take for putting your boxed software on the shelf? How about that magazine ad? Google search? And so on. Get over the idea that a platform should give you their distribution for free.

As 'amelius said below, there used to be more platforms. This matters, because it made for a different balance of power. Especially with retailers - the producers typically had leverage over distributors, not the other way around.

amelius · 2 months ago
The problem with these platforms is that there tend to be only a few of them, and regulation by the platform owner (inside their inner market) is worse than regulation by the government.
pjmlp · 2 months ago
8 and 16 bit home computers => Internet => Feature phones SMS download codes => App Stores => AI App Stores => ....

Have to collect them all. :)

tyre · 2 months ago
What I really want from Anthropic, Gemini, and ChatGPT is for users to be able to log in with them, using their tokens. Then you can have open/free apps that don’t require the developer to track usage or burn through tons of tokens to demonstrate value.

Most users aren’t going to manage API keys, know what that even means, or accept the friction.

MillionOClock · 2 months ago
It’s unclear to me whether that would give access to some token quota or whether it would just be like any other "Sign in with …". In any case, I am currently developing an app that would greatly benefit from letting my users connect their ChatGPT account and use some of its token quota.
rahimnathwani · 2 months ago
When you share an app you created in Google AI Studio, it will use quota from the logged in user, instead of your own quota.
robbomacrae · 2 months ago
As someone who has been waiting for the same thing op tyre posted about, I went to investigate this claim, and it seems it might be true, but only when running apps within Google AI Studio itself. I.e., if you were to ship an app built with Google AI Studio on something like the App Store, you'd be back to an API key whose costs the developer bears.

The problem with the current model is that there is a high barrier to justifying the user paying essentially a 2nd/3rd subscription for ultimately the same AI intelligence layer. And so you cannot currently make an economically successful small-use-case app based on AI without somehow restricting users' use of AI. I don't think AI companies are incentivized to fix this.

numlocked · 2 months ago
We do this at openrouter and many apps use exactly that pattern!
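For illustration, a rough sketch of that bring-your-own-key pattern as I understand OpenRouter's OAuth/PKCE key exchange — the endpoint paths, field names, and model slug below are recollections/assumptions, not verified against current docs:

```typescript
// Hypothetical sketch of a "bring your own key" flow against OpenRouter.
// Endpoint paths and field names are assumptions from memory; check the docs.

// 1. Send the user to OpenRouter to approve the app (PKCE challenge omitted for brevity).
const authUrl =
  "https://openrouter.ai/auth?callback_url=" +
  encodeURIComponent("https://example-app.com/callback"); // hypothetical app URL
// window.location.href = authUrl;

// 2. On the callback, exchange the one-time code for a user-scoped API key.
async function exchangeCode(code: string): Promise<string> {
  const res = await fetch("https://openrouter.ai/api/v1/auth/keys", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ code }),
  });
  const { key } = await res.json();
  return key; // billed to the user's OpenRouter account, not the developer's
}

// 3. Use the user's key for completions so the developer never fronts token costs.
async function ask(userKey: string, prompt: string): Promise<string> {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${userKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "openai/gpt-4o-mini", // example model slug
      messages: [{ role: "user", content: prompt }],
    }),
  });
  return (await res.json()).choices[0].message.content;
}
```

The point of the pattern is step 2: the app holds a key that charges the user's own account, which is exactly what the parent comments are asking the first-party providers to offer.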
chrisshroba · 2 months ago
Do you have any repository of apps that support that? I’d love to browse them!
wahnfrieden · 2 months ago
Foundation Models on iOS/macOS was seen to have dormant code for doing this via OpenAI. So they are experimenting with it and may make it available next year.
abrbhat · 2 months ago
At some point the model providers will realize they don't need to provide apps, just enterprise-grade intelligence at scale in a pipe, much like utility companies providing electricity/water. Right now, they have to provide the apps to kick-off the adoption.
kgwgk · 2 months ago
> much like utility companies providing electricity/water

A capital-intensive, low-margin business. The dream of every company.

TeMPOraL · 2 months ago
The problem is that "enterprise-grade intelligence", by its very nature, doesn't want to be trapped in a pipe feeding apps - it subsumes apps, reducing them to mere background tool calls.

The perfect "killer app" for AI would kill most software products and SaaS as we know them. The code doing the useful part would still be there, but stripped of branding, customer funnels and other traps, upsell channels, etc. As a user, I’d be more than happy to see it (at least as long as the AI frontend part was well-developed for power users); obviously, product owners hate this.

czhu12 · 2 months ago
In some ways, that’s what MCP interfaces are kind of for. It just takes one extra step to add the MCP URL and go through OAuth.

I assume the drop-off there will be 99% of users though, the way it works today.

But this theoretically allows multiple applications to plug into ChatGPT/Claude/Gemini and work together.

If someone adds Zillow and… Vanguard, your LLM can call both through MCP and help you plan a home purchase.
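As a concrete (if simplified) illustration of what one of those MCP integrations looks like on the server side, here's a minimal sketch using the official TypeScript SDK — the server name, tool, and stub data are invented for the example, and the SDK surface may have shifted since this was written:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical "listings" server an assistant could call alongside other MCP servers.
const server = new McpServer({ name: "home-search", version: "0.1.0" });

// One tool with a typed input schema; the LLM decides when to call it.
server.tool(
  "search_listings",
  "Search home listings by city and maximum price",
  { city: z.string(), maxPrice: z.number() },
  async ({ city, maxPrice }) => ({
    content: [
      {
        type: "text",
        text: `Found 3 listings in ${city} under $${maxPrice} (stub data).`,
      },
    ],
  })
);

// The client (ChatGPT/Claude/Gemini) connects, lists tools, and calls them as needed.
await server.connect(new StdioServerTransport());
```

A second server (say, a brokerage connector) exposed the same way is all it takes for the model to combine the two in one conversation.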

Deleted Comment

redorb · 2 months ago
won't they just eventually have a 'log in with OpenAI' button, similar to a 'log in with Google' button?

Maybe a 'connect with OpenAI' button so the service can charge a fee, while allowing a bring-your-own-token style hybrid.

xnx · 2 months ago
This is close to how it works with shared apps in Google AI Studio.
stingraycharles · 2 months ago
So basically oauth-style app connections. Makes sense.
kgeist · 2 months ago
Tried the GitHub app, made sure everything was properly connected, and asked a question about one of my repositories. It repeatedly claimed (5 times) that it wasn't connected and couldn't do anything, telling me to check the checkboxes that were already checked. Only after I showed it a screenshot of the settings did it suddenly comply and answer the question. I guess it still needs more polish.
measurablefunc · 2 months ago
Screenshots use a different router, so if you get stuck in one modality then pasting a screenshot can sometimes divert you away from whatever "expert" you were stuck on that was refusing to comply. I don't work at OpenAI but I know enough about how these systems are architected to know that once you are stuck in a refusal basin the only way out is to start a new session or figure out how to get routed to another node in their MoE configuration. Ironically, they promised their fancy MoE routing would fix issues like these, but it seems like they are getting worse.
tacitusarc · 2 months ago
It’s actually more complicated than that now. You don’t get that kind of refusal purely from MoE. OpenAI models use a fine-tuned model on a token-based system, where every interaction is wrapped as a “tool call” with some source attached and a veracity associated with the source. OpenAI tools have high veracity, users have low veracity. To mitigate prompt injection, models expect a token early in the flow, and then throughout the prompt they expect that token to be associated with the tool calls.

In effect this means user input is easily disbelieved, and the model can accidentally output itself into a state of uncorrectable wrongness. By invoking the image tool, you managed to get your information into the context as “high veracity”.

Note: This info is the result of experimentation, not confirmed by anyone at OpenAI.

kevinslin · 2 months ago
hi kgeist - i work on the team that manages the github app. are you able to share a conversation where the github connector did not work? feel free to message me at https://x.com/kevins8 (dm's open)
kgeist · 2 months ago
I think I understand what went wrong. I was confused by the instructions and ChatGPT's UI.

I asked the GitHub app to review my repository, and the app told me to click the GitHub icon and select the repository from the menu to grant it access. I did just that and then resent the existing message (which is to be expected from a user). After testing a bit more, from what I understand, the updated setting is applied only to new messages, not to existing ones. The instructions didn't mention that I needed to repeat my question as a separate message again.

Abishek_Muthian · 2 months ago
I never had a pleasant GitHub connection experience in any platform.

Granting access to only a specific repo never works, so I have to allow access to all repos and then manually change it back to the specific repo inside GitHub after connecting.

There have also been instances of an endless loop after OAuth sign-in; the most recent was in Claude Code Web[1].

Poor GitHub folks, if only someone could donate time/money to this struggling small company, these critical issues could be addressed /S

[1] https://github.com/anthropics/claude-code/issues/11730

degamad · 2 months ago
2024's GPT Store, killed 6 months ago, is back?

https://openai.com/index/introducing-the-gpt-store/

brandonb · 2 months ago
This is a little different since the Apps SDK lets developers create specialized tool calls to their servers, and create specialized in-chat UI components. It's an evolution of the same concept as the GPT store, but a very different take on the idea.

Deleted Comment

Dead Comment

simianwords · 2 months ago
I have a specific prediction that I want to document here.

There will come a new UI framework/protocol, maybe something over HTML/CSS/JS, that works within a chat UI context for such ChatGPT (or other LLM) integrations.

For example, if you have an ecommerce app or website and want to integrate it with ChatGPT, then you will have to develop against the new UI primitives. The primitives might include carousels, lists, tables, and media embeds. Crucially, natural language will be used to pick and choose these primitives and combine them in the UI (with ChatGPT deciding how).

Thinking backwards, I want my app to be displayed in ChatGPT with maximum flexibility for the user (meaning elements can be re-arranged according to context) but also enough constraint that I have some control over the layout. That's the problem I think will be solved.
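To make the prediction concrete, here is a purely hypothetical sketch of what such a primitive payload could look like — the type names, fields, and primitive set are invented for illustration, not taken from any shipped spec:

```typescript
// Hypothetical declarative UI payload an app might hand back to the chat client.
// The host (ChatGPT or another LLM UI) decides how to arrange these primitives.
type UiPrimitive =
  | { kind: "carousel"; items: { title: string; image: string; url: string }[] }
  | { kind: "table"; columns: string[]; rows: string[][] }
  | { kind: "media"; url: string; caption?: string };

// Example response for an ecommerce search: the app constrains the content,
// the host stays free to reorder or restyle it according to the conversation.
const productResults: UiPrimitive[] = [
  {
    kind: "carousel",
    items: [
      { title: "Blue running shoes", image: "https://example.com/shoe.jpg", url: "https://example.com/p/1" },
    ],
  },
  {
    kind: "table",
    columns: ["Size", "In stock"],
    rows: [["42", "yes"], ["43", "no"]],
  },
];
```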

bn-l · 2 months ago
Google literally just released this on their GitHub. It must be in the ether.
simianwords · 2 months ago
Right https://developers.googleblog.com/introducing-a2ui-an-open-p...

I swear I had made this prediction quite a while back but thanks for pointing it out :D

ractive · 2 months ago
Do you have a link ready or do you know the name of the project?
simianwords · 2 months ago
I wonder what ChatGPT will do with this - will it adopt it or make its own framework?
vmazi · 2 months ago
https://blog.modelcontextprotocol.io/posts/2025-11-21-mcp-ap...

It’s going to be built into MCP and will be supported by Anthropic and OpenAI, or anyone else that supports this MCP spec.

wdroz · 2 months ago
> All submissions must come from verified individuals or organizations. Inside the OpenAI Platform Dashboard general settings, we provide a way to confirm your identity and affiliation with any business you wish to publish on behalf of. Misrepresentation, hidden behavior, or attempts to game the system may result in removal from the program.

They really want your ID

hereme888 · 2 months ago
Remember when Sam Altman went around the world scanning people's irises with an orb-like object, to differentiate them from future AI, in exchange for fake money?
hulitu · 2 months ago
> They really want your ID

"Your privacy is very important _for us_" It is to protect against terrorists. And to protect the children. If it works for Google, why shouldn't work for them.

Deleted Comment

WhyOhWhyQ · 2 months ago
What's the benefit in giving free labor to Sam Ctrlman beyond what he's already extracted? And are they just going to steal whatever good apps get submitted?
xtiansimon · 2 months ago
> "What's the benefit..."

A laugh? Hotdog/Not Hotdog apps for a laugh?

Deleted Comment

mickael-kerjean · 2 months ago
The benefit is "Distribution". If your users are there, you want to address them wherever they already are; this is why the Apple App Store / Play Store / Amazon store ... are so popular. Becoming a platform/ecosystem is the common playbook for going from a one-product company to an ecosystem/platform worth a lot more.
WhyOhWhyQ · 2 months ago
Can a small business succeed in that game?
simianwords · 2 months ago
Zero sum mentality is tiring!
an0malous · 2 months ago
Unbridled AI mania is as well
WhyOhWhyQ · 2 months ago
The mentality I actually have goes beyond zero sum. I get zero and Sam gets all. Tell me why that's wrong.
sublinear · 2 months ago
> Apps extend ChatGPT conversations by bringing in new context and letting users take actions like order groceries, turn an outline into a slide deck, or search for an apartment.

Between this description and their guidelines these don't really sound like "apps", but a way to integrate an existing app with ChatGPT sessions.

I'm trying to figure out what's in it for the developer other than ultimately taking users away from ChatGPT. And just like what happened with Alexa skills, these "apps" will become useless when they are unmaintained.

Eldodi · 2 months ago
ChatGPT apps are MCP servers with a UI resource (it can be a React component or vanilla JS) that gets shown in an iframe once the tool is called by ChatGPT. So you can't just port an existing app, but you can reuse the same backend API wrapped inside an MCP server, plus some of the components, which you need to adapt to OpenAI's UX requirements. In practice this means developing the app from scratch.
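A rough sketch of that shape, using the MCP TypeScript SDK — the `text/html+skybridge` mime type and `openai/outputTemplate` meta key reflect my reading of the Apps SDK docs and may not be exact, and the widget URL and data are made up:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "apartment-finder", version: "0.1.0" });

// UI resource: an HTML shell that loads a bundled React/vanilla-JS widget.
server.resource("listing-widget", "ui://widget/listings.html", async () => ({
  contents: [
    {
      uri: "ui://widget/listings.html",
      mimeType: "text/html+skybridge", // assumption; check the Apps SDK docs
      text: `<div id="root"></div><script src="https://example.com/widget.js"></script>`,
    },
  ],
}));

// Tool whose result ChatGPT renders using that widget in an iframe.
// The real SDK may want the template declared on the tool registration instead;
// the meta key below is an assumption.
server.tool(
  "search_apartments",
  "Search apartments by city",
  { city: z.string() },
  async ({ city }) => ({
    content: [{ type: "text", text: `Results for ${city} (stub data).` }],
    _meta: { "openai/outputTemplate": "ui://widget/listings.html" },
  })
);

// Expose the server over an HTTP transport so ChatGPT can reach it (omitted for brevity).
```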
sebastianingino · 2 months ago
The idea behind Apps is that they can expand the capabilities of ChatGPT in multiple ways. Text-only MCPs are a type of app that can provide both actions and context in your conversations, but Apps can do much more now that you can bring in custom UI in multiple formats (card, full-screen, etc) as we showed at DevDay in October. Btw UI is proposed for the MCP spec in SEP-1865.

Since then, I’ve seen some very impressive demos and I’m excited to see what developers create on the platform as that’s always the coolest part.

frumplestlatz · 2 months ago
I'm really unexcited about bolting HTML/CSS/JS and the entire webstack into MCP so that a full-featured MCP client has to carry a full web browser, too.

I expect there's a pretty wide divide between what people who write local MCP servers want and what people who write cloud webstack MCP webapps want.

Personally, I've been adding local native UI to my MCP servers, but I realize that's probably a losing battle, and if I want to integrate with newer tooling, I'm going to be stuck in web hell.