This is actually an interesting space, and I think there's room for such products. Large companies struggle a lot with their knowledge bases, and discoverability is part of the problem (especially when using multiple tools: Confluence, Docs, a chat app, etc.).
The problem here is that there are companies that focus on this area and keep improving their products, while for OpenAI it's one of dozens of tools they launch. So it's hard to believe they'll keep dedicating adequate resources to make this a mature tool that's worth the investment (in the form of time and money) for clients.
This feature is actually useful, and M365 Copilot Enterprise has had it for a while. It is quite useful because it basically has access to all public information in a company ("public" as in: accessible to any employee) on SharePoint, plus your own mailbox. It helps me find information I otherwise couldn't easily have found with the company-wide search functionality.
It's more like a chorus of skeptics, naysayers, Luddites, conspiracy nuts, doomsday predictors, armchair philosophers, self-proclaimed experts, etc., mixed with all of the AI fanboys. The signal-to-noise ratio in these threads is pretty terrible these days. It's just a lot of people shouting their opinions blindly to fuel their own vanity and egos. And then you get meta discussions like this as well.
Objectively, this OpenAI press release announces something I might actually spend company money on. Finding out about such things is why I read HN. A lot of my AI chats consist of copy-pasting bits of information into the chat just to build enough context to get a meaningful answer. The whole Groundhog Day of "who are you and what are you trying to do" is very frustrating. Anything addressing that is probably useful to me, and this sounds exactly like it would help.
OpenAI is big enough, and this announcement interesting enough, that it probably warrants being on the front page more than whatever opinionated brainfart of some self-proclaimed AI expert (positive or negative) is competing for the same space there. There's a lot of drivel getting upvoted lately that could probably be labeled "opinionated drivel" and unceremoniously and aggressively filtered out by our dearly beloved HN moderators and editors. But this isn't one of those things, IMHO. And in fairness, there is just a lot of substantial day-to-day news as well.
Now do this for 300+ employees constantly. It's not sustainable.
We need scoped MCPs before any of this is viable.
The receiving end needs a setting like "user X's ChatGPT MCP connection can access directories [x, y, z] in Google Drive." This wouldn't require any changes to the MCP protocol in general.
OR the whole spec must be changed so that when an MCP client connects, there is a negotiation about scope (selecting which directories are shared), AND that list is checked on the receiving end against a whitelist of allowed directories.
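As a rough illustration of the first option, here is a minimal sketch assuming the official MCP Python SDK (FastMCP). The ALLOWED_DIRS mapping and the list_directory tool are made up for the example, not part of any existing connector; the point is that the policy lives entirely on the receiving end, so the protocol itself doesn't change.

    # Receiving-end allowlist: which directories each user's client may touch.
    # ALLOWED_DIRS and list_directory are hypothetical; only the FastMCP calls
    # come from the MCP Python SDK.
    from pathlib import Path
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("scoped-drive")

    ALLOWED_DIRS = {
        "user-x": [Path("/srv/drive/Engineering"), Path("/srv/drive/Handbook")],
    }

    def is_allowed(user: str, target: Path) -> bool:
        """True only if target sits under one of the user's whitelisted roots."""
        target = target.resolve()
        return any(target.is_relative_to(root) for root in ALLOWED_DIRS.get(user, []))

    @mcp.tool()
    def list_directory(user: str, path: str) -> list[str]:
        """List files, refusing anything outside the user's allowed scope."""
        target = Path(path)
        if not is_allowed(user, target):
            raise PermissionError(f"{path} is outside the allowed scope for {user}")
        return sorted(p.name for p in target.iterdir())

    if __name__ == "__main__":
        mcp.run()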
AI boyfriends/girlfriends are a serious thing (and when GPT-5 was released, people complained that the new model was much worse suited for this purpose). If you combine it with realistic human-sized dolls, this might get big.
But how do people ignore that it's a machine simulating having feelings for you? Is it like the steak in The Matrix?
I played with Sony's Aibo robot dog once: if you hold out your hand in front of it, it can pretend to eat out of your hand. After a few tries it did so, and I thought, "How cute!" Then I realized it was just image recognition and logic instructing some actuators to do certain things.
Perhaps VR goggles plus an AI that analyses the video and activates the actuators and pumps in sync with whatever is happening in the video would also work... or, oh geez, why not real-time generated videos?
Porn is always a safe bet, and I'd give them a thumbs up if the crap finally disappeared from the public space and we could go back to doing the actually innovative stuff.
This is a fishing expedition for even more data... I don't know if you want everything people connect to this to become available for anyone to surface with an LLM.
At present, though, I get the sense that reinforcement learning at scale is the main battleground (and has been for most of 2025). But we also see that, over time, the general models adopt the skills taught to the specialized models. Look at how the learning that made codex-1 went into GPT-5.
Should we assume "GPT-5" still just means the LLM? It could mean "GPT-5 the system", i.e. the model has RAG, tools to use it, and is maybe fine-tuned to call those tools.
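For what it's worth, the "system" reading is basically just the usual tool-calling loop. A hedged sketch with the OpenAI Python SDK, where the model name and the search_docs retrieval stub are placeholders rather than anything confirmed about how GPT-5 is actually wired up:

    # Sketch only: "the LLM" is the bare model call, "the system" is the loop
    # that lets the model request retrieval and answer with the returned context.
    from openai import OpenAI

    client = OpenAI()

    def search_docs(query: str) -> str:
        """Hypothetical RAG step over company data."""
        return "...top matching passages..."

    tools = [{
        "type": "function",
        "function": {
            "name": "search_docs",
            "description": "Search internal documents",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }]

    messages = [{"role": "user", "content": "What is our travel policy?"}]
    first = client.chat.completions.create(model="gpt-5", messages=messages, tools=tools)
    call = first.choices[0].message.tool_calls[0]  # assume the model chose the tool
    messages += [
        first.choices[0].message,
        {"role": "tool", "tool_call_id": call.id, "content": search_docs("travel policy")},
    ]
    final = client.chat.completions.create(model="gpt-5", messages=messages, tools=tools)
    print(final.choices[0].message.content)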
Are they pushing multiple announcements per day to take the stock market to greater heights? 6 announcements this week alone: https://openai.com/news/
If the products were at least point-free [1] instead of pointless. :-)
(sorry for the nerdy pun)
[1] https://en.wikipedia.org/wiki/Tacit_programming
One weird trick to get out of their non-profit status.
I hope it isn't "enabled by default"; otherwise people will be fired on the spot for doing this.
https://www.glean.com/
There's ZERO way our legal team will let us connect ChatGPT to Google Drive, for example. Not ALL of our GDrive, anyway.
Specific directories, sure. But there's no way to do it.
let GPT drink it up
obliterate the new namespace until you need to train it again; wash, rinse, repeat
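If I'm reading that workflow right, it amounts to: copy just the whitelisted Drive folders into a throwaway namespace, let the model index that, then delete it. A rough sketch assuming google-api-python-client and existing OAuth credentials; FOLDER_IDS is a placeholder, and Google-native docs would need files().export() instead of get_media():

    import shutil
    import tempfile
    from googleapiclient.discovery import build
    from googleapiclient.http import MediaIoBaseDownload

    FOLDER_IDS = ["<engineering-folder-id>", "<handbook-folder-id>"]  # hypothetical

    def snapshot_folders(creds) -> str:
        """Copy only the whitelisted folders into a disposable local namespace."""
        drive = build("drive", "v3", credentials=creds)
        workdir = tempfile.mkdtemp(prefix="gdrive-scope-")
        for folder_id in FOLDER_IDS:
            listing = drive.files().list(
                q=f"'{folder_id}' in parents and trashed = false",
                fields="files(id, name)",
            ).execute()
            for f in listing.get("files", []):
                request = drive.files().get_media(fileId=f["id"])
                with open(f"{workdir}/{f['name']}", "wb") as out:
                    downloader = MediaIoBaseDownload(out, request)
                    done = False
                    while not done:
                        _, done = downloader.next_chunk()
        return workdir  # point the connector at this directory only

    def obliterate(workdir: str) -> None:
        """Wash, rinse, repeat: drop the namespace until the next refresh."""
        shutil.rmtree(workdir)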
So another GPT-5 fine-tune. Codex also uses a custom GPT-5 fine-tune.
Does fine-tuning make sense now? Or do you have to be OpenAI to fine-tune the models with a mix of existing data and new behaviours?
Look at Tinker for an example of where things might be heading though (https://tinker-docs.thinkingmachines.ai/)
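On the open-weights side, fine-tuning it yourself is at least mechanically cheap these days. A hedged sketch with Hugging Face transformers + peft (LoRA), just to show the general shape; the base model name and the company_behaviours.jsonl file are placeholders, and this is not Tinker's API or anything OpenAI does internally:

    from datasets import load_dataset
    from peft import LoraConfig, TaskType, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    base = "meta-llama/Llama-3.1-8B"  # placeholder open-weights base model
    tok = AutoTokenizer.from_pretrained(base)
    tok.pad_token = tok.eos_token
    model = AutoModelForCausalLM.from_pretrained(base)

    # LoRA freezes the base weights and trains small adapter matrices, which is
    # what makes mixing "existing data and new behaviours" affordable.
    model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type=TaskType.CAUSAL_LM))

    data = load_dataset("json", data_files="company_behaviours.jsonl")["train"]
    data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=1024), batched=True)

    Trainer(
        model=model,
        args=TrainingArguments(output_dir="out", per_device_train_batch_size=1, num_train_epochs=1),
        train_dataset=data,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
    ).train()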