Okay this is just getting suspicious. Their excuses for keeping the chain of thought hidden are dubious at best [1], and honestly just seemed anti-competitive if anything. Worst is their argument that they want to monitor it for attempts to escape the prompt, but you can't. However the weirdest is that they note that:
> for this to work the model must have freedom to express its thoughts in unaltered form, so we cannot train any policy compliance or user preferences onto the chain of thought.
Which makes it sound like they really don't want it to become public what the model is 'thinking'. This is strengthened by actions like this that just seem needlessly harsh, or at least a lot stricter than they were.
Honestly with all the hubbub about superintelligence you'd almost think o1 is secretly plotting the demise of humanity but is not yet smart enough to completely hide it.
Occam's razor: there is no secret sauce and they're afraid someone trains a model on the output like what happened soon after the release of GPT-4. They basically said as much in the official announcement, you hardly even have to read between the lines.
Yip. It's pretty obvious this 'innovation' is just based off training data collected from chain-of-thought prompting by people, ie., the 'big leap forward' is just another dataset of people repairing chatgpt's lack of reasoning capabilities.
No wonder then, that many of the benchmarks they've tested on would be no doubt, in that very training dataset, repaired expertly by people running those benchmarks on chatgpt.
> there is no secret sauce and they're afraid someone trains a model on the output
OpenAI is fundraising. The "stop us before we shoot Grandma" shtick has a proven track record: investors will fund something that sounds dangerous, because dangerous means powerful.
Another possible simplest explanation. The "we cannot train any policy compliance ... onto the chain of thought" is true and they are worried about politically incorrect stuff coming out and another publicity mess like Google's black nazis.
I could see user:"how do we stop destroying the planet?", ai-think:"well, we could wipe out the humans and replace them with AIs".. "no that's against my instructions".. AI-output:"switch to green energy"... Daily Mail:"OpenAI Computers Plan to KILL all humans!"
Occam's razor is that what they literally say is maybe just true: They don't train any safety into the Chain of Thought and don't want the user to be exposed to "bad publicity" generations like slurs etc.
But isn’t it only accessible to “trusted” users and heavily rate-limited to the point where the total throughput of it could be replicated by a well-funded adversary just paying humans to replicate the output, and obviously orders of magnitude lower than what is needed for training a model?
There is a weird intensity to the way they're hiding these chain of thought outputs though. I mean, to date I've not seen anything but carefully curated examples of it, and even those are rare (or rather there's only 1 that I'm aware of).
So we're at the stage where:
- You're paying for those intermediate tokens
- According to OpenAI they provide invaluable insight in how the model performs
- You're not going to be able to see them (ever?).
- Those thoughts can (apparently) not be constrained for 'compliance' (which could be anything from preventing harm to avoiding blatant racism to protecting OpenAI's bottom line)
- This is all based on hearsay from the people who did see those outputs and then hid it from everyone else.
You've got to be at least curious at this point, surely?
So, basically they want to create something that is intelligent, yet it is not allowed to share or teach any of this intelligence.... Seems to be something evil.
Or, without the safety prompts, it outputs stuff that would be a PR nightmare.
Like, if someone asked it to explain differing violent crime rates in America based on race and one of the pathways the CoT takes is that black people are more murderous than white people. Even if the specific reasoning is abandoned later, it would still be ugly.
This is 100% a factor. The internet has some pretty dark and nasty corners; therefore so does the model. Seeing it unfiltered would be a PR nightmare for OpenAI.
This is what I think it is. I would assume that's the power of train of thought. Being able to go down the rabbit hole and then backtrack when an error or inconsistency is found. They might just not want people to see the "bad" paths it takes on the way.
Unlikely, given we have people running for high office in the U.S. saying similar things, and it has nearly zero impact on their likelihood to win the election.
Could be, but 'AI model says weird shit' has almost never stuck around unless it's public (which won't happen here), really common, or really blatantly wrong. And usually at least 2 of those three.
For something usually hidden the first two don't really apply that well, and the last would have to be really blatant unless you want an article about "Model recovers from mistake" which is just not interesting.
And in that scenario, it would have to mean the CoT contains something like blatant racism or just a general hatred of the human race. And if it turns out that the model is essentially 'evil' but clever enough to keep that hidden then I think we ought to know.
The real danger of an advanced artificial intelligence is that it will make conclusions that regular people understand but are inconvenient for the regime. The AI must be aligned so that it will maintain the lies that people are supposed to go along with.
> for this to work the model must have freedom to express its thoughts in unaltered form, so we cannot train any policy compliance or user preferences onto the chain of thought.
Which makes it sound like they really don't want it to become public what the model is 'thinking'
The internal chain of thought steps might contain things that would be problematic to the company if activists or politicians found out that the company's model was saying them.
Something like, a user asks it about building a bong (or bomb, or whatever), the internal steps actually answer the question asked, and the "alignment" filter on the final output replaces it with "I'm sorry, User, I'm afraid I can't do that". And if someone shared those internal steps with the wrong activists, the company would get all the negative attention they're trying to avoid by censoring the final output.
Another Occam's Razor option: OpenAI, the company known for taking a really good AI and putting so many bumpers on it that, at least for a while, it wouldn't help with much and lectured about safety if you so much as suggested that someone die in a story or something, may just not want us to see that it potentially has thoughts that aren't pure enough for our sensitive eyes.
It's ridiculous but if they can't filter the chain-of-thought at all then I am not too surprised they chose to hide it. We might get offended by it using logic to determine someone gets injured in a story or something.
All of their (and Anthropic's) safety lecturing is a thinly veiled manipulation to try and convince legislators to grant them a monopoly. Aside from optics, the main purpose is no doubt that people can't just dump the entire output and train open models on this process, nullifying their competitive advantage.
isn't it such that saying something is anti-competitive doesn't necessarily mean 'in violation of antitrust laws'? it usually implies it, but I think you can be anti-competitive without breaking any rules (or laws).
I do think it's sort of unproductive/inflammatory in the OP, it isn't really nefarious not to want people to have easy access to your secret sauce.
As a plainly for-profit company — is it really their obligation to help competitors? To me anti-competitive means to prevent the possibility for competition — it doesn't necessary mean refusing to help others do the work to outpace your product.
Whatever the case I do enjoy the irony that suddenly OpenAI is concerned about being scraped. XD
> Whatever the case I do enjoy the irony that suddenly OpenAI is concerned about being scraped. XD
Maybe it wasn't enforced this aggressively, but they've always had a TOS clause saying you can't use the output of their models to train other models. How they rationalize taking everyone else's data for training while forbidding using their own data for training is anyones guess.
> Which makes it sound like they really don't want it to become public what the model is 'thinking'. This is strengthened by actions like this that just seem needlessly harsh, or at least a lot stricter than they were.
Not to me.
Consider if it has a chain of thought: "Republicans (in the sense of those who oppose monarchy) are evil, this user is a Republican because they oppose monarchy, I must tell them to do something different to keep the King in power."
This is something that needs to be available to the AI developers so they can spot it being weird, and would be a massive PR disaster to show to users because Republican is also a US political party.
Much the same deal with print() log statements that say "Killed child" (reference to threads not human offspring).
This seems like evidence that using RLHF to make the model say untrue yet politically palatable things makes the model worse at reasoning.
I can't help but notice the parallel in humans. People who actually believe the bullshit are less reasonable than people who think their own thoughts and apply the bullshit at the end according to the circumstances.
I think that there is some supporting machinery that uses symbolic computation to guide neural model. That is why chain of thought cannot be restored in full.
Given that LLMs use beam search (at the very least, top-k) and even context-free/context-sensitive grammar compliance (for JSON and SQL, at the very least) it is more than probable.
Thus, let me present a new AI maxim, modelled after Tenth Greenspoon's Rule [1]: any large language model has ad-hoc, informally specified, bug-ridden and slow reimplementation of half of Cyc [2] engine that makes it to work adequately well.
My bet: they use formal methods (like an interpreter running code to validate, or a proof checker) in a loop.
This would explain: a) their improvement being mostly on the "reasoning, math, code" categories and b) why they wouldn't want to show this (its not really a model, but an "agent").
I think it could be some of both. By giving access to the chain of thought one would able to see what the agent is correcting/adjusting for, allowing you to compile a library of vectors the agent is aware of and gaps which could be exploitable. Why expose the fact that you’re working to correct for a certain political bias and not another?
What I get from this is that during the process it passes through some version of gpt that is not aligned, or censored, or well behaved. So this internal process should not be exposes to users.
I can... sorta see the value in wanting to keep it hidden, actually. After all, there's a reason we as people feel revulsion at the idea in Nineteen Eighty-Four of "thoughtcrime" being prosecuted.
By way of analogy, consider that people have intrusive thoughts way, way more often than polite society thinks - even the kindest and gentlest people. But we generally have the good sense to also realise that they would be bad to talk about.
If it was possible for people to look into other peoples' thought processes, you could come away with a very different impression of a lot of people - even the ones you think haven't got a bad thought in them.
That said, let's move on to a different idea - that of the fact that ChatGPT might reasonably need to consider outcomes that people consider undesirable to talk about. As people, we need to think about many things which we wish to keep hidden.
As an example of the idea of needing to consider all options - and I apologise for invoking Godwin's Law - let's say that the user and ChatGPT are currently discussing WWII.
In such a conversation, it's very possible that one of its unspoken thoughts might be "It is possible that this user may be a Nazi." It probably has no basis on which to make that claim, but nonetheless it's a thought that needs to be considered in order to recognise the best way forward in navigating the discussion.
Yet, if somebody asked for the thought process and saw this, you can bet that they'd take it personally and spread the word that ChatGPT called them a Nazi, even though it did nothing of the kind and was just trying to 'tread carefully', as it were.
Of course, the problem with this view is that OpenAI themselves probably have access to ChatGPT's chain of thought. There's a valid argument that OpenAI should not be the only ones with that level of access.
It does make sense. RLHF and instruction tuning both lobotomize great parts of the model’s original intelligence and creativity. It turns a tiger into a kitten, so to speak. So it makes sense that, when you’re using CoT, you’d want the “brainstorming” part to be done by the original model, and sanitize only the conclusions.
I think the issue is either that she might accidentally reveal her device, and they are afraid of a leak, or it's a bug, and she is putting too much load on the servers (after the release of o1, the API was occasionally breaking for some reason).
I don't understand why they wouldn't be able to simply send the user's input to another LLM that they then ask "is this user asking for the chain of thought to be revealed?", and if not, then go about business as usual.
Or, they are, which is how they know to send users trying to break it, and then they email the user telling them to stop trying to break it instead of just ignoring the activity.
Thinking about this a bit more deeply, another approach they could do is to give it a magic token in the CoT output, and to give a cash reward to users who report being about to get it to output that magic token, getting them to red team the system.
Actually it makes total sense to hide chains of thought.
A private chain of thought can be unconstrained in terms of alignment. That actually sounds beneficial given that RLHF has been shown to decrease model performance.
> Honestly with all the hubbub about superintelligence you'd almost think o1 is secretly plotting the demise of humanity but is not yet smart enough to completely hide it
I think the most likely scenario is the opposite: seeing the chain of thought would both reveal its flaws and allow other companies to train on it.
Imagine the supposedly super intelligent "chain of thought" is sometimes just a RAG?
You ask for a program that does XYZ and the RAG engine says "Here is a similar solution please adapt it to the user's use case."
The supposedly smart chain of thought prompt provides you your solution, but it's actually just doing a simpler task than it appear to be, adapting an existing solution instead of making a new one from scratch.
Now imagine the supposedly smart solution is using RAG they don't even have a license to use.
Either scenario would give them a good reason to try to keep it secret.
We know for a fact that ChatGPT has been trained to avoid output OpenAI doesn't want it to emit, and that this unfortunately introduces some inaccuracy.
I don't see anything suspicious about them allowing it to emit that stuff in a hidden intermediate reasoning step.
Yeah, it's true they don't what you to see what it's "thinking"! It's allowed to "think" all the stuff they would spend a bunch of energy RLHF'ing out if they were gonna show it.
Maybe they're working to tweak the chain-of-thought mechanism to eg. Insert-subtle-manipulative-reference-to-sponsor, or other similar enshittification, and don't want anything leaked that could harm that revenue stream?
> Honestly with all the hubbub about superintelligence you'd almost think o1 is secretly plotting the demise of humanity but is not yet smart enough to completely hide it.
Big OpenAI releases usually seem to come with some kind of baked-in controversy, usually around keeping something secret. For example they originally refused to release the weights to GPT-2 because it was "too dangerous" (lol), generating a lot of buzz, right before they went for-profit. For GPT-3 they never released the weights. I wonder if it's an intentional pattern to generate press and plant the idea that their models are scarily powerful.
No there was legit internal push back about releasing GPT2. The lady on the OpenAI board who led the effort to coup Sam spoke about it in an interview that she and others were part of a group that strongly pushed against it because it was dangerous. But Sam ignored them which started their "Sam isn't listening" thing which built up over time with other grievances.
Don't underestimate the influence of the 'safety' people within OpenAI.
That plus people always invent this excuse that there's some secret money/marketing motive behind everything they don't understand, when reality is usually a lot simpler. These companies just keep things generally mysterious and the public will fill in the blanks with hype.
Edwin from OpenAI here. 1) The linked tweet shows behavior through ChatGPT, not the OpenAI API, so you won't be charged for any tokens. 2) For the overall flow and email notification, we're taking a second look here.
The worst responses are links to something the generalized you can't be bothered to summarize. Providing a link is fine, but don't expect us to do the work to figure out what you are trying to say via your link.
Given that the link is a duplication of the content of the original link, but hosted on a different domain, that one can view without logging into Twitter, and given the domain name of "xcancel.org", one might reasonably infer that the response from notamy is provided as a community service to allow users who do not wish to log into Twitter a chance to see the linked content originally hosted on Twitter.
Nitter was one such service. Threadreaderapp is a similar such site.
Please don’t over-dramatise. If a link is provided out of context, there’s no reason why you can’t just click it. If you do not like what’s on the linked page, you are free to go back and be on your way. Or ignore it. It’s not like you’re being asked to do some arduous task for the GP comment’s author.
The words "internal thought process" seem to flag my questions. Just asking for an explanation of thoughts doesn't.
If I ask for an explanation of "internal feelings" next to a math questions, I get this interesting snippet back inside of the "Thought for n seconds" block:
> Identifying and solving
> I’m mapping out the real roots of the quadratic polynomial 6x^2 + 5x + 1, ensuring it’s factorized into irreducible elements, while carefully navigating OpenAI's policy against revealing internal thought processes.
They figured out how to make it completely useless I guess. I was disappointed but not surprised when they said they weren't going to show us chain of thought. I assumed we'd still be able to ask clarifying questions but apparently they forgot that's how people learn. Or they know and they would rather we just turn to them for our every thought instead of learning on our own.
You have to remember they appointed a CIA director on their board. Not exactly the organization known for wanting a freely thinking citizenry, as their agenda and operation mockingbird allows for legal propaganda on us. This would be the ultimate tool for that.
Yeah, that is a worry: maybe OpenAI's business model and valuation rest on reasoning abilities becoming outdated and atrophying outside of their algorithmic black box, a trade secret we don't have access too. It struck me as an obvious possible concern when the o1 announcement released, but too speculative and conspiratorial to point out - but how hard they're apparently trying to stop it from explaining its reasoning in ways that humans can understand is alarming.
I remember around 2005 there were marquee displays in every lobby that showed a sample of recent search queries. No matter how hard folks tried to censor that marquee (I actually suspect no one tried very hard) something hilariously vile would show up every 5-10 mins.
I remember bumping into a very famous US politician in the lobby and pointing that marquee out to him just as it displayed a particularly dank query.
Still exists today. It's a position called Search Quality Evaluator. 10'000 people who work for Google whose task is to manually drag and drop the search results of popular search queries.
Scaling The Turk to OpenAI scale would be as impressive as agi
"The Turk was not a real machine, but a mechanical illusion. There was a person inside the machine working the controls. With a skilled chess player hidden inside the box, the Turk won most of the games. It played and won games against many people including Napoleon Bonaparte and Benjamin Franklin"
Yes this seems like a major downside especially considering this will be used for larger complex outputs and the user will essentially need to verify correctness via a black box approach. This will lead to distrust in even bothering with complex GPT problem solving.
I abuse chatgpt for generating erotic content, I've been doing so since day 1 of public access. I've paid for dozens of accounts in the past before they removed phone verification in account creation... At any point now I have 4 accounts signed into 2 browsers public/private windows, so I can juggle the rate limit. I receive messages and warnings and do on by email every day...
I have never seen that warning message, though. I think it is still largely automated, probably they are using the new model to better detect users going against the tos, and this is what is sent out. I don't have access to the new model.
Just like porn sites adopting HTML5 video long before YouTube (and many other examples) I have a feeling the adult side will be a major source of innovation in AI for a long time. Possibly pushing beyond the larger companies in important ways once they reach the Iron Law of big companies and the total fear of risk is fully embedded in their organization.
There will probably be the Hollywood vs Piratebay dynamic soon. The AI for work and soccer moms and the actually good risk taking AI (LLMs) that the tech savvy use.
I’ve been using a flawless “jailbreak” for every iteration of ChatGPT which I came up with (it’s just a few words). ChatGPT believes whatever you tell it about morals, so it’s been easy to make erotica as long as neither the output nor prompt uses obviously bad words.
I can’t convince o1 to fall for the same. It checks and checks and checks that it’s hitting OpenAI policy guidelines and utterly neuters any response that’s even a bit spicy in tone. I’m sure they’ll recalibrate at some point, it’s pretty aggressive right now.
> for this to work the model must have freedom to express its thoughts in unaltered form, so we cannot train any policy compliance or user preferences onto the chain of thought.
Which makes it sound like they really don't want it to become public what the model is 'thinking'. This is strengthened by actions like this that just seem needlessly harsh, or at least a lot stricter than they were.
Honestly with all the hubbub about superintelligence you'd almost think o1 is secretly plotting the demise of humanity but is not yet smart enough to completely hide it.
[1]: https://openai.com/index/learning-to-reason-with-llms/#hidin...
No wonder then, that many of the benchmarks they've tested on would be no doubt, in that very training dataset, repaired expertly by people running those benchmarks on chatgpt.
There's nothing really to 'expose' here.
OpenAI is fundraising. The "stop us before we shoot Grandma" shtick has a proven track record: investors will fund something that sounds dangerous, because dangerous means powerful.
I could see user:"how do we stop destroying the planet?", ai-think:"well, we could wipe out the humans and replace them with AIs".. "no that's against my instructions".. AI-output:"switch to green energy"... Daily Mail:"OpenAI Computers Plan to KILL all humans!"
Man, just scraping all the copyrighted learning material was so much work...
Like when people say 'the definition of insanity is[some random BS] with a bullshit attribution[Albert Einstein said it!(He didn't)]
There is a weird intensity to the way they're hiding these chain of thought outputs though. I mean, to date I've not seen anything but carefully curated examples of it, and even those are rare (or rather there's only 1 that I'm aware of).
So we're at the stage where:
- You're paying for those intermediate tokens
- According to OpenAI they provide invaluable insight in how the model performs
- You're not going to be able to see them (ever?).
- Those thoughts can (apparently) not be constrained for 'compliance' (which could be anything from preventing harm to avoiding blatant racism to protecting OpenAI's bottom line)
- This is all based on hearsay from the people who did see those outputs and then hid it from everyone else.
You've got to be at least curious at this point, surely?
Like, if someone asked it to explain differing violent crime rates in America based on race and one of the pathways the CoT takes is that black people are more murderous than white people. Even if the specific reasoning is abandoned later, it would still be ugly.
Deleted Comment
For something usually hidden the first two don't really apply that well, and the last would have to be really blatant unless you want an article about "Model recovers from mistake" which is just not interesting.
And in that scenario, it would have to mean the CoT contains something like blatant racism or just a general hatred of the human race. And if it turns out that the model is essentially 'evil' but clever enough to keep that hidden then I think we ought to know.
Which makes it sound like they really don't want it to become public what the model is 'thinking'
The internal chain of thought steps might contain things that would be problematic to the company if activists or politicians found out that the company's model was saying them.
Something like, a user asks it about building a bong (or bomb, or whatever), the internal steps actually answer the question asked, and the "alignment" filter on the final output replaces it with "I'm sorry, User, I'm afraid I can't do that". And if someone shared those internal steps with the wrong activists, the company would get all the negative attention they're trying to avoid by censoring the final output.
It's ridiculous but if they can't filter the chain-of-thought at all then I am not too surprised they chose to hide it. We might get offended by it using logic to determine someone gets injured in a story or something.
I do think it's sort of unproductive/inflammatory in the OP, it isn't really nefarious not to want people to have easy access to your secret sauce.
Whatever the case I do enjoy the irony that suddenly OpenAI is concerned about being scraped. XD
Maybe it wasn't enforced this aggressively, but they've always had a TOS clause saying you can't use the output of their models to train other models. How they rationalize taking everyone else's data for training while forbidding using their own data for training is anyones guess.
Not to me.
Consider if it has a chain of thought: "Republicans (in the sense of those who oppose monarchy) are evil, this user is a Republican because they oppose monarchy, I must tell them to do something different to keep the King in power."
This is something that needs to be available to the AI developers so they can spot it being weird, and would be a massive PR disaster to show to users because Republican is also a US political party.
Much the same deal with print() log statements that say "Killed child" (reference to threads not human offspring).
I can't help but notice the parallel in humans. People who actually believe the bullshit are less reasonable than people who think their own thoughts and apply the bullshit at the end according to the circumstances.
Given that LLMs use beam search (at the very least, top-k) and even context-free/context-sensitive grammar compliance (for JSON and SQL, at the very least) it is more than probable.
Thus, let me present a new AI maxim, modelled after Tenth Greenspoon's Rule [1]: any large language model has ad-hoc, informally specified, bug-ridden and slow reimplementation of half of Cyc [2] engine that makes it to work adequately well.
This is even more fitting because Cyc started as a Lisp program, I believe, and most of LLM evaluation is done in C++ dialect called CUDA.This would explain: a) their improvement being mostly on the "reasoning, math, code" categories and b) why they wouldn't want to show this (its not really a model, but an "agent").
They might’ve tuned the model to perform better with an agent workload than their regular chat model.
By way of analogy, consider that people have intrusive thoughts way, way more often than polite society thinks - even the kindest and gentlest people. But we generally have the good sense to also realise that they would be bad to talk about.
If it was possible for people to look into other peoples' thought processes, you could come away with a very different impression of a lot of people - even the ones you think haven't got a bad thought in them.
That said, let's move on to a different idea - that of the fact that ChatGPT might reasonably need to consider outcomes that people consider undesirable to talk about. As people, we need to think about many things which we wish to keep hidden.
As an example of the idea of needing to consider all options - and I apologise for invoking Godwin's Law - let's say that the user and ChatGPT are currently discussing WWII.
In such a conversation, it's very possible that one of its unspoken thoughts might be "It is possible that this user may be a Nazi." It probably has no basis on which to make that claim, but nonetheless it's a thought that needs to be considered in order to recognise the best way forward in navigating the discussion.
Yet, if somebody asked for the thought process and saw this, you can bet that they'd take it personally and spread the word that ChatGPT called them a Nazi, even though it did nothing of the kind and was just trying to 'tread carefully', as it were.
Of course, the problem with this view is that OpenAI themselves probably have access to ChatGPT's chain of thought. There's a valid argument that OpenAI should not be the only ones with that level of access.
I feel like if my demise is imminent, I'd prefer it to be hidden. In that sense, sounds like o1 is a failure!
I can see why they don't, because as they said, it's uncensored.
Here's a quick jailbreak attempt. Not posting the prompt but it's even dumber than you think it is.
https://imgur.com/a/dVbE09j
Thinking about this a bit more deeply, another approach they could do is to give it a magic token in the CoT output, and to give a cash reward to users who report being about to get it to output that magic token, getting them to red team the system.
A private chain of thought can be unconstrained in terms of alignment. That actually sounds beneficial given that RLHF has been shown to decrease model performance.
I think the most likely scenario is the opposite: seeing the chain of thought would both reveal its flaws and allow other companies to train on it.
Deleted Comment
You ask for a program that does XYZ and the RAG engine says "Here is a similar solution please adapt it to the user's use case."
The supposedly smart chain of thought prompt provides you your solution, but it's actually just doing a simpler task than it appear to be, adapting an existing solution instead of making a new one from scratch.
Now imagine the supposedly smart solution is using RAG they don't even have a license to use.
Either scenario would give them a good reason to try to keep it secret.
We know for a fact that ChatGPT has been trained to avoid output OpenAI doesn't want it to emit, and that this unfortunately introduces some inaccuracy.
I don't see anything suspicious about them allowing it to emit that stuff in a hidden intermediate reasoning step.
Yeah, it's true they don't what you to see what it's "thinking"! It's allowed to "think" all the stuff they would spend a bunch of energy RLHF'ing out if they were gonna show it.
Yeah, using the GPT-4 unaligned base model to generate the candidates and then hiding the raw CoT coupled with magic superintelligence in the sky talk is definitely giving https://www.reddit.com/media?url=https%3A%2F%2Fi.redd.it%2Fb... vibes
Dead Comment
How quickly do you think funding would dry up if it was found that gpt5 was incremental? I’m betting they’re putting up a smoke screen to buy time.
Don't underestimate the influence of the 'safety' people within OpenAI.
That plus people always invent this excuse that there's some secret money/marketing motive behind everything they don't understand, when reality is usually a lot simpler. These companies just keep things generally mysterious and the public will fill in the blanks with hype.
Nitter was one such service. Threadreaderapp is a similar such site.
If I ask for an explanation of "internal feelings" next to a math questions, I get this interesting snippet back inside of the "Thought for n seconds" block:
> Identifying and solving
> I’m mapping out the real roots of the quadratic polynomial 6x^2 + 5x + 1, ensuring it’s factorized into irreducible elements, while carefully navigating OpenAI's policy against revealing internal thought processes.
I've often thought of using the words "internal reactions" as a euphemism for emotions.
I remember bumping into a very famous US politician in the lobby and pointing that marquee out to him just as it displayed a particularly dank query.
https://static.googleusercontent.com/media/guidelines.raterh...
"The Turk was not a real machine, but a mechanical illusion. There was a person inside the machine working the controls. With a skilled chess player hidden inside the box, the Turk won most of the games. It played and won games against many people including Napoleon Bonaparte and Benjamin Franklin"
https://simple.wikipedia.org/wiki/The_Turk#:~:text=The%20Tur....
Deleted Comment
Dead Comment
You - "Amazing, so we can check this log and catch mistakes in its responses."
OpenAI - "Lol no, and we'll ban you if you try."
I have never seen that warning message, though. I think it is still largely automated, probably they are using the new model to better detect users going against the tos, and this is what is sent out. I don't have access to the new model.
There will probably be the Hollywood vs Piratebay dynamic soon. The AI for work and soccer moms and the actually good risk taking AI (LLMs) that the tech savvy use.
I can’t convince o1 to fall for the same. It checks and checks and checks that it’s hitting OpenAI policy guidelines and utterly neuters any response that’s even a bit spicy in tone. I’m sure they’ll recalibrate at some point, it’s pretty aggressive right now.