With reports and blog posts [0][1] saying that OpenAI has begun training its next flagship model, when do you expect that model to be launched?
Furthermore, what do you think they're going to do to make it as "safe" as possible? It's funny that OpenAI didn't release GPT-2 immediately to the public because of safety worries, but has since been releasing models without the same care for safety, and I imagine this will continue with GPT-5.
[0] https://www.zdnet.com/article/openai-is-training-gpt-4s-successor-here-are-3-big-upgrades-to-expect-from-gpt-5/
[1] https://openai.com/index/openai-board-forms-safety-and-security-committee/
- A steady increment of GPT-n+1 every six months, for marketing purposes.
- Each will improve on the last by smaller and smaller margins.
- Hallucinations won't be fixed anytime soon.
- We will hit a bit of a winter: the hype was enormous, but as with self-driving cars, the devil is in the details. The general public will realize these things are essentially just giving us averages.
- A big market will emerge around "authenticity" and "verified texts" as the internet continues to get flooded with AI-generated content.
You already failed, then: GPT-4 was released more than a year ago.
But there is little to engage with in your comment; I'd be most curious to explore what you believe its capabilities might be, and what its societal impact will be.
Since the start of their partnership in 2019, OpenAI has primarily utilized Microsoft's Azure data centers for training its models. In 2023, Microsoft acquired approximately 150,000 H100 GPUs. [1]
The initial version of GPT-4 ran on a cluster of A100 GPUs. It is likely that GPT-5 will run on the newly acquired H100 GPUs, and it is plausible that GPT-4 Turbo and GPT-4o also utilize this infrastructure. The inference speed of GPT-5 should not be significantly slower than that of GPT-4 to ensure it remains practical for most applications.
Assuming the H100 is 4.6 times faster for inference than the A100 [2], this gives us a lower bound for performance expectations. I expect GPT-5 to be at least five times larger in terms of model parameters. Given that both the A100 and H100 top out at 80 GB of memory, it is unlikely we will see a single gigantic model. Instead, we can expect an increase in the number of experts. If GPT-4 operates as a mixture of experts with 8x220 billion parameters, then GPT-5 might scale up to something like 40x220 billion parameters (rough arithmetic sketch below the links). However, the exact release date, safety measures, and benchmark performance of GPT-5 remain uncertain.
[1]: https://www.tomshardware.com/tech-industry/nvidia-ai-and-hpc...
[2]: https://nvidia.github.io/TensorRT-LLM/blogs/H100vsA100.html
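To make the arithmetic explicit, here's a rough back-of-the-envelope sketch in Python. Every number in it is a rumor or an assumption from this comment (the 4.6x speedup, the 8x220B MoE layout), not an official figure:

    # Back-of-the-envelope sketch; all figures are rumors/assumptions, not official.
    h100_vs_a100_speedup = 4.6      # assumed inference speedup from [2]
    gpt4_experts = 8                # rumored GPT-4 mixture-of-experts layout (unconfirmed)
    params_per_expert_b = 220       # rumored expert size in billions (unconfirmed)
    growth_factor = 5               # "at least five times larger", roughly the hardware speedup

    gpt4_total_b = gpt4_experts * params_per_expert_b      # 1760B total
    gpt5_total_b = gpt4_total_b * growth_factor            # 8800B total
    gpt5_experts = gpt5_total_b / params_per_expert_b      # 40 experts of 220B each

    print(f"GPT-4 (rumored): {gpt4_experts} x {params_per_expert_b}B = {gpt4_total_b}B")
    print(f"GPT-5 (guess):   {gpt5_experts:.0f} x {params_per_expert_b}B = {gpt5_total_b:.0f}B")

The point is just that keeping per-token latency roughly constant on hardware that is ~5x faster lets the parameter count grow by roughly the same factor, and with 80 GB per GPU that growth would have to come from more experts rather than one giant dense network.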
If you mean hallucinations, I don't think that will ever really be solved. I think people just have to learn that LLMs are not divine oracles that are always correct, just like the training data generated by flawed humans, who are often wrong or outright lying.
Garbage in, garbage out.
Not saying that AI isn't useful. But expecting what is basically a "human simulator" not to inherit humanity's flaws is a bit disingenuous.
This is true by definition, since a 'hallucination' isn't a failure condition in which the system isn't working as designed; it's just a post-hoc term for when probabilistic output doesn't meet our expectations, which is inherent to the nature of all probabilistic systems.
Within the system, there is no distinction between 'hallucination' and 'non-hallucination'. LLMs are applying the same stochastic process in all cases, and the criterion for whether we call their output a 'hallucination' is entirely external to their functioning.
Strictly speaking, LLMs are always hallucinating, since they are always generating inferences from hard-coded statistical models and have no awareness of the semantic meaning of the tokens they correlate, nor any internal criterion of external correctness.
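A toy sketch of that point (the numbers are made up, just to show that one and the same sampling mechanism produces every output):

    # Toy illustration: the same sampling step produces every token, whether
    # we later judge the result "correct" or a "hallucination".
    import numpy as np

    def sample_next_token(logits, temperature=1.0):
        probs = np.exp(np.array(logits) / temperature)
        probs /= probs.sum()
        return np.random.choice(len(probs), p=probs)

    # Made-up scores for the candidate next tokens ["Paris", "Lyon", "Mars"].
    candidates = ["Paris", "Lyon", "Mars"]
    logits = [5.0, 2.0, 1.5]
    print(candidates[sample_next_token(logits)])

Most draws produce "Paris", but "Mars" always has nonzero probability, and when it comes up nothing inside the model has failed; we simply label that outcome a hallucination after the fact.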
> I think people just have to learn that LLMs are not divine oracles that are always correct.
We all need to keep repeating the mantra "all models are wrong; some models are useful."
This seems impossible to me. Many of the tasks I use GPT for inherently require understanding and thinking.
There is actually a big difference even between GPT-4 and GPT-4o when used for programming. The latter produces much bigger chunks of code and doesn't forget variable names; my guess is that's due to a larger context window.
From what is available, GPT-5 should be more brainwashed and actually a collection of models covering image, sound, and maybe video, plus some algorithms for web search, data extraction, and strict text formatting. OAI will likely use GPT-4 to produce some high-quality training data; this way they can make GPT-5 'smarter', better at logic and problem solving.
GPT-4 is not an LLM but a complex software system, which has LLM(s) at its core but also other components like RAG, a toxicity filter, an apologizing mechanism, expert systems, etc. "GPT-4" is a product / marketing name. For OpenAI, this would be logical for performance and business reasons. It also explains how they can tune it, the apparent secrecy about the architecture, etc.
It's also logical to make small, incremental changes to this system instead of building whatever GPT-5 would mean from the ground up. So I expect "GPT-5" is also just a marketing name for a slightly better black-box (to us) system and product line.
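A purely illustrative sketch of the kind of layered system the parent describes; the component names and behavior here are hypothetical, not OpenAI's actual (undisclosed) design:

    # Hypothetical layered pipeline: filter -> retrieve -> LLM -> filter.
    # Not OpenAI's real architecture, which is not public.
    BLOCKLIST = {"forbidden_topic"}   # stand-in for a real moderation model

    def moderate(text):
        # Return a canned refusal if the text trips the (toy) filter.
        if any(term in text.lower() for term in BLOCKLIST):
            return "I'm sorry, but I can't help with that."
        return None

    def answer(user_message, llm, retriever):
        refusal = moderate(user_message)
        if refusal:
            return refusal
        context = retriever.search(user_message)       # hypothetical RAG step
        prompt = f"Context:\n{context}\n\nUser: {user_message}"
        draft = llm.generate(prompt)                    # core LLM call
        return moderate(draft) or draft                 # filter the output too

    class FakeLLM:
        def generate(self, prompt):
            return "A generated answer grounded in the retrieved context."

    class FakeRetriever:
        def search(self, query):
            return "Some retrieved snippet."

    print(answer("What changed between GPT-4 and GPT-4 Turbo?", FakeLLM(), FakeRetriever()))

Under that view, "GPT-5" could mostly mean better versions of the surrounding pieces plus a swapped-in core model, which fits small, incremental changes rather than a ground-up rebuild.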
The Assistants API is closer to ChatGPT & what you are describing.
Edit: it should be noted that the Assistants API is somewhat model agnostic as well, so the product part of this isn't part of the inference system.
Basically the same trap as CPUs in the '90s and early 2000s, where the naming convention had to change to reflect the fact that clock speeds couldn't continue to double every two years.
I also believe they will delay the release of GPT-5 as much as possible, the reason being that it will be underwhelming (at least compared to the GPT-3.5 hype). They'll possibly release it close to a new release from Google (their main competitor).
They are the main driver of a bubble that has greatly benefited both Microsoft and NVidia and other hyperscalers, and if they release the model and show that we're in the "diminishing returns" phase, it will crash a big part of the industry, not to mention NVidia.
Companies are buying H100s and investing in expensive AI talent because they believe progress will stay fast; if progress stalls for LLMs, there'll be a huge drop in sales and CAPEX in this industry.
There are still many up-and-coming projects that rely on NVidia hardware for training, like Tesla's autopilot and others, but the bulk of H100 investment in recent years has been because of LLMs.
Also, all the new AI talent will move on to do something new, and hopefully we will have more discoveries and potential uses, but we're definitely at peak LLM.
(ps: just my opinion)
I bet it'll be focused on being a better Siri for Apple. This is good for them as a business, but innovation-wise, it's pretty meh.
It'll still suck for factual or precise information, and its reasoning will still be -1
The later iterations are heavily censored, so the public was given a bit of a transition period before things got too chaotic.
I'm sure there were many other reasons the authors themselves weren't aware of at the time, such as the inundation of AI-generated content degrading the quality of future training data.
Of course this is a roundabout explanation; there's always more detail that can be added, and I'd rather stay objective. There's always a financial motive for companies too, so take that into consideration. The hype definitely played into their marketing.