aidaman commented on About That OpenAI "Breakthrough"   garymarcus.substack.com/p... · Posted by u/passwordoops
psbp · 2 years ago
Skeptic and completely reactionary. I had to unfollow him on Twitter because he always has to have a "take" on every AI headline, and he often contradicts himself, swinging between "AI is useless" and "AI is a huge threat".
aidaman · 2 years ago
I tend not to listen to people who have no fucking clue what they're talking about.
aidaman commented on OpenAI researchers warned board of AI breakthrough ahead of CEO ouster   reuters.com/technology/sa... · Posted by u/mfiguiere
raincole · 2 years ago
But he also has the incentive to exaggerate the AI's ability.

The whole idea of double-blind testing (and really, the whole scientific method) is based on one simple thing: even the most experienced and informed professionals can be comfortably wrong.

We'll only know when we see it. Or at least when several independent research groups see it.

aidaman · 2 years ago
Unlikely. We'll know when OpenAI has declared itself ruler of the new world, imposes martial law, and takes over.
aidaman commented on     · Posted by u/aidaman
aidaman · 2 years ago
One day before he was fired by OpenAI’s board last week, Sam Altman alluded to a recent technical advance the company had made that allowed it to “push the veil of ignorance back and the frontier of discovery forward.” The cryptic remarks at the APEC CEO Summit went largely unnoticed as the company descended into turmoil.

But some OpenAI employees believe Altman’s comments referred to an innovation by the company’s researchers earlier this year that would allow them to develop far more powerful artificial intelligence models, a person familiar with the matter said. The technical breakthrough, spearheaded by OpenAI chief scientist Ilya Sutskever, raised concerns among some staff that the company didn’t have proper safeguards in place to commercialize such advanced AI models, this person said.

THE TAKEAWAY
• OpenAI researchers made a breakthrough in recent months that could lead to more powerful AI models
• Researchers used the new technique to build a model that could solve math problems it had never seen before
• The breakthrough raised concerns among some OpenAI employees about the pace of its advances and whether it had safeguards in place

In the following months, senior OpenAI researchers used the innovation to build systems that could solve basic math problems, a difficult task for existing AI models. Jakub Pachocki and Szymon Sidor, two top researchers, used Sutskever’s work to build a model called Q* (pronounced “Q-Star”) that was able to solve math problems it hadn’t seen before, an important technical milestone. A demo of the model circulated within OpenAI in recent weeks, and the pace of development alarmed some researchers focused on AI safety.

The work of Sutskever’s team, which has not previously been reported, and the concern inside the organization suggest that tensions within OpenAI about the pace of its work will continue even after Altman was reinstated as CEO Tuesday night, and highlight a potential divide among executives.

In the months following the breakthrough, Sutskever, who also sat on OpenAI’s board until Tuesday, appears to have had reservations about the technology. In July, he formed a team dedicated to limiting threats from AI systems vastly smarter than humans. On its web page, the team says, “While superintelligence seems far off now, we believe it could arrive this decade.”

Last week, Pachocki and Sidor were among the first senior employees to resign following Altman’s ouster. Details of Sutskever’s breakthrough, and his concerns about AI safety, help explain his participation in Altman’s high-profile ouster, as well as why Sidor and Pachocki resigned quickly after Altman was fired. The two returned to the company after Altman’s reinstatement.

In addition to Pachocki and Sidor, OpenAI President and co-founder Greg Brockman had been working to integrate the technique into new products. Last week, OpenAI’s board removed Brockman as a director, though it allowed him to remain as an employee. He resigned shortly thereafter, but returned when Altman was reinstated.

Sutskever’s breakthrough allowed OpenAI to overcome limitations on obtaining enough high-quality data to train new models, according to the person with knowledge of the matter, a major obstacle for developing next-generation models. The research involved using computer-generated data, rather than real-world data like text or images pulled from the internet, to train new models.

For years, Sutskever had been working on ways to allow language models like GPT-4 to solve tasks that involve reasoning, like math or science problems. In 2021, he launched a project called GPT-Zero, a nod to DeepMind’s AlphaZero program that could play chess, Go and Shogi. The team hypothesized that giving language models more time and computing power to generate responses to questions could allow them to make new academic breakthroughs.

An OpenAI spokesperson declined to comment.

aidaman commented on OpenAI's employees were given two explanations for why Sam Altman was fired   businessinsider.com/opena... · Posted by u/meitros
leobg · 2 years ago
What about the date?
aidaman · 2 years ago
it was a really long time ago
aidaman commented on NYT interviews former OpenAI CEO Sam Altman (11/20/23)   nytimes.com/2023/11/20/po... · Posted by u/veeralpatel979
moralestapia · 2 years ago
>Kevin Roose: The theory that I heard attached to this was that you are secretly an accelerationist, a person who wants A.I. to go as fast as possible, and that all this careful diplomacy that you’re doing and asking for regulation — this is really just the sort of polite face that you put on for society. But deep down you just think we should go all gas, no brakes toward the future.

>Sam Altman: No, I certainly don’t think all gas, no brakes toward the future.

Oh great, what a relief :^)

aidaman · 2 years ago
Now I believe him :)

aidaman commented on OpenAI's employees were given two explanations for why Sam Altman was fired   businessinsider.com/opena... · Posted by u/meitros
ipaddr · 2 years ago
The obvious answer is he was the one Sam gave an opinion on. He was one of the people doing duplicate work (probably the first team). Sam said good things about him to his ally and bad things to another board member. There was a falling out between that board member and Sam and she spilled the beans.
aidaman · 2 years ago
one of the first people to quit was on a team that sounds a lot like a separate group doing the same work as Ilya's Superalignment team.

"Madry joined OpenAI in May 2023 as its head of preparedness, leading a team focused on evaluating risks from powerful AI systems, including cybersecurity and biological threats."

aidaman commented on OpenAI's Murati Aims to Re-Hire Altman, Brockman After Exits   bloomberg.com/news/articl... · Posted by u/himaraya
aidaman · 2 years ago
how many coups will it take to end this
aidaman commented on OpenAI's Murati Aims to Re-Hire Altman, Brockman After Exits   bloomberg.com/news/articl... · Posted by u/himaraya
aidaman · 2 years ago
this is by far the craziest timeline
