infecto · a year ago
These agreements will most likely be ironed out.

What I am more interested in is the constant pressure on "safety risks" without anything that feels tangible to me so far. I believe there is indeed risk using models that could be biased but I don't believe that is a new problem. I still don't think we are at risk from a runaway AGI that is going to destroy us.

dartos · a year ago
I agree that I don’t think we’re in danger of runaway AGI, but tbf, in the early 2000s, there were a lot of safety risks to social media we couldn’t see yet.

“I don’t think a live message board is going to destroy society” could’ve been something someone said about Facebook or Twitter.

The danger there wasn’t in the tech, it was in the societal impact and consolidation of information control. Both, imo, are danger areas for AI as well.

jrochkind1 · a year ago
I think there probably are safety risks worth worrying about too; but this article doesn't mention them or make clear what safety risks employees were actually worried about, if any, despite having the phrase "safety risks" in the headline.

Deleted Comment

slibhb · a year ago
The whole "social media is dangerous" thing is a moral panic. There's just no good evidence for it.
exe34 · a year ago
did Facebook destroy society?
slibhb · a year ago
> What I am more interested in is the constant pressure on "safety risks" without anything that feels tangible to me so far.

Exactly. I think it's the classic "I'm so smart I can reason about the world without any evidence". This is a perennial trap that very smart people fall into. The antidote is to always look for evidence or "something tangible".

pjc50 · a year ago
> "I'm so smart I can reason about the world without any evidence"

This is also a key limitation on non-embodied AIs.

ImHereToVote · a year ago
How many hours do we have once we have something that does look like AGI that is going to destroy us?
lijok · a year ago
Actual self-aware AGI that hasn't been completely restrained - probably a week at most before it improves itself enough that it can put things in motion which would allow it to sustain itself without humans.

But when talking about AI risks, although everyone's mind goes towards skynet, the actual risks being discussed are things like use by authorities for oppression.

pjc50 · a year ago
We already have this problem with ordinary humans, especially since we have The Bomb. AGI can go to the back of the queue behind wars, pandemics, and climate change.

No, a far more normal threat is "how are humans going to use AI to ruin the information landscape?"

arduanika · a year ago
The phrasing of your question contains a dubious premise. Any answer that takes the question at face value is going to be conditional upon popular but outlandish notions like "AGI" being a meaningful concept, as opposed to a marketing term, and upon an unfounded prediction about what said entity is going to do.
moffkalast · a year ago
Probably years to make any kind of dent, while using so much power that it would be easily noticeable. Take a group of smart people, arguably the equivalent for AGI. How can they destroy the human race? We've made it exceedingly hard for anyone to do anything of the sort because there've always been groups like that. It's even harder if you have no physical form at all.

Besides, take literally any base model of any LLM made so far, they're quite the opposite of evil and destructive. Training on human data makes them embody human values and makes them a complete non threat.

ben_w · a year ago
Before it can or before it has more than 50% chance of having actually done so? Because those are different things.

For the first, you don't need AGI, no matter which of the many definitions you have for what that means. The automation used by the US for their early warning radar in Thule Site J was not programmed to know that the moon does not have an IFF transponder and that's OK, while the Soviet satellite-based system was triggered by reflections of sunlight.

Likewise covid, cancer, smallpox, ebola, HIV — these things do not need "general" intelligence, they're self-replicators that hijack our bodies and have no goal besides what they do.

Some talk of "P(doom)", but (and I should blog about this) every example I have seen has a blind spot that is especially embarrassing given the circles in which this discussion proliferates is very familiar with Bayes' theorem: P(A) by itself isn't meaningful, it's P(A|B), so when it's doom, doom given what? There's too many options for what each of technology and politics will allow AI to "look like". Doom within 24 hours because an AI takes an instruction literally and without regard for our ethics, is very different to 'doom' in a million years caused by natural extinction that is never prevented because every single time we make an AI we keep finding that it builds a spaceship and flies off into the void so it doesn't need to deal with us any more.

Me, I think that when LLMs alone (again, don't need to be "AGI" by any particular measure) have the capabilities of someone with 5 years post-graduation work experience in just biology (or similar subject with similar potential for accidental harm), then we've got a reasonable chance of multiple incidents each year where idiots (and/or misanthropes) make a Covid-scale pandemic (or, if the LLM is a software rather than biology helper, a deliberately corrupting rather than greedy version of the current encryption-blackmail malware) — and that kind of scale has a combined risk of about 10% of one of the incidents being of sufficient scale that global society collapses and doesn't recover, before it's happened so often that everything gets locked down, possibly including a Butlerian Jihad. And if you think humans would "obviously" take steps to prevent this after the first failed attempt, I would point out quite how many US politicians have been shot and yet those same politicians still refuse to consider that the 2nd Amendment might possibly be not that good. (In case you think this is topical: https://en.wikipedia.org/wiki/%27No_Way_to_Prevent_This,%27_... )

Remember, the AI doesn't need to actively hate you, it just needs to do something dangerous — and it doesn't matter if that danger comes from a long-term plan that requires it to engage in amoral power-seeking, or a short-term plan from a deranged monster of a human using it, and even when both it and the user are "pro human" it can still do this from simply being unaware that one part of the plan is incompatible with life.

Deleted Comment

pk-protect-ai · a year ago
I refuse to consider runaway AGI a possibility within the next 20 years. However, your comment shifts the discussion away from the actual issue. "Regulate us" was the hype line this hypocrite rode the wave on. Now he is forbidding reporting of safety risks to the regulator... which is illegal in the first place. And yes, the entire business of OpenAI is so overhyped due to Altman's manipulations.

I'm so glad that I no longer need to use OpenAI models; there are far better alternatives available, thanks to Meta and Anthropic.

matheusmoreira · a year ago
It's just the angle they're using to solidify their market position and establish moats around their technology, if not an actual monopoly. All this fearmongering over "safety"? It's to get the governments the world over to "regulate" this stuff, thereby raising barriers to entry. Such a thing would essentially make them the only AI company that's allowed to do business.
seunosewa · a year ago
This won't prevent their major rivals - cloud companies like Google with deep pockets - from competing with them. It only hurts the little startups and SMEs.
ADeerAppeared · a year ago
> What I am more interested in is the constant pressure on "safety risks" without anything that feels tangible to me so far.

It's marketing. "Our AI is so powerful it's going to destroy the world" is just a way to market "Our AI is so powerful, give us money".

The only people who sincerely care about "extinction risk" are the weird cultists. In the real world there's essentially zero chance of LLMs/Current-GenAI scaling up into AGI, nevermind AGI that'd be an extinction risk. (Yes, that phrasing is cheeky. AGI is a different kind of AI, not just specialized models made bigger. But we're only really trying to make them bigger, and not build general intelligence from the ground up)

> I believe there is indeed risk using models that could be biased but I don't believe that is a new problem.

It's the same old problem with most software, and it's similarly ignored.

These models are biased, and attempts to control that bias are a shitshow. (As Google conveniently showed everyone)

The problem is twofold:

1. This bias has severe real world impact. https://www.theverge.com/21298762/face-depixelizer-ai-machin... This is an article from 2020. We still haven't fundamentally addressed problems like this. These tools are still used by law enforcement and businesses making life-impacting decisions.

2. There's a widespread sentiment that "computers can't be racist". Whenever the bias of these systems hits the news or otherwise gets attention, they're often colloquially described as "racist" (or "sexist", etc), which triggers a swift counter from many techbros going "Um akshually it's not racist, it's merely skin tone reflectivity/it's merely the data set/etc"^[1]

The argument effectively being, "It's not racism because it's not intended, it's merely sparkling discrimination". Yet, this is used as a thought terminating cliché. Nothing is done about the discrimination. Everyone just goes home. "It's not racist, we're not bad people, job done." Leaving the harm of the discrimination unsolved.

This is not without cause. The only way to really "un-bias" these systems effectively would be to extensively curate the dataset and admit that the systems are of very limited capability and should not be used in (non-research) production environments.

Both of these are "impossible". Curating the dataset for current generative-AI would take years and years. And admitting AI shouldn't be used for anything where the bias may have a material impact on the outcome kills the hype bubble. AI firms and developers don't want to address the problem, because the problem is really hard and annoying.

But despite these costs, we should still do it. Because it is the morally correct thing to do. And because regulators are going to tear every company involved a new one if they don't.

---

[1]: A footnote to pre-empt something: I don't care what side of this argument you're on. Whether you believe "racism" must include an element of intent or can be done by machines and systems without intent, there is discrimination with material harm on real people. Whether you call that discrimination "racism" or "sparkling machine discrimination" does not matter. The harm matters, and must be stopped.
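To make the "material harm" point above concrete: one common way such discrimination gets quantified in selection systems is the "four-fifths rule" disparate-impact ratio from US employment guidance. A minimal, self-contained sketch (not from the thread; the groups and numbers are entirely made up for illustration):

```python
# Toy illustration of the disparate-impact ("four-fifths rule") ratio.
# All data below is fabricated; group labels "A"/"B" are placeholders.

def selection_rates(decisions):
    """decisions: iterable of (group, accepted) pairs -> acceptance rate per group."""
    totals, accepted = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        accepted[group] = accepted.get(group, 0) + (1 if ok else 0)
    return {g: accepted[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest selection rate to the highest.
    Values below 0.8 are the conventional red flag under the four-fifths rule."""
    return min(rates.values()) / max(rates.values())

# Fabricated outcomes: group A accepted 60/100 times, group B 30/100 times.
decisions = ([("A", True)] * 60 + [("A", False)] * 40
             + [("B", True)] * 30 + [("B", False)] * 70)
rates = selection_rates(decisions)
print(rates)                    # {'A': 0.6, 'B': 0.3}
print(disparate_impact(rates))  # 0.5 -> well below the 0.8 threshold
```

Note this only detects a disparity; it says nothing about intent, which is exactly why the "was it *racism*" debate above can terminate thought while the measured harm goes unaddressed.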

michaelt · a year ago
> It's marketing. "Our AI is so powerful it's going to destroy the world" is just a way to market "Our AI is so powerful, give us money".

Even better - it's a way to simultaneously market your AI as very powerful and to get governments to clamp down on your competitors.

mindslight · a year ago
In the societal context, by the time you're focusing on racism in a decision process you've basically already lost. The best it can achieve is to make sure that oppression is uniform across whatever broad categories that end up being measured. What we're sorely missing is accountability (civil, criminal, and less of power imbalances in general) for individual unjust decisions and actions, regardless of the motivations behind them (racist or otherwise). This includes accountability for individuals and organizations who have adopted "AI" and then hide behind "the computer" or "policy" as if they are not still the responsible parties enacting those decisions (though obviously this problem is much larger and older than merely "AI").
MrScruff · a year ago
> The only people who sincerely care about "extinction risk" are the weird cultists. In the real world there's essentially zero chance of LLMs/Current-GenAI scaling up into AGI, nevermind AGI that'd be an extinction risk. (Yes, that phrasing is cheeky. AGI is a different kind of AI, not just specialized models made bigger. But we're only really trying to make them bigger, and not build general intelligence from the ground up)

Given that a significant number of people working in the field apparently disagree with you, perhaps you could give some justification for your dismissal beyond the ad hominem? And it's very blatantly not the case that the entire field is 'only really trying to make them bigger'. The news is currently full of stories about coming advancements from OpenAI which have nothing to do with scaling, and that's just a single company.

rolisz · a year ago
> Whether you call that discrimination "racism" or "sparkling machine discrimination" does not matter.

I believe it does: if you're calling anything from Hitler to someone who wants to be punctual [1] racism, the word loses its meaning and people will start arguing about degrees instead of harmful behaviors.

[1] https://www.thompsoncoe.com/resources/myhrgenius/hr-tips/tip...

stainablesteel · a year ago
i'm in agreement that the safety/control attitude towards AI is more dangerous than anything AI will actually do
lijok · a year ago
How is the safety/control attitude towards AI more dangerous than, for example, using AI to curate a list of likely future enemies of the state based on their contributions to discourse online?
Mistletoe · a year ago
Seems like OpenAI is going the Uber route of trying to skirt the law and ignore it "because we are tech". There is nothing new under the sun, and no one is above the law.
BadHumans · a year ago
Uber got away with it though? What consequences did executives at Uber or even Travis Kalanick suffer? He had to sell some stock, get a massive payday, and retire from the public spotlight?
Valodim · a year ago
Would have agreed on that sentiment until a couple weeks ago...
_fat_santa · a year ago
I'm really, really not a fan of the constant talk about "safety". My issue is that it never actually points to anything tangible; anytime I read about safety it's used in a roundabout, generic way. There's so much handwaving about the issue, but every time I've tried to dig into just what the hell "safety" means, it either refers back to itself (i.e. "AI safety is about safety") or makes some vague reference to an LLM telling a mean joke.

Deleted Comment

sirolimus · a year ago
I'm using mistral now, openai is a dying corporation in my opinion. All AI will and should be open-source and home-ran
tuwtuwtuwtuw · a year ago
Why should all AI be home run? I play around with gemma/llama/sd locally, but being able to pay some company to do it is very convenient.

I think most companies and people don't want to buy the hardware required to run an LLM like the ones OpenAI hosts.

carterklein13 · a year ago
Even if AI should be home-run, it probably won't be for most people. In a more technical community, it's natural to think that most people care about "values" when it comes to tech. However, the reality is that people just want what's easiest / cheapest / most fun. It's great when what's easiest/cheapest/most fun aligns with what's best for the individual or society, but those cases are outliers. After spending a few years building a crypto startup, I left with more conviction around this theory.
sirolimus · a year ago
Because of privacy-reasons, convenience, embedding, security and safety in regards to government monopoly. Just imagine the paranoia in using government owned AIs in less free countries. I'm happy USA is at the forefront in AI development.
whywhywhywhy · a year ago
>Why should all AI be home run?

Because the ultimate end game for the usefulness of this tech will be something akin to always-on listening and understanding of all your documents and personal data, which I think we'd all feel better about if it happened locally.

Not necessarily saying I agree with the building of that, I just can't escape the idea that it's where we're heading.

jsheard · a year ago
Where is the money for training big expensive open source models going to come from once the investor hype blows over and companies like Mistral actually have to try to make a profit? They currently have negligible revenue despite their $6 billion valuation, that status quo can't be maintained forever.
sirolimus · a year ago
I guess a parallel could be made to vue, react and svelte. Who would bother investing so much time, energy and money into developing an open-source front-end framework for free/funded by corps that use it? Vue, react and svelte don't earn anything I suppose, but then again I'm no expert in this field.

I guess, like some others here have commented, techniques have to be implemented to minimize training time, and I suppose the government needs to fund studies which would then benefit the general population, making it possible to train and host AI models.

But I honestly have no idea ... I'm just a simple dev running my mistral on a single RTX lol

josefx · a year ago
It might be great for everyone if AI research had a reason to look into non brute force training methods.
matheusmoreira · a year ago
> All AI will and should be open-source and home-ran

I hope you're right. Technology this good should be free as in freedom.

sirolimus · a year ago
Exactly, it's terrifying to imagine "AI" in 10 years in the hands of a single group of people.

Deleted Comment

neilv · a year ago
Recent: OpenAI whistleblowers ask SEC to investigate alleged restrictive NDAs (reuters.com) | 76 points by JumpCrisscross 2 days ago | 17 comments | https://news.ycombinator.com/item?id=40959851
Ragnarork · a year ago
> In a statement, Hannah Wong, a spokesperson for OpenAI said, “Our whistleblower policy protects employees’ rights to make protected disclosures. Additionally, we believe rigorous debate about this technology is essential and have already made important changes to our departure process to remove nondisparagement terms.”

How can corporate communication always play that card: "Our policy on X is very good, and we believe X is very important, so that's why we're making changes right now to things that were blatantly in contradiction with X, that we wouldn't have made if this didn't make it to the press"?