goolulusaurs commented on O1 isn't a chat model (and that's the point)   latent.space/p/o1-skill-i... · Posted by u/gmays
goolulusaurs · a year ago
The reality is that o1 is a step away from general intelligence and back towards narrow AI. It is great for solving the kinds of math, coding and logic puzzles it has been designed for, but for many kinds of tasks, including chat and creative writing, it is actually worse than 4o. It is good at the specific kinds of reasoning tasks it was built for, much like AlphaGo is great at playing Go, but that does not actually mean it is more generally intelligent.
goolulusaurs commented on California needs real math education, not gimmicks   noahpinion.blog/p/califor... · Posted by u/jseliger
goolulusaurs · 3 years ago
In my younger years, particularly during my schooling, I held a deep resentment towards the educational system. It seemed abundantly clear to me, as a student, that schools failed to effectively foster learning and growth. However, my perspective has evolved over time. I've come to understand that the issues I observed are not unique to the school system but rather characteristic of large institutions as a whole.

The pervasive failure of these institutions to meet their stated objectives isn't an isolated phenomenon. It's symptomatic of a larger, systemic problem – the widespread presence of perverse and misaligned incentives at all levels within large organizations.

Unless we find a way to counteract this, attempts at reform will merely catalyze further expansion and complexity. The uncomfortable truth is, once an organization surpasses a certain size, it seems to take on a 'life of its own', gradually sacrificing its original mission to prioritize self-preservation and expansion. Who has ever seen an organization like this voluntarily reform itself? I certainly haven't.

goolulusaurs commented on Statement on AI Risk   safe.ai/statement-on-ai-r... · Posted by u/zone411
staunton · 3 years ago
That argument is still invalid because in scenario 2 we would not be having this discussion. No conclusions can be drawn from such past discourse about the likelihood of definite and complete extinction.

Not that anyone, I hope, expected a strong argument to be had there. It seems reasonably certain to me that humanity will go extinct one way or another eventually. That is also not a good argument in this situation.

goolulusaurs · 3 years ago
It depends on what you mean by "this discussion", but I don't think that follows.

If for example, we were in scenario 2 and it was still the case that a large number of people thought AI doomsday was a serious risk, then that would be a much stronger argument for taking the idea of AI doomsday seriously. If on the other hand we are in scenario 1, where there is a long history of people falling prey to apocalypticism, then that means any new doomsday claims are also more likely to be a result of apocalypticism.

I agree that it is likely that humans will go extinct eventually, but I am talking specifically about AI doomsday in this discussion.

goolulusaurs commented on Statement on AI Risk   safe.ai/statement-on-ai-r... · Posted by u/zone411
jabradoodle · 3 years ago
It was clear that nukes were a risk before they were used; that is why there was a race to create them.

I am not in the camp that is especially worried about the existential threat of AI, however, if AGI is to become a thing, what does the moment look like where we can see it is coming and still have time to respond?

goolulusaurs · 3 years ago
>It was clear that nukes were a risk before they were used; that is why there was a race to create them.

Yes, because there were other kinds of bombs before then that could already kill many people, just at a smaller scale. There was a lot of evidence that bombs could kill people, so the idea that a more powerful bomb could kill even more people was pretty well justified.

>if AGI is to become a thing, what does the moment look like where we can see it is coming and still have time to respond?

I think this implicitly assumes that if AGI comes into existence we will have to have some kind of response in order to prevent it killing everyone, which is exactly the point I am saying in my original argument isn't justified.

Personally I believe that GPT-4, and even GPT-3, are non-superintelligent AGI already, and as far as I know they haven't killed anyone at all.

goolulusaurs commented on Statement on AI Risk   safe.ai/statement-on-ai-r... · Posted by u/zone411
adverbly · 3 years ago
Fun! Let me try one:

Throughout history there have been millions, if not billions of examples of lifeforms. So far, 100% of those which are as intelligent as humans have dominated the planet. The prior should be that the people who believe AI will come to dominate the planet are right, unless and until there is very strong evidence to the contrary.

Or... those are both wrong because they're both massive oversimplifications! The reality is we don't have a clue what will happen so we need to prepare for both eventualities, which is exactly what this statement on AI risk is intended to push.

goolulusaurs · 3 years ago
> So far, 100% of those which are as intelligent as humans have dominated the planet.

This is a much more subjective claim than whether or not the world has ended. By count and biomass there are far more insects and bacteria than there are humans. It's a false equivalence, and you are trying to make my argument look wrong by comparing it to an incorrect argument that is superficially similar.

goolulusaurs commented on Statement on AI Risk   safe.ai/statement-on-ai-r... · Posted by u/zone411
jackbrookes · 3 years ago
Of course everyone has been wrong. If they were right, you wouldn't be here talking about it. It shouldn't be surprising that everyone has been wrong before.

goolulusaurs · 3 years ago
Consider two different scenarios:

1) Throughout history many people have predicted the world would soon end, and the world did not in fact end.

2) Throughout history no one predicted the world would soon end, and the world did not in fact end.

The fact that the real world is aligned with scenario 1 is more an indication that there exists a pervasive human cognitive bias to think that the world is going to end, which occasionally manifests itself in the right circumstances (apocalypticism).

goolulusaurs commented on Statement on AI Risk   safe.ai/statement-on-ai-r... · Posted by u/zone411
jabradoodle · 3 years ago
What constitutes strong evidence? The obvious counter to your point is that an intelligence explosion would leave you with no time to react.
goolulusaurs · 3 years ago
Well, for example I believe that nukes represent an existential risk, because they have already been used to kill thousands of people in a short period of time. What you are saying doesn't really counter my point at all though, it is another vague theoretical argument.
goolulusaurs commented on Statement on AI Risk   safe.ai/statement-on-ai-r... · Posted by u/zone411
goolulusaurs · 3 years ago
Throughout history there have been hundreds, if not thousands of examples of people and groups who thought the end of the world was imminent. So far, 100% of those people have been wrong. The prior should be that the people who believe in AI doomsday scenarios are wrong also, unless and until there is very strong evidence to the contrary. Vague theoretical arguments are not sufficient, as there are many organizations throughout history who have made similar vague theoretical arguments that the world would end and they were all wrong.

https://en.wikipedia.org/wiki/Category:Apocalyptic_groups

u/goolulusaurs

Karma: 329 · Cake day: November 29, 2012