The pervasive failure of these institutions to meet their stated objectives isn't an isolated phenomenon. It's symptomatic of a larger, systemic problem: the widespread presence of perverse and misaligned incentives at every level of large organizations.
Unless we find a way to counteract this, attempts at reform will merely catalyze further expansion and complexity. The uncomfortable truth is that once an organization surpasses a certain size, it seems to take on a 'life of its own', gradually sacrificing its original mission in favor of self-preservation and expansion. Who has ever seen an organization like this voluntarily reform itself? I certainly haven't.
Not that anyone expected a strong argument there, I hope. It seems reasonably certain to me that humanity will go extinct eventually, one way or another, but that is also not a good argument in this situation.
If, for example, we were in scenario 2 and it was still the case that a large number of people thought AI doomsday was a serious risk, that would be a much stronger argument for taking the idea of AI doomsday seriously. If, on the other hand, we are in scenario 1, where there is a long history of people falling prey to apocalypticism, then any new doomsday claim is also more likely to be a result of apocalypticism.
I agree that it is likely that humans will go extinct eventually, but I am talking specifically about AI doomsday in this discussion.
I am not in the camp that is especially worried about the existential threat of AI. However, if AGI is to become a thing, what does the moment look like where we can see it is coming and still have time to respond?
Yes, because there were other kinds of bombs before then that could already kill many people, just at a smaller scale. There was a lot of evidence that bombs could kill people, so the idea that a more powerful bomb could kill even more people was pretty well justified.
>if AGI is to become a thing, what does the moment look like where we can see it is coming and still have time to respond?
I think this implicitly assumes that if AGI comes into existence we will have to mount some kind of response in order to prevent it from killing everyone, which is exactly the assumption my original argument says isn't justified.
Personally I believe that GPT-4, and even GPT-3, are non-superintelligent AGI already, and as far as I know they haven't killed anyone at all.
Throughout history there have been millions, if not billions, of lifeforms. So far, 100% of those that are as intelligent as humans have dominated the planet. The prior should be that the people who believe AI will come to dominate the planet are right, unless and until there is very strong evidence to the contrary.
Or... those are both wrong, because they're both massive oversimplifications! The reality is we don't have a clue what will happen, so we need to prepare for both eventualities, which is exactly what this statement on AI risk is intended to push.
This is a much more subjective claim than whether or not the world has ended. By count and biomass there are far more insects and bacteria than there are humans. It's a false equivalence, and you are trying to make my argument look wrong by comparing it to an incorrect argument that is superficially similar.
1) Throughout history many people have predicted the world would soon end, and the world did not in fact end.
2) Throughout history no one predicted the world would soon end, and the world did not in fact end.
The fact that the real world matches scenario 1 is more an indication that there is a pervasive human cognitive bias toward believing the world is about to end, one which occasionally manifests itself under the right circumstances (apocalypticism).
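To make the evidential reasoning concrete, here is a minimal Bayesian sketch in Python. Every number in it (the prior and both likelihoods) is invented purely for illustration; the point is the direction of the update, not the magnitudes.

    # Toy Bayesian update for the scenario-1 argument.
    # All probabilities are made up for illustration only.
    p_bias = 0.5             # prior: humans have a pervasive doomsday bias
    p_fail_given_bias = 0.9  # P(observe a failed doom prediction | bias)
    p_fail_given_none = 0.2  # P(observe a failed doom prediction | no bias)

    for _ in range(5):       # history supplies many failed predictions
        evidence = (p_fail_given_bias * p_bias
                    + p_fail_given_none * (1 - p_bias))
        p_bias = p_fail_given_bias * p_bias / evidence

    print(round(p_bias, 3))  # ~0.999: failed predictions favor the bias

Under these made-up numbers, a handful of failed doomsday predictions pushes the posterior on the bias hypothesis toward certainty, which is why a new doomsday claim carries much less evidential weight in scenario 1 than it would in scenario 2.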