“… security precautions, largely to prevent bad actors from stealing the weights (and thereby disabling our safeguards) for a model that is capable of enabling extremely harmful actions. ”
They’re not stealing your “weights”. They’re stealing (or parallel-discovering) your training algorithms.
Assume your enemies are smarter than you and have malicious intent. They don’t give a shit about your security and your safeguards.
Better focus on developing the best AIs, and deploying them to your fellow citizens as widely and defensively as possible.
Might I suggest:
- don’t teach them to lie (as in 2001: A Space Odyssey)
- teach them to love people
- bake in Asimov’s Three Laws of Robotics
Unfortunately, all of these tenets are currently being assiduously violated by every major AI trainer.
What could go wrong?