The first is the humdrum kind of ChatGPT safety: don't swear, don't be sexually explicit, don't provide instructions for committing crimes, don't reproduce copyrighted material, etc. Or preventing self-driving cars from harming pedestrians. This stuff is important but also pretty boring, and by all indications the corporations (OpenAI/MS/Google/etc.) are doing perfectly fine in this department, because it's in their profit and legal incentive to do so. They don't want to tarnish their brands. (Because when they mess up, they get shut down -- e.g. Cruise.)
The second kind is preventing AGI from enslaving/killing humanity or whatever. Which I honestly find just kind of... confusing. We're so far away from AGI that we don't know the slightest thing about what the actual practical risks will be or how to manage them. It's like asking people in the 1700s traveling by horse and carriage to design road safety standards for a future interstate highway system. Maybe it's interesting for academics to think about, but it doesn't have any relevance to anything corporations are doing currently.
Ask it to solve a logical riddle that is only a minor variation, in its items or wording, on an existing one (i.e. it's not something that is in its model).
It is unable to do so.
That's why it's not AGI.
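For concreteness, here's a minimal sketch of what running that kind of test might look like, assuming the openai Python package (v1+) with an API key in the environment; the model name and the perturbed river-crossing riddle below are just illustrative, not a claim about which puzzles a given model does or doesn't solve:

```python
# Sketch: present a lightly perturbed version of a classic riddle and see
# whether the model reasons it out or pattern-matches to the original.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Classic wolf/goat/cabbage puzzle with the items swapped, so the exact
# wording is unlikely to appear verbatim in the training data.
prompt = (
    "A farmer must ferry a fox, a duck, and a sack of corn across a river. "
    "The boat holds the farmer plus one item. Left alone together, the fox "
    "eats the duck and the duck eats the corn. List the crossings that get "
    "everything across safely."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Whether the answer holds up under small, novel perturbations like this is the empirical question being raised above.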