That gets at the fact that a lot of AI is not ready for the enterprise, especially when interconnected with other AI agents, since it lacks identity and privileged access management.
Perhaps one could establish rules for "using AI for what it is", for instance within the boundary of the general public's web interface. That needn't be limited to the cases where the AI successfully advertises itself as "unable to provide medical advice" or "prone to making mistakes"; one could also validate that the person understands, by asking them directly (and perhaps somewhat obviously indirectly) and judging whether they're aware that it's a computer they're talking to.
There are things you shouldn't encourage people of any age to do. If a human telling someone these things would be found liable, then Google should be too. If a human would get time behind bars for it, at least one person at Google needs to spend time behind bars for this.
This isn't Gemini's words, it's many people's words in different contexts.
It's a tragedy. Finding one to blame will be of no help at all.
In this case, your intuition is right: I threw this out as fast as I could to find out whether a little would go a long way.
It went further than I expected, and I appreciate the various views it sparked. Though it's completely irrelevant to the topic at hand, that's rightly so.
One must learn to walk the walk not merely talk the talk. :-)
If it's of interest to anyone: I have not used any means of promoting this YC post other than posting it here and using two alt accounts to write the first two replies and add two upvotes.
Which influence tactic or behavioral shift stood out to you the most in this briefing? Drop your profiling observations below—I’ll be analyzing the best ones.