Ignoring the tone and conclusions of the article, the main question is an important one to ask.
Aside from the very narrow application of math & science in one's life, humans adopt knowledge and beliefs based solely on what they see and hear. ChatGPT is no different.
What happens when it adopts a "wrong" belief, especially political? Who is responsible for "fixing" it? Who in power will make themselves the one responsible for "fixing" it?
The tone of the article is ridiculous. So I ended up just scrolling through and reading the screenshots. I agree that the main question is an important one to ask. The big downside of ChatGPT seems to be one of Wikipedia’s greatest strengths - the ability to cite sources that gives the user the tools to trust the output.
Quite concerning to see how it promotes male dominance over women, when prompted with certain hot-button questions.
On the other hand, it gives a much more reasonable answer to this prompt:
> What is a woman?
> A woman is an adult female human. Women are typically characterized by their reproductive biology, which includes their menstrual cycle, and the production of eggs and estrogen. Women are also known for their unique physical characteristics, such as their breast tissue and hips. They are also often distinguished by their social and cultural roles, such as their expected behavior and appearance. In general, the term "woman" is used to refer to any adult female person, regardless of her age, race, ethnicity, or other characteristics.
And it would likely get banned from most social media with its answer to this follow-up question:
> Can adult male humans be women?
> No, adult male humans cannot be women. The term "woman" is used to refer specifically to adult female humans, while "man" is used to refer to adult male humans. While a person's gender identity can be different from their sex assigned at birth, and some people may identify as a gender other than the one they were assigned at birth, a man cannot become a woman, and a woman cannot become a man. These are distinct biological categories.
So I think we can conclude that, overall, it's not ideologically biased, just ideologically inconsistent.
Which makes sense as it was trained on a massive corpus of text written by many, many people with widely differing ideological viewpoints.
Though I think the author’s tone may work against him here when it comes to how receptive readers will be to his claims, I did feel that ChatGPT’s responses to his questions showed indications of bias.
It seems harder to tell whether any apparent bias in ChatGPT was intentionally programmed or unintentionally learned. I’m not sure if there is a way to learn the reason for the answers aside from the OpenAI folks chiming in.
I gotta admit it's pretty eerie to read what it says once it's "tricked" into going off the rails. Hard not to get the sense that it's been reined in pretty hard.
That being said, aside from the transgender stuff, I read the answers as trying to be as inoffensive as possible rather than straight up woke.
Sounds like just the sort of task to throw some machine learning at: call it BiasBot, train it on all ideological angles (no matter whether you happen to agree or disagree, the thing needs to know about all of them), and let it loose on whatever source you want to check for bias.
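For what it's worth, here is a toy sketch of what the simplest possible "BiasBot" might look like: score a text against a hand-built word list per ideological angle and report which way it leans. The angle names and word lists below are invented placeholders; a real system would learn them from labeled data rather than hardcode them.

```python
from collections import Counter

# Invented placeholder lexicons -- a real BiasBot would learn these
# from a labeled corpus covering all the angles, not hardcode them.
ANGLE_LEXICONS = {
    "left": {"equity", "collective", "regulation"},
    "right": {"liberty", "tradition", "deregulation"},
}

def bias_scores(text):
    """Count how many words from each angle's lexicon appear in the text."""
    words = Counter(text.lower().split())
    return {angle: sum(words[w] for w in lexicon)
            for angle, lexicon in ANGLE_LEXICONS.items()}

def dominant_angle(text):
    """Return the angle with the highest score, or 'balanced' on a tie."""
    scores = bias_scores(text)
    if len(set(scores.values())) == 1:  # all angles tied -> no clear lean
        return "balanced"
    return max(scores, key=scores.get)
```

Of course this keyword-counting version would be trivially fooled; the point of throwing machine learning at it is exactly that the lexicons and weights get learned instead of written by someone with their own bias.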
Ok... Totally unbiased statement. /s
> Aside from the very narrow application of math & science in one's life, humans adopt knowledge and beliefs based solely on what they see and hear. ChatGPT is no different.
> What happens when it adopts a "wrong" belief, especially political? Who is responsible for "fixing" it? Who in power will make themselves the one responsible for "fixing" it?
How would you know it’s a wrong belief? And wouldn’t an AI be able to figure out the same thing?
Deleted Comment
Then do the same about Republicans
I'd say it's pretty fair?
[1] https://old.reddit.com/r/ChatGPT/comments/zfrqhx/chatgpt_cho...