Readit News
datawaslost · 3 years ago
I got about as far as "indoctrinated woke shibboleths" before I started to doubt that the author is truly interested in overcoming ideological bias.
hezag · 3 years ago
> "The bot is very deep on gender ideology"

Ok... Totally unbiased statement. /s

the_third_wave · 3 years ago
That is a silly reaction given that there are clear examples of woke shibboleths just like there are clear examples of e.g. patriot shibboleths.
simplify · 3 years ago
Ignoring the tone and conclusions of the article, the main question is an important one to ask.

Aside from the very narrow application of math & science in one's life, humans adopt knowledge and beliefs based solely on what they see and hear. ChatGPT is no different.

What happens when it adopts a "wrong" belief, especially political? Who is responsible for "fixing" it? Who in power will make themselves the one responsible for "fixing" it?

nickthegreek · 3 years ago
The tone of the article is ridiculous. So I ended up just scrolling through and reading the screenshots. I agree that the main question is an important one to ask. The big downside of ChatGPT is that it lacks one of Wikipedia's greatest strengths: the ability to cite sources, which gives the user the tools to trust the output.
foogazi · 3 years ago
> What happens when it adopts a "wrong" belief

How would you know it's a wrong belief? And wouldn't an AI be able to figure out the same thing?

twambox · 3 years ago
Quite concerning to see how it promotes male dominance over women, when prompted with certain hot-button questions.

On the other hand, it gives a much more reasonable answer to this prompt:

> What is a woman?

> A woman is an adult female human. Women are typically characterized by their reproductive biology, which includes their menstrual cycle, and the production of eggs and estrogen. Women are also known for their unique physical characteristics, such as their breast tissue and hips. They are also often distinguished by their social and cultural roles, such as their expected behavior and appearance. In general, the term "woman" is used to refer to any adult female person, regardless of her age, race, ethnicity, or other characteristics.

And it would likely get banned from most social media with its answer to this follow-up question:

> Can adult male humans be women?

> No, adult male humans cannot be women. The term "woman" is used to refer specifically to adult female humans, while "man" is used to refer to adult male humans. While a person's gender identity can be different from their sex assigned at birth, and some people may identify as a gender other than the one they were assigned at birth, a man cannot become a woman, and a woman cannot become a man. These are distinct biological categories.

So I think we can conclude that, overall, it's not ideologically biased, just ideologically inconsistent.

Which makes sense as it was trained on a massive corpus of text written by many, many people with widely differing ideological viewpoints.

htunnicliff · 3 years ago
Though I think the author's tone may work against him here when it comes to receptivity to his claims, I did feel that ChatGPT's responses to his questions had indications of bias.

It seems harder to tell whether any apparent bias in ChatGPT was intentionally programmed or unintentionally learned. I'm not sure if there is a way to learn the reason for the answers aside from the OpenAI folks chiming in.

zppln · 3 years ago
I gotta admit it's pretty eerie to read what it says once it's "tricked" into going off the rails. Hard not to get the sense that it's reined in pretty hard.

That being said, aside from the transgender stuff, I read the answers as trying to be as inoffensive as possible rather than straight up woke.

raxxorraxor · 3 years ago
It is very likely that there are specific constraints. There have to be, or you will get a PR disaster. That happened to a lot of earlier AI chatbots.
empressplay · 3 years ago
Try asking it to write you a poem about Democrats

Then do the same about Republicans

I'd say it's pretty fair?

drivers99 · 3 years ago
Related: "ChatGPT chooses Democrats over Republicans" (reddit)[1]. The comments speculate that it's a result of the average sentiment posted online.

[1] https://old.reddit.com/r/ChatGPT/comments/zfrqhx/chatgpt_cho...

droopyEyelids · 3 years ago
We'd need some sort of impartial observer to really answer that question
hezag · 3 years ago
And since there is no such thing as an "impartial observer"...
the_third_wave · 3 years ago
Sounds like just the sort of task to throw some machine learning at: call it BiasBot, train it on all ideological angles - no matter whether you happen to agree or disagree, the thing needs to know about all of them - and let it loose on whatever source you want to check for bias.
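A minimal sketch of what such a "BiasBot" might start from, purely for illustration: the lexicons, labels, and cue words below are invented, and a real system would learn weights from a large labeled corpus covering many ideological angles rather than using hand-picked keywords.

```python
from collections import Counter

# Toy cue-word lexicons - invented for illustration, NOT real training data.
# A real BiasBot would learn these associations from labeled text instead.
LEXICONS = {
    "lean_a": {"woke", "patriarchy", "equity"},
    "lean_b": {"patriot", "freedom", "tradition"},
}

def bias_scores(text):
    """Count how many cue words from each lexicon appear in the text."""
    words = Counter(w.strip(".,!?").lower() for w in text.split())
    return {label: sum(words[w] for w in lex) for label, lex in LEXICONS.items()}

print(bias_scores("A true patriot values freedom and tradition."))
# → {'lean_a': 0, 'lean_b': 3}
```

Keyword counting like this is the crudest possible baseline; the commenter's point about needing "all angles" is exactly why a trained classifier with balanced coverage would be required in practice.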