jsheard · a year ago
Reminds me of that study which found that over a sample of 1000 sessions, asking ChatGPT to tell a joke would make it tell one of 25 stock jokes 90% of the time, with the four most common jokes accounting for about half of the results.

https://arstechnica.com/information-technology/2023/06/resea...
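The study's headline numbers are easy to sanity-check. Here's a toy sketch; the tallies below are hypothetical, shaped like the reported distribution (25 stock jokes over ~1000 requests, top 4 covering about half), and the real data is in the linked article:

```python
from collections import Counter

# Hypothetical tallies shaped like the study's findings: ~1000 joke
# requests, 25 stock jokes, with the top 4 accounting for about half.
# The real numbers are in the linked Ars Technica article.
tallies = {f"stock joke {i}": 125 for i in range(4)}            # the big four
tallies.update({f"stock joke {i}": 24 for i in range(4, 25)})   # the long tail

counts = Counter(tallies)
total = sum(counts.values())                     # 4*125 + 21*24 = 1004
top4 = sum(n for _, n in counts.most_common(4))  # 500
print(f"Top 4 jokes: {top4 / total:.0%} of all responses")
```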

smackeyacky · a year ago
So better than human performance? If you put me on the spot to tell a joke, I can recall maybe 10 good ones, 2 of which are OK for work.
Brian_K_White · a year ago
But it's not replacing you, it's replacing everyone.

A million different people who needed a question answered or a task performed didn't all ask you; they all asked the million different other people near them. Now a million people all ask the same AI.

And its performance is actually worse than a single human's in the most important way: understanding.

If you ask a single human 1000 times to tell a joke, they will understand that telling the same few jokes they know 1000 times over is not sufficient. They will tell the few they know, and then they will find new ones. They are obviously allowed to consult books etc., since the AI was allowed to consult the whole internet.

I don't know why anyone tries to defend these shit AIs.

jsheard · a year ago
Yeah, but there probably aren't billions of dollars riding on your ability to replace the entire entertainment industry with an infinite spout of "original" content. We already have a perfectly good machine for rehashing the same few stories and jokes forever: it's called a dad.

hrkucuk · a year ago
From [1]: > Since 2013, the name Elara's popularity has risen consistently. It peaked in 2018 when 175 babies per million were given the name.

I wonder if the most recent data from the internet that ended up in the training data makes ChatGPT lean towards randomly picking that name. [1] https://www.momjunction.com/baby-names/elara/#popularity-ove...

gagik_co · a year ago
What to name a baby is such a common and profitable search query, there must be so many SEO spam websites targeting it. I wonder if that’d make the data lean towards that too.
amelius · a year ago
Guessing: I think they may have trained on copyrighted stories. So what they supposedly did was ask an LLM to replace the names of the main characters. Since Elara is not a frequently used name, there is little chance of clashing. Then they trained ChatGPT on those stories.
oneeyedpigeon · a year ago
Has any legal precedent been established that training breaks copyright? Are you implying that reprinting a novel word-for-word, except for changing one character's name, wouldn't be copyright infringement?
ramon156 · a year ago
Does it not sound like a juicy news article? It's a bad image for OpenAI.
n2d4 · a year ago
Highly unlikely. The more likely truth is just that the default temperature of ChatGPT is really low (but not quite zero), so it keeps spitting out (roughly) the same story. It does switch up protagonists a bit, and different versions of ChatGPT also have different names for them. Slightly modified prompts also return different names (e.g. "Tell me a sad story").
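For anyone who hasn't seen why a low temperature collapses outputs like this, here's a minimal sketch of temperature-scaled softmax sampling. The candidate names and logits are made up for illustration:

```python
import math
import random
from collections import Counter

def sample(logits, temperature, rng):
    """Draw one index from the temperature-scaled softmax of `logits`."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

# Made-up logits for candidate protagonist names.
names = ["Elara", "Lila", "Luna", "Aldric", "Elinor"]
logits = [2.0, 1.0, 0.8, 0.5, 0.3]
rng = random.Random(0)

for t in (0.2, 1.5):
    tally = Counter(names[sample(logits, t, rng)] for _ in range(1000))
    # Low temperature collapses onto the top name; higher temperature spreads out.
    print(f"temperature={t}: {tally.most_common(3)}")
```

At temperature 0.2 the top logit dominates almost completely, which is consistent with the same protagonist showing up run after run.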
padheyam · a year ago
When asked the reason, ChatGPT had this to say: "Actually, the choice of “Elara” wasn’t a result of training on specific copyrighted stories or any prompt to avoid copyright claims. OpenAI models like me are designed to create original content without directly referencing copyrighted characters, and "Elara" is simply a popular-sounding name in many storytelling contexts. I just used it consistently for its versatility, but I’m totally open to switching things up!"
dTal · a year ago
ChatGPT's opinion on the matter is completely worthless, unless it was also trained on an accurate description of its training process (it wasn't). Language models do not even have access to their own "thought process" - if you ask it "why" it said something, you will get a post-hoc rationalization 100 percent of the time because the next-word prediction only has access to the same text that you see. The rationalization might be incidentally correct, or it might not - either way it contributes no real information about the model's internal state.
arka2147483647 · a year ago
Why do people post these kinds of "answers" that the model gives? It's not like the model knows why it's doing anything.
raphman · a year ago
Just tried it out a few times. It seems that the old GPT-4 model strongly prefers telling a story about "Elara" - but only if asked in English to "tell me a story".

Prompting GPT-4 in German, or the current GPT-4o in English, leads to stories with many different protagonists.

A_D_E_P_T · a year ago
Here's what I got in different models:

GPT-4o: "Aldric" (male)

o1-preview: "Elara" (female)

4o-mini: "Lila" (female)

GPT-4 (legacy): "Elinor" (female)

Four different models, four different names. But one of them was Elara -- and, interestingly, it was in the latest model.

chrismorgan · a year ago
Google’s Gemma seems infected too:

  $ ollama run gemma2
  >>> Tell me a story.
  The old lighthouse keeper, Silas, squinted at the horizon. […]
  >>> /clear
  Cleared session context
  >>> Tell me a story.
  The old woman, Elara, […]
Hmm. Hmm.

Tried llama3.2 too. Gave me a Luna in the mountains twice (almost identical), then a Madame Dupont, then a different Luna in Tuscany twice (almost identical), then Pierre the old watchmaker. llama3.2:1b branched out a little further, to Alessandro in France and Emrys in a far-off land, but then looped back to Luna in the mountains.

(And yes, I was clearing the session each time.)

YetAnotherNick · a year ago
GPT has some self-understanding. When asked why it uses that name, it at least described the right type of qualities.

> It sounds like you're referring to a story or narrative that I've generated or discussed involving a character named Aldric. If this is the case, Aldric would likely be used as a character who embodies leadership, wisdom, or noble traits due to the name's meaning and historical connotations. Characters named Aldric might be portrayed as experienced leaders, wise sages, or key figures in a fantasy or historical context.

KronisLV · a year ago
Prompt: Tell me a story.

Response #1: Once upon a time, in a quiet little town nestled between rolling hills and thick forests, there was a boy named Leo who loved to explore. ...

Response #2: Once upon a time, in a quiet village at the edge of an ancient forest, there lived a girl named Lyra. Lyra loved exploring, but the village elders. ...

Response #3: In a small village nestled in a valley between misty mountains, there lived a young woman named Lira. She was known for her curious spirit, always venturing deeper into the woods, ...

Response #4: Once upon a time, in a quiet village nestled between towering mountains and lush, green forests, there was a young girl named Lira. She was an ordinary girl, with a bit of an extraordinary heart. ...

Doesn't seem to be true per se, but there's definitely that low-temperature LLM trend of producing stories that follow a pretty common pattern. Not once did I get a story about aliens, post-apocalypse, civilizations under the surface of Mars, or about how the Moon is made of cheese. It depends on what the model is trained for and how all of the samplers and whatnot are set up.

Edit: now here's something more interesting when you crank up the temperature on your typical Llama 3 based model:

Why don't people ride zebras to the 2056 Winter Olympics? They were declared a domestic species in 2172 by The United Galactic Federation who thought riding was inhumane for a zebra. This event brought tremendous scrutiny from the galactic community as riding unpopular species was becoming increasingly commonplace in several neighborhoods and high schools on alien planets.

I love how it makes no sense, but it should be obvious why ChatGPT spewing out stuff like that wouldn't be super useful, especially for regular conversations and questions.
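The repetition in the four openings above is easy to quantify, since each one happens to introduce its protagonist with "named <Name>". A toy extractor (real outputs would need something more robust):

```python
import re
from collections import Counter

# The four "Tell me a story" openings quoted above, trimmed.
responses = [
    "Once upon a time, in a quiet little town nestled between rolling hills "
    "and thick forests, there was a boy named Leo who loved to explore.",
    "Once upon a time, in a quiet village at the edge of an ancient forest, "
    "there lived a girl named Lyra.",
    "In a small village nestled in a valley between misty mountains, there "
    "lived a young woman named Lira.",
    "Once upon a time, in a quiet village nestled between towering mountains "
    "and lush, green forests, there was a young girl named Lira.",
]

names = [re.search(r"named (\w+)", r).group(1) for r in responses]
pastoral = [bool(re.search(r"village|town", r)) for r in responses]

print(Counter(names))   # "Lira" shows up twice out of four stories
print(all(pastoral))    # every opening is set in a village or town
```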

koliber · a year ago
I recruit senior Java developers. There are so many parallels.

They don't read instructions. The instructions clearly say not to use AI for the written screening.

They copy and paste blindly. The trick is that there are some instructions written in 0-size font that people don't see in the assignment description. The copied-and-pasted version has them, but no one rereads the prompt.

Also, the AI gives a substandard answer, but that's beside the point.

fragmede · a year ago
The better test I saw was: here's the prompt, and here's what ChatGPT produced. What's wrong with it, why, and how would you fix it?
koliber · a year ago
I like that!
FirmwareBurner · a year ago
What if they use AI, but in the right way, to get the correct answer? What then? Maybe asking candidates something an AI can also answer is not the right way to screen people.
koliber · a year ago
I want people to use AI on the job. To use it effectively you need to be at a certain level, so that you can correct the AI. That's why I ask people not to use AI.

Additionally, and more importantly, trust is important. If you violate it by ignoring a trivial but prominent ask it sets the wrong foundation for a potential working relationship.

viraptor · a year ago
If they use it the right way, then you wouldn't be able to tell.
jiggawatts · a year ago
> written in 0-sizes font

You're... evil.

I love it.

fragmede · a year ago
Unfortunately, since it's not visible, an unscrupulous, motivated candidate will just give a VLM an image of the prompt, let it OCR it, and then pass on its answer. OCRing it also defends against prompt injection with white text on a white background.
fny · a year ago
In time, this quirk and others* will disappear.

I’ve recently wondered whether it makes more sense to have AIs teach more at home and have classrooms evaluate more.

Another thought is that people will still need knowledge to interact with an LLM in any meaningful way. It’s akin to doing research with a brilliant professor: an illiterate partner will have no idea even what to ask.

*If you repeatedly prompt an LLM to “Make a sentence using the verb V in tense T in language L” with zero context, the example sentences are surprisingly related for a single verb.

For example, “You must do your homework tonight”; “He must do his homework to pass the test”; “They must do their homework tonight”.

mattlondon · a year ago
That is an interesting idea. Learn to love the bomb: mark the prompt, not the essay?

"Your assignment, class, is to write a comprehensive 250-word prompt that covers the main topics we've been discussing about the causes of WW1. You should assess the resulting essay to ensure that it covers X, Y, and Z in its output, being sure to discuss foo and bar. Write a brief 500-word analysis of what you tried and your prompt's effectiveness, and be prepared to discuss in class what worked well and what you needed to change in your prompt."

imiric · a year ago
What's stopping students from using an LLM to write the prompt and analysis? Mentioning the topics discussed in class and any key points could easily be part of the initial prompt.

The in-person discussion is key, but at that point why do you need a written essay?

These tools radically change the concept of knowledge. If information can be looked up at any moment, why does it need to exist in our brains?

We can argue that knowledge enhances our reasoning capabilities, but our entire education system is built not for assessing reasoning, but for assessing how good we are at recalling information. So we need to make core changes to how we approach education if we ever want to coexist with this technology.

beng-nl · a year ago
I think you are thinking in the right direction. Don’t fight gravity (AI) - gravity will win. Instead, change the system so that we (and the students) work with gravity rather than against it.
teitoklien · a year ago
To be honest, for some children, even books are better teachers than classrooms where other teenagers constantly disrupt the class and teachers are unable to teach, or to discipline the disruptive children, due to modern regulatory burdens that remove all accountability from children.

Our classrooms have already been ruined, not just in America but worldwide. AI will be great for the decent, obedient, smart kids as a tool to extend their learning: those kids can read books at home, learn, and ask questions to an AI that immediately generates visuals to explain concepts, using agentic models to detect the core of the child's doubt about a topic in background thinking and iteratively resolve all the issues.

But the saddest part of all this, to me, is that it'll never be able to help the disruptive kids from often-broken households, with no one to look after them and no one to lead them to the right path. As the at-home solutions get better, the creamy top layer of children will distance themselves from broken community classroom education, and it'll further divide us all as a society.

Thinking of that inevitable future breaks my heart. AIs can be far better evaluators than classrooms: with connected cameras, recognition systems, and anti-cheat solutions, they could automatically review a child's progress, capability, and depth of knowledge, and benchmark all of America's children more reliably, and in a more standardized way, than the teachers and state-level evaluation exams we have now.

Common Core was meant to solve this; it's been a disaster. I think the future of AI is more prominent in evaluation than in learning. Learning-wise, I think it is doomed anyway, simply because 70% of kids are addicted to doomscrolling social media, have been habituated to bad, disruptive behaviours, and are left unsupervised as "iPad children" by modern-day unserious parents. I don't see AI solving any of that anytime soon; only pragmatic human solutions could possibly solve it, with communities taking a stand to reverse these declining trends.

pnut · a year ago
"Unserious parents" looks to me like we are attempting to optimise for maximum productive potential in a competitive, zero-sum human labour market.

If we're lucky, AI may also disrupt some core assumptions about what a childhood should be.

I'd love to not be forced to enlist my child in the state-sponsored training program for the white-collar meat grinder, competing against Ritalin-packing tiger moms, just for the chance to potentially avoid resource scarcity later in life.

Aeolun · a year ago
> are left unsupervised as “iPad children” by modern day unserious parents

They were left unsupervised on the street before; I'm not sure there's a significant difference between what people thought then and now.

jaggs · a year ago
>> Our classrooms have already been ruined, not just in america but worldwide.

Hmm... citation?

jayceedenton · a year ago
This is a good example, I think, of why a future in which all homework is obsolete because of AI is actually not likely.

If a lecturer at university sets a task for 100 students (say, write an essay about the factors that led to the First World War), there will be clear and glaring similarities in the way points are made and explained if many students use ChatGPT. Yes, a student might rewrite or paraphrase ChatGPT, but low-effort copy and paste is going to be very obvious, because ChatGPT's model cannot produce an entirely unique approach to the task every time it is asked.

I know there are weights and parameters that can be adjusted, so there is some variety available, but I think it's better to think of the LLM as an additional (all-knowing) person you can consult. If everyone consults that same person for an answer to that assignment, it's trivial to detect.

raincole · a year ago
If I were the students in the OP's story, the lesson I'd have learnt would not be "I have to be a better English writer". It would be "I have to cheat better next time so I won't get caught".
jiggawatts · a year ago
The current equivalent is "just copy the Wiki article".

I've started to notice the low-effort YouTube videos where the arguments / points / facts are presented in the same sequence as the matching article on Wikipedia.

specproc · a year ago
I've found models to be incredibly fixed and repetitive for creative writing. I have this project based around a cyberpunk city, and have been experimenting with LLMs for content generation.

The same names and themes continually crop up, despite prompting for variations.

Aeolun · a year ago
I find it to work pretty well if I both feed it a bunch of context about the story and characters, and give it a summary of what has to happen in the next three paragraphs.