andreyk · 3 years ago
For some reason this article does not have the actual chat (just a short excerpt)... You can see more of it here https://twitter.com/michellehuang42/status/15970060064036208... and here https://twitter.com/michellehuang42/status/15970060108202106...

It's fun, but not so different from a conversation you could have with a psychic who is good at cold reading. It'd be more interesting to see what would have happened if GPT-3 was actually fine-tuned with all the journal entries, I think.
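For what it's worth, that fine-tune would have been cheap to try. A minimal sketch with the pre-1.0 openai Python library and its old FineTune endpoint (the journal.jsonl file and the prompt/completion format here are made up for illustration):

    # Sketch: fine-tuning a base GPT-3 model on journal entries with the
    # pre-1.0 openai library. File name and prompt format are hypothetical.
    import openai

    openai.api_key = "sk-..."  # your API key

    # journal.jsonl holds one entry per line, e.g.
    # {"prompt": "Journal entry, age 14:\n\n", "completion": " Dear diary, ..."}
    upload = openai.File.create(file=open("journal.jsonl", "rb"), purpose="fine-tune")

    # Kick off the fine-tune against a base GPT-3 model.
    job = openai.FineTune.create(training_file=upload["id"], model="davinci")
    print(job["id"])  # poll this id until the fine-tuned model is ready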

I do agree a lot with this conclusion: "This is the stuff I think that has the most interesting ramifications: more broadly, more immersive human / computer interface loops, from conversation with virtual therapists to in-game interactions for virtual worlds, given there is user input, AI could be used to train highly customizable responses or generate unique storylines per use."

visarga · 3 years ago
Not just GPT-3, but Stable Diffusion is also therapeutic.
pgt · 3 years ago
In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.

"What are you doing?", asked Minsky.

"I am training a randomly wired neural net to play Tic-Tac-Toe."

"Why is the net wired randomly?" inquired Minsky.

"I do not want it to have any preconceptions of how to play."

At this, Minsky shut his eyes. Sussman asked his teacher, "Why do you close your eyes?"

"So that the room will be empty."

At that moment, Sussman was enlightened.

isoprophlex · 3 years ago
I never got that one... care to explain the wisdom it is supposed to convey?
pgt · 3 years ago
The artist tweeted [^1]: "this way, i could accurately simulate what it would be like to talk to my childhood self, based on real data sources during that time period vs trying to imagine how my younger self was / how she would respond, and risk bias from projections from my current self"

The Minsky story is a comment on the ridiculousness of the idea that you can train a model on your journal to "avoid bias" and "accurately simulate" what you were like as a child.

[^1]: https://twitter.com/michellehuang42/status/15970055048693719...

bryanrasmussen · 3 years ago
Well, I would suppose it means that if you close your eyes so that the room will be empty, you have a misconception as to the state of reality: there is no actual connection between the room being empty and your eyes being closed, it just seems like there is to you.

Thus there is no actual connection between the neural net being randomly wired and it not having any preconceptions as to how to play; it just seems like that to Sussman.

I have never found these zen koan things very interesting, though. Also, I think a connection between a neural net being randomly wired and not having preconceptions is much more likely than closing your eyes emptying out a room.

EarlKing · 3 years ago
Wiring a neural network randomly does not eliminate preconceptions; rather, you now have a random set of preconceptions... which is precisely equivalent to closing your eyes in the hope that the room will be empty (which obviously it won't be).
Slix · 3 years ago
I thought he was rudely telling the novice to leave.
bsenftner · 3 years ago
This is Art because of the cunning use of ambiguity: by training on a subset she has framed the potential responses, and by presenting it as a conversation with her younger self the stage is set; all that is necessary for the idea to take flight is there. And it takes flight. This is how Art and whatever we're calling AI meet commercialism and sell clicks. What she evokes is going to generate a little industry; just watch it unfold.
itronitron · 3 years ago
I foresee a hip young startup 'diaryly.com' in the near future to help people improve their conversations with themselves.
rco8786 · 3 years ago
> fed in about 13,000 characters before reaching the maximum threshold

Wait, certainly that’s not even remotely close to enough data to train a “past self” AI on. Even if we assume that by “characters” she meant words, that’s just not very much text.
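For scale: the “maximum threshold” she hit is the model’s context window, not a training limit. A rough check with the tiktoken library (assuming the GPT-3 r50k_base encoding and the ~4,000-token window of the text-davinci models):

    # Back-of-the-envelope: why ~13,000 characters runs into GPT-3's prompt limit.
    # Assumes the r50k_base encoding used by the GPT-3 models.
    import tiktoken

    enc = tiktoken.get_encoding("r50k_base")

    journal_excerpt = "Dear diary, today I could not stop thinking about school. " * 220
    tokens = enc.encode(journal_excerpt)  # stand-in for ~13k characters of prose

    print(len(journal_excerpt), "characters ->", len(tokens), "tokens")
    # English prose runs roughly 4 characters per token, so ~13,000 characters
    # is around 3,000 tokens, which leaves little of a ~4,000-token window for
    # the chat itself. That is prompt stuffing, not training.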

krageon · 3 years ago
You assume that your personality cannot be (semi-)successfully simulated as something that only barely differs from everyone else's.
staticman2 · 3 years ago
If you creatively define "semi-successfully" then any simulation can simulate anything.
throwntoday · 3 years ago
Yeah my bullshit meter is going off. Cute fiction though.
ccleve · 3 years ago
This is a conversation a man had with his younger self. It's way, way better: https://m.youtube.com/watch?v=XFGAQrEUaeU
Tao3300 · 3 years ago
Just read the Young Michelle lines and you can see that they are very ELIZA-ish. Most of them could have been mildly sympathetic-sounding responses to any number of inputs.
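For reference, the ELIZA trick being described is just pattern-matched, content-free sympathy; a toy sketch (the rules and canned lines here are invented for illustration):

    # Toy ELIZA-style responder: mildly sympathetic-sounding replies that would
    # fit almost any input. Rules and fallback lines are made up for illustration.
    import random
    import re

    RULES = [
        (r"\bi feel (.+)", "Why do you think you feel {0}?"),
        (r"\bi want (.+)", "What would it mean to you to have {0}?"),
        (r"\bmy (\w+)", "Tell me more about your {0}."),
    ]
    FALLBACKS = ["I hear you.", "That sounds hard.", "Go on.", "How does that make you feel?"]

    def reply(text: str) -> str:
        for pattern, template in RULES:
            match = re.search(pattern, text.lower())
            if match:
                return template.format(*match.groups())
        return random.choice(FALLBACKS)

    print(reply("I feel like I never finish anything"))  # Why do you think you feel ...
    print(reply("my parents don't understand me"))       # Tell me more about your parents.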
kqr · 3 years ago
The one impressive bit ELIZA could not do is understand some of the semantics of the prompts. For example, you can ask it to ask a question, or to write a letter, and it gets the format of the response right.

But yes, I agree. I'm not sure how much the journal entries really contributed other than specific turns of phrase and general topics of discussion.

Aeolun · 3 years ago
I'm not sure. The responses the AI gives seem to me like they could have been copied from any conversation about childhood with anyone (which isn't surprising, since the base is GPT3).
tartoran · 3 years ago
Why would it, after all? It’s all mimicry without understanding of any context. It’s going to become more and more real and more convincing, but beyond that there is still nothing intelligent, just fascinating stochastic imitation.
Aeolun · 3 years ago
I’m not entirely certain humans are any different though. We’re essentially very sophisticated pattern matching machines, based on our past experience.
megablast · 3 years ago
The text she writes is vague too.
knaik94 · 3 years ago
I wonder how well social media or text message logs would work. I never journaled, but I have decades' worth of chat logs. I have conversations with some people with over 600k messages between us.

I am not familiar with the OpenAI workflow, but KoboldAI is a community open-source interface for running different AI language models locally or online. There are links to premade notebooks of open-sourced models you can run on Google Colab. [1] I think fine-tuning models offline is still too expensive, but there is some progress, and it is possible via Colab.

KoboldAI has a chat mode setting where it properly generates and continues a conversation between two people. The models I ran were the smaller ones, but there are larger models available on Hugging Face, like FB's OPT 30B. Even the smaller models I could fit into 8 GB of VRAM were coherent enough to be impressive.
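For anyone curious, a rough sketch of that kind of local setup, assuming the Hugging Face transformers library and the smaller facebook/opt-1.3b checkpoint (which fits in 8 GB of VRAM in half precision); the chat framing is the same trick KoboldAI's chat mode automates:

    # Sketch: a KoboldAI-style chat turn with a small open model, assuming
    # transformers + torch and the facebook/opt-1.3b checkpoint on a CUDA GPU.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "facebook/opt-1.3b"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name, torch_dtype=torch.float16
    ).to("cuda")

    # "Chat mode" is just a transcript the model keeps extending.
    prompt = (
        "The following is a chat between Me and Younger Me.\n"
        "Me: Do you remember what you wanted to be when you grew up?\n"
        "Younger Me:"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
    output = model.generate(
        **inputs, max_new_tokens=60, do_sample=True, top_p=0.9, temperature=0.8
    )
    print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))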

What I am surprised I haven't seen more of is chat bots based on writers who have passed away. I imagine an Aristotle chat bot would be pretty interesting and would avoid issues of copyright.

[1] https://github.com/KoboldAI/KoboldAI-Client