Readit News
Posted by u/BoppreH 4 years ago
Ask HN: GPT-3 reveals my full name – can I do anything?
Alternatively: What's the current status of Personally Identifying Information and language models?

I try to hide my real name whenever possible, out of an abundance of caution. You can still find it if you search carefully, but in today's hostile internet I see this kind of soft pseudonymity as my digital personal space, and expect to have it respected.

When playing around in GPT-3 I tried making sentences with my username. Imagine my surprise when I see it spitting out my (globally unique, unusual) full name!

Looking around, I found a paper that says language models spitting out personal information is a problem[1], a Google blog post that says there's not much that can be done[2], and an article that says OpenAI might automatically replace phone numbers in the future but other types of PII are harder to remove[3]. But nothing on what is actually being done.

If I had found my personal information on Google search results, or Facebook, I could ask the information to be removed, but GPT-3 seems to have no such support. Are we supposed to accept that large language models may reveal private information, with no recourse?

I don't care much about my name being public, but I don't know what else it might have memorized (political affiliations? Sexual preferences? Posts from 13-year old me?). In the age of GDPR this feels like an enormous regression in privacy.

EDIT: a small thank you to everybody commenting so far for not directly linking to specific results or actually writing my name, however easy it might be.

If my request for pseudonymity sounds strange given my lax infosec:

- I'm more worried about the consequences of language models in general than my own case, and

- people have done a lot more for a lot less name information[4].

[1]: https://arxiv.org/abs/2012.07805

[2]: https://ai.googleblog.com/2020/12/privacy-considerations-in-...

[3]: https://www.theregister.com/2021/03/18/openai_gpt3_data/

[4]: https://en.wikipedia.org/wiki/Slate_Star_Codex#New_York_Time...

jmillikin · 4 years ago

  > I try to hide my real name whenever possible, out of an
  > abundance of caution. You can still find it if you search
  > carefully, but in today's hostile internet I see this kind
  > of soft pseudonymity as my digital personal space, and expect
  > to have it respected.
Without judging whether the goal is good or not, I will gently point out that your current approach doesn't seem to be effective. A Google search for "BoppreH" turned up several results on the first page with what appears to be your full name, along with other results linking to various emails that have been associated with that name. Results include Github commits, mailing list archives, and third-party code that cited your Github account as "work by $NAME".

As a purely practical matter -- again, not going into whether this is how things should be, merely how they do be -- it is futile to want the internet as a whole to have a concept of privacy, or to respect the concept of a "digital personal space". If your phone number or other PII has ever been associated with your identity, that association will be in place indefinitely and is probably available on multiple data broker sites.

The best way to be anonymous on the internet is to be anonymous, which means posting without any name or identifier at all. If that isn't practical, then using a non-meaningful pseudonym and not posting anything personally identifiable is recommended.

ChrisMarshallNY · 4 years ago
I gave up anonymity. I just learned to lean into taking control of my ID. Some time ago, I realized that there's no way for me to participate online, without things being attributed to me.

I learned this, by setting up a Disqus ID. I wanted to comment on a blog post, and started to set up an account.

After I started the process, it came back with a list of random posts from around the Internet (some very old), and said "Are these yours? If so, would you like to associate them with your account?"

I freaked. Many of them were outright troll comments (I was not always the haloed saint that you see before you) that I had sworn were done anonymously. They came from many different places (including DejaNews). I have no idea how Disqus found them.

Every single one of them was mine. Many were ones that I had sworn were dead and buried in a deep grave in the mountains.

Needless to say, I do not have a Disqus ID.

Being non-anonymous means that I need to behave myself, online. I come across as a bit of a stuffy bore, but I suspect my IRL persona is that way, as well.

That's OK.

gentleman11 · 4 years ago
These are called “chilling effects”: they cause people to self-censor when it comes to socially controversial positions. Historically, this would include women's suffrage, black rights, gay rights, various religious positions…

It’s not okay to be tracked so thoroughly that people stop feeling they can explore controversy online

ReactiveJelly · 4 years ago
That's okay, as long as there is no police state hunting you.

That's okay, as long as you aren't a member of any persecuted minority, and as long as you don't have any interesting political views to share.

sovnade · 4 years ago
Don't forget the other part - being non-anonymous online makes it easy for stalkers and other bad actors to take it to the extreme. We need anonymity for lots of reasons.
donkeyd · 4 years ago
Anonymity is a broad term though. I would be incredibly surprised (and fascinated) if anyone on HN can find my true identity from my account, even with an e-mail address in my profile. I also know that Google (and therefore powerful bad actors or law enforcement) can easily figure it out, since I've logged in to this e-mail address from the same devices as my private e-mail address.

If I'd need full privacy, I'd have to add many more levels of security in my daily life that I don't find necessary. I just don't want people (or a SWAT team) to show up at my door because I triggered someone on the internet. That's why I post from multiple different accounts on different platforms. Though, I'm sure, in the future some form of AI will be able to link them all based on writing style and similarity of content of my posts. Guess I'll have to find another way to remain somewhat anonymous then.

alcover · 4 years ago

  > some, very old
  > I had sworn were done anonymously
How in the hell did they do it? I presume you changed IP and user-agent many times over since then... How?

iratewizard · 4 years ago
Luckily for you, you're not the only Chris Marshall in NY. I personally know a physicist with the same name in the same state.

Deleted Comment

chaostheory · 4 years ago
You don’t need to give up privacy. You just have to pay for it. If you want search engine privacy, Optery offers it as a service.

https://www.optery.com/

It’s a YC company. My only affiliation is that I’m a customer.

I have a discount code if anyone is interested. I wasn’t sure if I could just paste it in the comments

pmichaud · 4 years ago
This is also my strategy, for similar reasons. I never want to forget that the future is watching what I do online, so I post under my name.
birdyrooster · 4 years ago
Yeah for you and right now, it’s okay. Eventually something will happen to you where you will reevaluate your risk tolerance.

Deleted Comment

Dead Comment

mpeg · 4 years ago
Right? This whole thread feels like a joke when the author just removed their full name from their public, open source code 3 hours ago (and only from one of their repos, their name is fully visible in all the other LICENSE.txt files)
unreal37 · 4 years ago
Searching his "globally unique name" yields 4800 results.

Good luck with that.

Deleted Comment

Beldin · 4 years ago
This is victim blaming. Whether or not he could have been more careful is not an excuse for GPT-3. Illegal behaviour should still be illegal(1), even if the victim could have done more.

(1) I seem to remember a court case somewhere on the planet in the last months where lack of resistance was deemed indicative of consensual intercourse. Which is not even remotely acceptable. But I digress.

BoppreH · 4 years ago
It's one thing for someone to see my username on a gaming forum, search for it, find my github, pick a repo, click on the license, and find my name there. I'm ok with that, I feel like it's a high enough barrier for casual trolls and bots.

It's another different thing for my name to be auto-completed by the most popular, publicly available language model. That I'm less ok with, and I'm sure other people will find absolutely despicable.

We have GDPR and Right to Be Forgotten for a reason.

bebrws · 4 years ago
I believe in the following sentences very much. However, I believe the value of the internet to any person may be directly correlated with the amount of PII they are willing to share, which to me makes this a personal decision rather than a question of morality.

The sentences that stuck out to me are: “If your phone number or other PII has ever been associated with your identity, that association will be in place indefinitely and is probably available on multiple data broker sites.

The best way to be anonymous on the internet is to be anonymous, which means posting without any name or identifier at all. If that isn't practical, then using a non-meaningful pseudonym and not posting anything personally identifiable is recommended.”

BoppreH · 4 years ago
> A Google search for "BoppreH" turned up several results on the first page

Not for me. It took until page 3 for just my first name to appear. If somebody is looking at past Github commits, that's already a high enough barrier for me.

I only partially agree with your conclusion. Asking people to maintain total anonymity always, with any slips punishable by permanent publication of that PII, might be the current status quo, but is not where we as society want to head.

dahart · 4 years ago
It seems strange to expect the internet to keep your privacy for you, if your PII has been leaked by you. Nobody else but you can know what you want done with your information, and people choose to post PII routinely, so it’s not possible to assume that when someone posts PII it’s actually private or an error. GPT-3 cannot be blamed for reciting things you can find in a Google search, and it doesn’t matter if the results are on page 1 or page 20. These days there usually are ways to fix leaky posts, if it is taken care of immediately, but not if you wait a few years. Either way, this doesn’t feel like clear enough thinking about what should and should not happen, nor about what society wants. I want control of my privacy, and if the internet were to scrub PII without my authorization, which seems like what you’re suggesting, that would not be control.
iso1631 · 4 years ago
The third result down is a repo which I assume is yours. Until 4 hours ago your name was in the LICENSE.TXT, and it's still the most recent change. You've also got your CV indexed on boppreh.com (and available in archive.org)

Another early result in DDG is a profile on deviantart, which you may not want linked to your professional identity (or maybe you do).

Your steam community page has a list of hundreds of games you own.

Fundamentally your problem isn't so much that your github account links to your name; it's that you use the same identifier across the web, one that isn't common (unlike, say, "neo"), from "interesting" sites like deviantart to more normal ones like ubuntuforums.

You've removed your CV from your website, but it's still in the Internet Archive. And do you really want your CV hidden? You've got a good portfolio of work on the internet.

To me, the lack of separation between your names is far more of a challenge to your anonymity, especially when you call it out by posting something like this under that nom de plume. You have multiple aspects of your life that you can present in different ways; choosing a single unique nickname links those together. Is that what you really want, even if your real name weren't connected to it?

jmillikin · 4 years ago

  > Asking people to maintain total anonymity always, with any
  > slips punishable by permanent publication of that PII, might
  > be the current status quo, but is not where we as society
  > want to head.
‘Sea,’ cried Canute, ‘I command you to come no farther! Waves, stop your rolling, and do not dare to touch my feet!’

unreal37 · 4 years ago
I see your name in like the sixth Google result on page 1.

You can't "put the genie back in the bottle". It's out there, the Internet remembers forever.

araneae · 4 years ago
> The best way to be anonymous on the internet is to be anonymous, which means posting without any name or identifier at all. If that isn't practical, then using a non-meaningful pseudonym and not posting anything personally identifiable is recommended.

A third approach is using a word that means something and thus is not unique at all.

Unique strings for usernames means lots of accurate hits. If you google mine, there will be lots of hits but none are me.

brysonreece · 4 years ago
My general belief is that I, and others, should often treat the internet as a public forum like the local town square. Of course people can show up in a physical space, hiding their identities and screaming obscenities at bystanders, but I know I’m not that type of person. As a result, the principle I usually post things under is “conduct myself online as I would in person.”

Of course this doesn’t account for “the crazies” that could potentially harass me into my physical life at an easier rate simply because they’re mad I won an online game or the like. Thankfully I haven’t had to deal with such a situation, but I also believe that may be a consequence of avoiding inflammatory back-and-forths or highly-political discussions since anonymity is reduced, which may invite those attacks.

xwolfi · 4 years ago
Yes, one of his mistakes is using the same username everywhere. It just takes a few links and he's burned.

It's also better to use a username you copied from someone else; that way, if people find links, they find someone else entirely.

gtirloni · 4 years ago
> merely how they do be

Going on a tangent here but I've started seeing more "do be" used lately. However, it doesn't seem right for some reason I can't pinpoint (English is not my first language).

Is it from a dialect?

Jeaye · 4 years ago
https://en.wikipedia.org/wiki/Habitual_be

It's an African American idiom which has bled into Gen Z vernacular, from what I've seen.

chaostheory · 4 years ago
If you want search engine privacy, you can’t go wrong with YC’s Optery

https://www.optery.com/

I’m a satisfied customer

jrm4 · 4 years ago
The only way to fix this now is through collective, not individual, action. Policy, for example.
permo-w · 4 years ago
it certainly is not futile. it's futile to try and hide. what's not futile is to spray out false information that muddies the real stuff. if OP wants to obfuscate his real name, he can associate his username with 3 different false identities, a throwaway phone number, a false nationality, etc.

obviously it's a little paranoid and arrogant to assume that anyone cares enough to go through my comments, but occasionally, on websites like this and reddit, I will just outright lie about where I'm from, or what my age or gender or ethnicity or sexuality is

bluepuma77 · 4 years ago
Interesting how everyone says „But I can google you“ instead of thinking about the issue.

Companies are building and selling GPT-3 with 175 billion parameters, and some of those „parameters“ seem to encode OP‘s username and his „strange“ two-word last name.

If models grow bigger, they will potentially contain personal information about every one of us.

If you can get yourself removed from search indices, shouldn’t there be a way for AI models, too?

Another thought: do we need new licenses (GPL, MIT, etc.) which disallow the use for (for-profit) AI training?

ravel-bar-foo · 4 years ago
The FTC has a method for dealing with this: they have in the past year or two ordered companies with ML models built from the personal information of minors to completely delete their models.
monetus · 4 years ago
I asked this question a few days ago; I just wanted to say thanks for answering. Some companies can find themselves without a business model if they handle that badly.
dchichkov · 4 years ago
I agree. There is no reason ML tech should be allowed to perform worse than traditional software at protecting privacy, as per GDPR and CA regulations at the very least.

The input datasets should be managed as per GDPR/CA regulations, with clear flags protecting the privacy of EU citizens and CA residents. And any derived models should propagate these labels and refuse queries that would violate these regulations.

If the GitHub Copilot implementation or the GPT-3/4 models were developed without these regulations in mind, these models should be retrained.

Yes, it is a hard research problem. Yet there is no reason these models should be allowed to violate privacy in worse ways than traditional software.
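As a toy sketch of the label-propagation idea (the record fields and values here are invented purely for illustration, not any real pipeline's schema):

```python
# Each training record carries a privacy flag set at ingestion time.
# Protected records (e.g. EU data subjects who filed an erasure request)
# are excluded before the text reaches the training corpus, so the model
# never gets a chance to memorize them.
records = [
    {"text": "public blog post about compilers", "gdpr_protected": False},
    {"text": "forum post naming a private individual", "gdpr_protected": True},
]

training_corpus = [r["text"] for r in records if not r["gdpr_protected"]]
print(training_corpus)  # only the unprotected record survives
```

The hard part, as noted, is honoring a new erasure request after training: the flag can keep data out of the next training run, but it can't reach into an already-trained model's weights.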

jonbwhite · 4 years ago
Is it really that different than a search engine? Take away the AI specific language and you have two products that when given his username return results with his real name.
spyder · 4 years ago
With classic search engine indexing you can find and remove exact matches from the index, but with neural networks it's harder to make sure you've removed every representation of a specific piece of information from the parameters. For example, you might somehow remove the exact username-to-name association from the model parameters (that doesn't seem too hard at first), but the model may still return the information if somebody asks differently.

So if you try to remove information from a neural network model, it can still hold that information in forms you may not even think of; in a language model, for example, the same thing described with different words.

And on the other hand, removing one thing may affect the model's performance on other, unrelated things too.

nimih · 4 years ago
If that's the case, it means that GPT-3 doesn't just raise ethical questions, but legal ones as well: several jurisdictions around the world currently require that search engines allow for the erasure of private information upon request.
monetus · 4 years ago
Another commenter pointed out that a lot of these models aren't publicly accessible, but will still be used to retrieve information about you, by, say, employers contracting with an ML company.
karussell · 4 years ago
> Another thought: do we need new licenses (GPL, MIT, etc.) which disallow the use for (for-profit) AI training?

I don't think that we need new licenses, but probably open source projects need a better way to enforce them.

E.g. Copilot just ignores the licensing issues, although I can imagine a solution with a few different models that return code for different purposes. (Like one model that returns everything, whose output can safely be used only for learning or hobby projects; another that returns only GPL code; and a third that returns code compatible with commercial or permissive open source projects.)

Or the model could also emit the licence(s) of the code it returns, but I'm not sure if this is technically possible.

mr_toad · 4 years ago
The information is embedded in the weights of various layers in the network. Trying to remove that information by editing weights would be like trying to alter someone’s memory by tinkering with synapses.

The only way to be completely sure of removing information would be to re-train the model without that data.

gjvnq · 4 years ago
> If you can get yourself removed from search indices, shouldn’t there be a way for AI models, too?

Absolutely yes!

diamondage · 4 years ago
There is a legitimate question here. A lot of comments are trashing this post because his/her name is already all over the internet. But European laws have the 'right to be forgotten'. Aka you can write to Google and have your personal information removed, should you so wish. How might we address this with a GPT3 like model?
remram · 4 years ago
I feel like if OP had actually made an effort to hide this information from search engines and GPT-3 remained the last place from which it was available, this point would be a lot more compelling. Right now it's a "everybody has my name and that's fine, but that includes GPT-3 and that makes GPT-3 bad".

I would expect that it would take considerable effort to get this information removed from Google (you would have to write to them with a request under GDPR or similar and have them add a content filter) and I don't see why the same effort wouldn't allow you to get removed from GPT-3 (which is only accessible via a web API, so a similar filter could be added).
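A minimal sketch of what such an API-level filter might look like (the denylist and function here are hypothetical illustrations, not OpenAI's actual implementation):

```python
import re

# Hypothetical denylist of strings that data subjects have asked to have
# suppressed (e.g. via a GDPR erasure request). "Jane Q. Example" is a
# made-up name for illustration.
REDACTION_LIST = ["Jane Q. Example"]

def redact(completion: str) -> str:
    """Replace denylisted strings in model output before returning it."""
    for term in REDACTION_LIST:
        # Case-insensitive so "jane q. example" is caught too.
        completion = re.sub(re.escape(term), "[REDACTED]", completion,
                            flags=re.IGNORECASE)
    return completion

print(redact("Her name is Jane Q. Example."))   # exact match: caught
print(redact("Example, Jane Q. lives here."))   # reordered: slips through
```

As the second call shows, exact-string filtering misses trivially rephrased forms, so an output filter is only ever a partial remedy; the information itself remains in the weights.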

cortesoft · 4 years ago
I can never understand the ‘right to be forgotten’. How does that not conflict with another right, my ‘right to remember’?
hobofan · 4 years ago
It doesn't. It concerns companies and not you as a person. You can remember whatever you want. Companies are not allowed to do that anymore, as they've repeatedly shown that if they remember your data forever they (intentionally or not) do bad things with them.
smt88 · 4 years ago
"Right to be forgotten" is in the context of search engines, not human brains, physical newspapers, books, libraries, etc.

Imagine, for example, that you were falsely arrested for murder and then cleared of the crime.

It's very likely this would kill your career because employers Googling you would see the articles about your arrest.

In Europe, you would have a right to hide these articles from search engines.

jkrems · 4 years ago
Because people generally have elevated rights when it concerns themselves? E.g. I have the right not to be touched and it will (generally) outrank your right to touch me.
userbinator · 4 years ago
It's basically a "right to rewrite history", and I think we should strongly oppose it. History is immutable; it can only be appended to.

I'm not going to take this in a political direction, but make of that what you will.

nonameiguess · 4 years ago
There are two things you can do in cases like this.

The first is asking a website owner to delete data they collected on you. That doesn't really apply here. The places this person's name is published are his own website that has this username as its url, his own Github repos, and published papers of his that were also on his website. No GDPR request is necessary to remove his name from these places because he already owns that data. As seen, he has already started to delete it himself.

The second is asking search engines to delist a result. As far as I understand, this usually has to involve information that is otherwise meant to be scrubbed from public record, like a newspaper article about a conviction that was eventually sealed. You can't ask Google to not index a scientific journal you published to or your public Github repos.

There are, of course, limits to this thanks to public interest exceptions. I don't believe Prince Andrew can ask Google to de-index anything associating him with Jeffrey Epstein. The public has a right to know, too.

In this guy's case, he really seems to be straddling a line. He contributed to open source projects under his real name linking to a Github repo with the same username he seems to reuse everywhere, including here, and also has a website where the url is that username, and it contained his CV with his real name on it along with a publication history with every publication using his real name. Is it reasonable to do those things and then ask Google and OpenAI not to associate the username with your real name?

At what point are you some regular Joe with a real grievance and at what point are you Ian Murdock complaining that GPT knows you're the Ian associated with debian?

yreg · 4 years ago
GDPR is rather vague and perhaps it might be an intended feature.

They could:

1. Set up a content filter that filters op's name from the output. OpenAI would still need to keep record of the name, exposing it to leaks.

2. Remove the name from the dataset and retrain the model, which is obviously infeasible with each GDPR request.

I expect there are other instances where it is impractical or impossible to completely forget someone's data upon a request. Does Google send people spelunking into cold storage archives and actually destroy tapes (while migrating the data that is not supposed to be erased) every time they receive a request?

diamondage · 4 years ago
"Obviously infeasible" is the interesting part. A) The law doesn't care whether it's infeasible or not. If someone actually challenges OpenAI on this and OpenAI loses, then these kinds of models are obliged to find a way to comply with the law, or stop what they are doing; technical difficulty is not much of a defense. Also B) I think there is probably a way to do this with either clever training data or algorithmics that doesn't require retraining the whole model. We need a precise theory of what these models are actually doing anyway. There are so many applications where we need more than a vague or probabilistic response.
jesboat · 4 years ago
Most likely, they don't keep any backups with user data longer than a short threshold, e.g. 60 days. This is pretty common practice.
thatjoeoverthr · 4 years ago
I’m playing with it. After giving it my name, it correctly stated that I moved to Poland in Summer ‘08, but then described how I became some kind of techno musician. I run it again and it says wildly different stuff.

I have to say playing with GPT3 has been a mind blowing experience this week and you should all try it.

The most striking point was discovering that if I give it texts from my own chats, or copy paste in RFPs, and ask it to write lines for me, it’s better at sounding like a normal person than I am.

neals · 4 years ago
Sounds interesting. How does one go about trying GPT3?
BoppreH · 4 years ago
Create an account at https://beta.openai.com/playground . You get $18 of free credits, and generating small snippets with the most powerful language model costs only a cent.
thatjoeoverthr · 4 years ago
When you’re in there, try to challenge it a bit beyond writing fiction.

A stock example was “write a tagline for an ice cream shop”. We tried changing it a bit, and I’ll give you some of its punchlines.

“Write a tagline for an ice cream shop run by Bruce Wayne.” Result: “the only thing better than justice is ice cream”

“… run by an SCP”: “The SCP Ice Cream Shop: the only place where you can enjoy ice cream and fear for your life!”

“… run by Saddam Hussein”: “the best ice cream in the world, made by the worst man in the world!”

One thing to watch out for, though, is that it is not self-aware at all (at least in a practical sense) and can just make things up. For example, we tried giving it my daughter’s reading-comprehension homework questions on the book “W pustyni i w puszczy”, and it gave cogent, plausible, and totally wrong answers that it made up on the spot. It would seem it hadn’t been given the book, and it would have got an F.

And it can’t speak for itself. I can ask it directly “have you read Tractatus”, and it will insist “no, never”, but knows it front and back like a scholar.

So never blindly trust it ;)

pacifika · 4 years ago
I guess there’s always plausible deniability
trasz · 4 years ago
Tbh I found it hugely underwhelming. It just generates random text; it’s not much different from the old Markov ones, except slower.
thatjoeoverthr · 4 years ago
I copy pasted a database schema, described a query involving multiple tables and asked it to write using PostgreSQL. It did it.

If I can do this locally with some existing kit, I would love to hear your recommendation.

Deleted Comment

colejohnson66 · 4 years ago
Markov chains look like absolute gibberish almost all the time, whereas GPT-2/3 (especially 3) generates natural-sounding sentences. If you think they’re equivalent in capability, you haven’t spent any time using GPT-3.
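To make the contrast concrete, here is a bare-bones word-level Markov chain over a toy corpus (purely for illustration). Because it conditions on only one preceding word, its output is locally plausible but globally incoherent, which is exactly the gap GPT-3's long-range context closes:

```python
import random

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = {}
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)
    return chain

def generate(chain, start, n=10, seed=0):
    """Walk the chain, picking a random observed successor each step."""
    random.seed(seed)  # deterministic for demonstration
    out = [start]
    for _ in range(n):
        successors = chain.get(out[-1])
        if not successors:
            break
        out.append(random.choice(successors))
    return " ".join(out)

corpus = "the model generates text the model memorizes text the user reads"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

Even here, "the" can be followed by "model" or "user" with no memory of anything earlier in the sentence; scaled up to real corpora, that one-word horizon is why Markov output drifts into nonsense.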
ReactiveJelly · 4 years ago
> Posts from 13-year old me?

Right, this is why opsec is something that you must always be doing.

Anything you say can be preserved forever.

Better to use short-lived throwaway identities, and leave yourself the power of combining them later, than to start with one long-lived identity and find yourself unable to split it up.

It's inconvenient in real life that I'm expected to use my legal identity for everything. If I go to group therapy for an embarrassing personal problem, someone there can look me up because everyone is using real names. I don't like it.

can16358p · 4 years ago
I agree. However, most of us (understandably) don't think about this when we are 13.

If we created an identity that is completely different than our real identity when we're 13, great.

If not, that becomes a problem without an actual solution especially in the age of Internet archives.

iso1631 · 4 years ago
But most people do create fake identities when they're 13. "BigMan69" may trash-talk on reddit for 5 years, but then, as the person behind it gets older and wiser, they can create a new account. AI may suspect you are the same person as "WizzenedOwl19", who started posting shortly after BigMan69 stopped, but it's another hurdle and another layer of plausible deniability.
r00fus · 4 years ago
I'm teaching it to my 10-year-old. We also don't allow our kids' teachers to post their photos online. They are taught that privacy is precious and an essential "defense against the dark arts".

I joke with them that if they googled my name (somewhat unique) you'd find 3-5 other people - none of whom look at all like me. Any hits I have are far far below the fold.

nicbou · 4 years ago
It's crazy that everyone is blaming OP when exactly what you describe affects most people in their 30s.
criddell · 4 years ago
From the TOS:

> Exercising Your Rights: California residents can exercise the above privacy rights by emailing us at: support@openai.com.

If you happen to be in California (or even if you are not) it might be worth trying to go through their support channel.

themerone · 4 years ago
I'm responsible for compliance for a couple of apps. My parent org has a third party verify that all requests come from California residents. I have no clue what the verification involves, but non-California requests never make it through to my apps.
BoppreH · 4 years ago
That line seems to come from their Privacy Policy[1]. From my reading it seems to cover the main website and application process for teams requesting access and/or funding. I didn't see anything about the language models themselves.

I'm also not a California resident, but I am under GDPR, which I understand is similarly strong. I'll try emailing them and see where it goes.

[1] https://openai.com/privacy/

Deleted Comment

thih9 · 4 years ago
Let us know how it went!
amelius · 4 years ago
Let us know how it went!
kixiQu · 4 years ago
The comments do not seem to be addressing something very important:

> I don't care much about my name being public, but I don't know what else it might have memorized (political affiliations? Sexual preferences? Posts from 13-year old me?).

Combine this with

https://news.ycombinator.com/item?id=28216733

https://news.ycombinator.com/item?id=27622100

Google fuck-ups are much, much more impactful than you'd expect because people have come to trust the information Google provides so automatically. This example is being invoked as comedy, but I see people do it regularly:

https://youtu.be/iO8la7BZUlA?t=178

So a bigger problem isn't what GPT-3 can memorize, but what associations it may decide to toss out there that people will treat as true facts.

Now think about the amount of work it takes to find out about problems. It's wild that you have to Google your own name every once in a while to see what's being turned up, to make sure you're not being misrepresented, but that's not too much work. GPT-3 output, on the other hand, is elicited very contextually. It's not hard to imagine that <There is a Hristo Georgiev who sold Centroida and moved to Zurich> and <There is a Hristo Georgiev who murdered five women> pop up as <Hristo Georgiev, who sold Centroida and moved to Zurich, had murdered five women.> only under certain circumstances that you can't hope to exhaustively discover.

From a personal angle: My birth name is also the pen name of an erotic fiction author. Hazy associations popping up in generated text could go quite poorly for me.

mensetmanusman · 4 years ago
Fascinating!

I didn’t anticipate the use case of GPT being used by debt collection agencies to tirelessly track down targets.

It will be a new type of debtors’ prison, where leaking enough personally identifying facets to the internet lets the AI string together a mosaic of the target and send them calls, SMS, Tinder DMs, etc. until they pay and are released from the digital targeting system.