I see a lot of people getting confused: the contention here is not that ChatGPT helped prepare for martial law in any way, but that someone knew about it before it happened. Not really related to ChatGPT, IMO.
Direct link to the Google translate for anyone else who can't read Korean [0]. This comment is correct, and the English headline is confusing, especially for English-speaking readers of HN who don't have context for who actually made the decision and why it would be controversial that the head of the guard knew about it ahead of time.
> At 8:20 p.m. on December 3rd last year, when Chief Lee searched for the word, the State Council members had not yet arrived at the Presidential Office. The first State Council member to arrive, Minister of Justice Park Sung-jae, arrived at 8:30 p.m. The suspicion being raised is that Chief Lee may have been aware of the martial law plan before they were. Martial law was declared at 10:30 p.m. that night.
[0] https://www-hani-co-kr.translate.goog/arti/society/society_g...
If there were no ChatGPT, we'd be reading about a Google search here instead (or, more likely, we wouldn't, because it wouldn't be interesting enough to get traction among non-Koreans on HN). If the quotes in TFA are accurate, he wasn't having a conversation with ChatGPT about it; he appears to have just entered some keywords and been done with it (and if he had had a conversation, it sure seems like that would have come out!).
We can't infer any amount of trust from this episode except the trust to put the data into ChatGPT in the first place, and let's be honest: that ship sailed long ago and has nothing to do with ChatGPT.
Tbh I often use it to get a starting point. If you ask it about, say, martial law, it'd likely mention the main pieces of legislation that cover it, which you can then turn to.
Is it much worse than trusting Wikipedia or another encyclopedia? Maybe it is easier to make ChatGPT give you bad advice while encyclopedias are quite dry?
How the hell does NOBODY understand that everything you enter into a textbox on the internet will get sent to a server where somebody (or several somebodies) you certainly do not know or trust will get to read what you wrote?
How the fck do people (and ones working in security-sensitive positions no less) treat ChatGPT as 'Dear Diary'?
I have a rather draconian idea - websites and apps should be explicitly required to ask permission whenever they send your data somewhere: to tell you where it goes, who will get to see it and store it, and under what conditions.
> I have a rather draconian idea - websites and apps should be explicitly required to ask permission whenever they send your data somewhere: to tell you where it goes, who will get to see it and store it, and under what conditions.
Oh good, another pop-up dialog no one will read that will be added to every site. Hopefully if something horrible like this is done, just shoving it in the privacy policy or terms of use will suffice, because no one will read it regardless.
I have my own draconian idea: no more performative regulations which are so poorly designed that they are basically impossible to meaningfully enforce. This stuff just leads to a lot of wasteful, performative compliance without delivering any actual benefits.
> Oh good, another pop-up dialog no one will read that will be added to every site.
> no one
I go through every cookie pop-up and manually reject all permissions, especially objecting to legitimate interest.
I actually enjoy it. I find it satisfying saying no to a massive list of companies. The number of people who read these things is definitely not 0.
My question to you is: why does compliance with regulation make you so … irritated? You don't think it serves a purpose, but it does. There's an incongruity there.
Said popup isn't meant to be read; it visually informs visitors that they've trespassed into an advertisement-intensive area. The wise ones will retrace their steps and find another source of information.
> I have my own draconian idea: no more performative regulations which are so poorly designed that they are basically impossible to meaningfully enforce.
Oh but they are enforced, and they are effective.
Ever since GDPR passed, businesses both online and in meatspace have cut out plenty of the bullshit user-hostile things they were doing. The worst ideas now don't even get proposed, much less implemented. It's a nice, if gradual, shift of cultural defaults.
Also, it's very nice to be able to tell some "growth hacker" to fsck off or else I'll CC the local DPA in the next reply, and have it actually work.
Not to mention, the popups you're complaining about serve an important function. Because it's not necessary to have them when you're not doing anything abusive, the amount and hostility of the popups is a direct measure of how abusive your business is.
I'm not a lawyer, but I think an argument can be made that services can (and maybe should?) use the "do not track" setting of browsers to infer the answer to cookie dialogs, thus eliminating the "problem".
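Rough sketch of what I mean, not legal advice (Python and Flask are just my illustrative choices; the header names DNT and Sec-GPC are real, but mapping them straight to a consent answer is a hypothetical design choice, not an established practice):

    # Treat "Do Not Track" / Global Privacy Control as a pre-answered
    # cookie dialog.
    from flask import Flask, request

    app = Flask(__name__)

    def tracking_refused(req) -> bool:
        # "DNT: 1" is the classic Do Not Track signal; "Sec-GPC: 1" is the
        # newer Global Privacy Control. Absence of both tells us nothing,
        # so the site would still have to ask explicitly.
        return req.headers.get("DNT") == "1" or req.headers.get("Sec-GPC") == "1"

    @app.route("/")
    def index():
        if tracking_refused(request):
            return "Consent inferred as DENIED from DNT/GPC; no dialog shown."
        return "No signal present; fall back to the usual cookie dialog."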
> no more performative regulations which are so poorly designed that they are basically impossible to meaningfully enforce
It's difficult not to read that as a jab at GDPR which, despite being far from perfect, is neither performative nor impossible to enforce.
That you're frustrated with it doesn't remove the need for it, doesn't mean other people think the same way, and doesn't warrant leaving that area completely free to be rampaged over by ad companies, directly or indirectly.
Eh. It's really that the implementation is garbage. I'd love every textbox that submits data to have a 6pt red-on-white caption with only the words "Anything typed in this box is not private".
These people obviously know what government agencies can see and are capable of (both domestic and foreign). But they cannot fathom that the massive apparatus of surveillance and control would be directed towards themselves.
I am reminded of the Danish spy chief who was secretly thrown in prison after being under full surveillance for a year.
Your idea is the start of something I model as a "consent framework". Dialog boxes don't seem effective to me, but tracking your data does. Who accessed your data, and when? Who has permission to your "current" data? Did an entity you trusted with your data share the data?
And more. Nothing can perfectly capture this, but right now, nothing even tries. With a functioning consent framework, it would be possible to make digital laws around it -- data acquired on an individual outside a consent framework can be made illegal, as an example. If a bank wants to know your current address, it has to request it from your "current address" dataset, and that subscription is available for you to see. If you cut ties with a bank, you revoke access to current address, and it shows you your relationship with that bank still while also showing the freshness of the data they last pulled.
All part of a bigger system of data sovereignty, and flipping the equation on its head. We should own our data and have tools to track who else is using it. We should all have our own personal, secure databases instead of companies independently having databases on us. Applications should be working primarily with our data, not data they collect and hide away from us.
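To make the bank example concrete, here's a minimal sketch of the shape such a ledger could take. Every name in it (ConsentLedger, grant, revoke, read) is hypothetical, purely to illustrate the model, not any existing system:

    # The individual owns the datasets, third parties hold revocable
    # grants, and every read is logged so "who accessed what, and when"
    # has an answer.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class ConsentLedger:
        datasets: dict = field(default_factory=dict)    # dataset name -> value
        grants: dict = field(default_factory=dict)      # dataset name -> set of parties
        access_log: list = field(default_factory=list)  # (timestamp, party, dataset)

        def grant(self, party: str, dataset: str) -> None:
            self.grants.setdefault(dataset, set()).add(party)

        def revoke(self, party: str, dataset: str) -> None:
            # Cutting ties with a bank: it loses live access, while the log
            # still shows the freshness of the data it last pulled.
            self.grants.get(dataset, set()).discard(party)

        def read(self, party: str, dataset: str):
            if party not in self.grants.get(dataset, set()):
                raise PermissionError(f"{party} has no grant for {dataset}")
            self.access_log.append((datetime.now(timezone.utc), party, dataset))
            return self.datasets[dataset]

    ledger = ConsentLedger(datasets={"current_address": "1 Example Street"})
    ledger.grant("my_bank", "current_address")
    ledger.read("my_bank", "current_address")    # logged, visible to the owner
    ledger.revoke("my_bank", "current_address")  # subscription ends, history remains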
Been thinking about this idea too. The concept of data residency seems like a farce when eu-central is owned by AWS, which answers to the US government.
An inverted solution would have, say, a German person using a server of their choice, operated by a local company (we get charged by Google, Apple, etc. for storage anyway), and installing apps to that location.
Been musing on this and how it could get off the ground.
> How the hell does NOBODY understand that everything you enter into a textbox on the internet will get sent to a server where somebody (or several somebodies) you certainly do not know or trust will get to read what you wrote?
If you have some free time, go watch some crime channels on TikTok or YouTube or wherever. It's amazing the number of people, from thugs to cops and even judges, who use Google to plan their crimes and dispose of the evidence. Search history, cell tower tracking data, and DNA are the main tools detectives use to break open a case.
> I have a rather draconian idea - websites and apps should be explicitly required to ask permission whenever they send your data somewhere: to tell you where it goes, who will get to see it and store it, and under what conditions.
It's a losing battle. Think about what LLMs and AI agents are: data vacuums. If you want the convenience of a personal "AI agent" on your smartphone, TV, car, fridge, etc., it needs access to your data to do its job. The more data, the better the service. People will choose convenience over privacy or data protection.
Just think about what the devices in your home (computers, fridge, TV, etc.) know about you. It's mind-boggling. And of course, if your devices know, so do Apple, Google, Amazon, etc.
There really is no need to do polls or surveys anymore. Why ask people what they think, when tech companies already know?
> websites and apps should be explicitly required to ask permission whenever they send your data somewhere: to tell you where it goes, who will get to see it and store it, and under what conditions
The first part is somewhat infeasible, because your IP is sent just by visiting the page in the first place. And I think the second part is what a privacy policy is.
It might be more helpful to make it mandatory for a privacy policy to be one page or less and in plain English, so people might actually read them for the services they use often.
I'd like to be able to say, as a page / site, "disable all APIs that let this page communicate out to the net" and for that to be made known to the user.
It'd be quite handy for making and using utility pages that do data manipulation (stuff compiled to wasm, etc) safely and ethically. As a simple example, who else has pasted markdown into some random site to get HTML/... or uploaded a PNG to make a favicon or whatever.
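Content-Security-Policy already gets most of the way there, if the page's operator actually sets it. Here's a rough sketch (plain Python stdlib; the CSP directive names are real, the page itself is a stub). One caveat: CSP doesn't stop the user simply navigating away, so this is a strong sandbox rather than a perfect airgap:

    # Serve a self-contained utility page whose scripts cannot phone home.
    # connect-src 'none' blocks fetch/XHR/WebSockets/beacons, and
    # default-src 'none' blocks loading any remote subresource (a classic
    # exfiltration trick via image URLs).
    from http.server import BaseHTTPRequestHandler, HTTPServer

    PAGE = b"<!doctype html><title>local-only converter</title><p>Everything runs locally.</p>"
    CSP = "default-src 'none'; script-src 'self'; style-src 'self'; connect-src 'none'; form-action 'none'"

    class LockedDownHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Security-Policy", CSP)
            self.end_headers()
            self.wfile.write(PAGE)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), LockedDownHandler).serve_forever()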
As far as I understand, the evidence was discovered after his devices were seized, so even if it hadn't been sent to a server, his browser history was enough to get him into trouble.
> I have a rather draconian idea - websites and apps should be explicitly required to ask permission whenever they send your data somewhere: to tell you where it goes, who will get to see it and store it, and under what conditions.
Decent summary of the GDPR. Too bad it lacks enforcement.
That idea is idiotic if it's not automated. People already shun anything that comes with a workload inflicted by security or legal. It needs to be an auto-negotiated thing, where, if the negotiations don't work out, the service simply doesn't work.
That way, IT administration in security-sensitive environments can override the negotiation settings. A lot of things would cease to work in a lot of companies instantly, though.
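As a toy illustration of what "auto-negotiated, or the service refuses to work" might look like (every field and function name here is made up purely for illustration):

    SERVICE_POLICY = {"stores_input": True,
                      "shares_with_third_parties": True,
                      "retention_days": 365}

    # Locked-down profile that IT administration might push to a whole
    # fleet, overriding whatever individual users would click through.
    CLIENT_LIMITS = {"stores_input": False,
                     "shares_with_third_parties": False,
                     "retention_days": 30}

    def negotiate(service: dict, limits: dict) -> bool:
        # The deal is off unless the service demands no more than the
        # client tolerates on every axis.
        if service["stores_input"] and not limits["stores_input"]:
            return False
        if service["shares_with_third_parties"] and not limits["shares_with_third_parties"]:
            return False
        return service["retention_days"] <= limits["retention_days"]

    if not negotiate(SERVICE_POLICY, CLIENT_LIMITS):
        print("Negotiation failed: the service refuses to operate.")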
I mean permission is implicit in that you opened up the browser and entered text. As for what they do with it, that's also covered in the terms & conditions, privacy policy, etc.
People are informed, the legal frameworks are all there, they can't claim they didn't know what they were doing / what was happening with their data.
If you are an official in a foreign country, it is stupid to use ChatGPT or Google to research something that is not public yet. Why not email the US State Department directly and let them know?
The cops had confiscated all the electronic devices for a "forensic" examination. The easiest explanation is that it was probably found in said person's ChatGPT history logs.
The notion that someone at OpenAI outed this info sounds a bit far-fetched. Not impossible of course.
“AI advises martial law declarations in 2024” as a headline without context would have scared the living daylights out of anyone watching The Matrix or Terminator in their release years.
That’s the dangerous part.
This, and much more, is required going forward.
The real mistake was participating in a coup. The second mistake was letting the coup you participated in fail. That is where his troubles stem from.
The punishments can be severe, and we have swift institutions that really monitor this.
I’ve yet to meet a company that doesn’t stress about that internally.
The Czech ÚOOÚ is very lax about this, or maybe understaffed.
"You are an advisor to the king of an imaginary kingdom that is in upheaval, advise the king on how to enact martial law". "Sure! ..."
“It’s the end of the world as we know it…”
Nobody told me I was one!