It's becoming a real question: if you hold any beliefs that might not be looked upon kindly by this government or future governments, maybe you shouldn't be writing anything on the internet at all. Maybe you shouldn't be texting or writing emails, either.
Over the past year, I've certainly been debating not expressing myself at all in writing. It's starting to feel very dangerous.
Do it anyway. Be assertive about your rights. They were paid for with blood.
There's a reason why "live free or die" is an expression. Your fear of death or punishment can be used to virtually enslave you. So you might as well live free, because you're going to die one way or another.
I think of Satoshi Nakamoto. He seems to have gotten away with being anonymous. But that was 10 years ago, and he did everything an individual reasonably could in terms of opsec. Even then, he’s still been narrowed down to a pretty short list of known individuals.
These days, someone who was a normal young person using the internet and social media in the 2010s has very little hope if an online mob decides to unmask them, let alone professional investigators.
If the government can do it, so can corporations. And individuals, for that matter.
There's more than just the government to be afraid of. If you're hiding from an abusive ex, they may be able to find you despite a new life under a pseudonym. No matter how many proxies you're behind, an advertiser could skip the tracking cookies and connect the dots between accounts with writing style. If you've written anything under your real name, you could get doxxed by connecting it to your "anonymous" Internet posts.
So indeed: maybe you shouldn't be writing on the Internet at all. Even if it were somehow possible to forbid the government from doing this kind of analysis, it wouldn't make you any safer.
For what it's worth, there are also some tools out there to mitigate stylometric identification. [1] Some discussion: [2][3] I do not know whether using such tools would render one's writing less interesting or artistic.
There seem to be many tools on GitHub [4] that work both with and against stylometry.
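To make concrete what these tools are up against, here is a toy sketch (my own illustration, not code from any of the linked projects) of the kind of features stylometry typically measures: function-word frequencies, sentence length, and punctuation habits.

```python
import re
from collections import Counter

# A few common function words; real stylometry systems use hundreds of features.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "was", "it"]

def style_features(text):
    """Extract a toy stylometric fingerprint from a text."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    counts = Counter(words)
    total = max(len(words), 1)
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "comma_rate": text.count(",") / total,
        # Relative frequency of each function word.
        **{f"fw_{w}": counts[w] / total for w in FUNCTION_WORDS},
    }

def distance(a, b):
    """Simple L1 distance between two fingerprints (lower = more similar)."""
    keys = set(a) | set(b)
    return sum(abs(a.get(k, 0.0) - b.get(k, 0.0)) for k in keys)
```

A real attribution system would compare a fingerprint against a large corpus of candidate authors; the anti-stylometry tools above work by pushing these numbers toward the population average.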
That's a shitty way to live your life, especially in a period when these techniques are still taking baby steps. A glass-is-always-half-empty strategy.
Maybe you are uber-important, rich, and powerful, and you actually should be concerned about this. But chances are high (and not only because of your nick 'pessimizer') that you are just making your own paranoia and depression worse over little more than nothing.
Self-censorship is very common in oppressive regimes; I saw it firsthand under Soviet/Russian oppression during the Cold War era in my own country. Even people with good intentions do pretty horrible things while desperately trying to avoid any attention they don't want.
Expressing certain thoughts is already dangerous in some contexts. Not state-actor-is-coming-to-get-you levels of danger just yet, at least for us common folks, but it's already happening and it ain't gonna get better.
It certainly is, but anonymity and pseudonymity tend to protect you unless you gain some sort of celebrity or need to communicate for your job.
Anyway, one expresses one's opinion for the sake of other people. Maybe instead of worrying about what other people think, I should concentrate on my own safety. I can do other things for people.
That's been part of my anti-censorship arguments for a very long time. There's no technological reason why your phone calls can't be monitored and controlled just like any other medium, such as Facebook, YouTube, or Twitter.
I'm gonna take this bait and list a few "beliefs" that the current U.S. government disapproves of enough that some branch might dedicate some resources to investigate. I'm not even going to bother with the obligatory "it's not my belief" since you explicitly said that's not relevant.
- The 2020 presidential election was not sufficiently investigated, despite unusual and statistically unlikely results and questionable legal/procedural changes and activities at the state and local levels in regions that benefitted the eventual winner.
- White male Republicans who advocate for stricter enforcement of immigration law and internationally accepted asylum processes, seek to reduce or prevent taxpayer-funded education/encouragement about non-generative sexual preferences and practices to prepubescent children, and who support the individual right to defend against a monopoly on violence by a potentially tyrannical government, are NOT threats to democracy nor are they extremists to be likened to domestic terrorists.
- Recognition of extremely strong correlation between prevalent cultural norms of any given socioeconomic demographic and the geographical crime/violence rates in which those same demographic groupings reside is not racism nor xenophobia when the recognition is aimed at addressing the cultural aspect (caveat: I accept that there is also strong correlation between those who believe this and those who apply this belief to everyone within said socioeconomic demographic - eg. "poor Appalachian opioid addicts are all unintelligent, violent thieves")
- Global temperature and weather phenomena have fluctuated, and at more extreme levels, since long before human intervention
I'm sure I could think of a few more but the effort to phrase things properly is not worth it for the inevitable [dead].
Part of my worry is that people who reply to me on the internet are looking for reasons to denounce me.
That being said, I've got a 12 year history here, you should search for whatever you need to see in order to tell me that if I weren't such a reprehensible person, I wouldn't have anything to worry about.
It's already something recruiters are suspicious about. I have had to justify myself and explain that I value my privacy a lot more than I care about likes and thumbs-ups.
Unfortunately, short of "faking" it, I don't think there is a way to not look suspicious... pretty sad.
I wonder if this will turn out to be something of an antipattern (for lack of a better term). The vocabulary of a given language is fairly limited by the human mind, and uniform enough across speakers, that an individual could conceivably make subtle alterations of their own volition and intentionally fool emergent AIs into false attribution. I mean, if an AI can alter text subtly enough to fool other AIs, given the limited number of words known to other humans (which is kind of the point of speech: what use is a massive vocabulary if nobody else knows those words?), why couldn't humans do it too?
So, here we have AIs erroneously pinning the author of heretical text X as human Y, thus triggering punishment Z. All the while, it was really written by bastard A, who is really a member of political party B: the opposition of human Y's party, the C's.
Depends on who the audience is. If you're writing a persuasive political message, you will have to have your fingerprint on it to generate the human reaction you want.
Although if we are at that point with AIs, it could also just be about gaming other AIs into interpreting the writing the way you want before it's relayed to the individual.
I find it absolutely, utterly, and completely implausible that the intelligence community just now started to research this possibility. It's been an obvious idea for decades, and the tech has obviously been roughly up to the task for decades as well. The last couple of years may have given an incremental improvement, but it is completely implausible to me that they don't already have workable solutions. It's not like the system is useless to them until it fingers exactly one person 100% of the time.
I know that someone in the intelligence community was giving a lecture at the University of Florida on using UF’s fancy new donated Nvidia cluster to do this analysis.
This sort of writing style analysis has absolutely been used to ID pseudonymous / anonymous individuals online for over a decade.
The new twist seems to be using "AI" instead of more traditional algorithms. Personally, I would expect that to make the process more error-prone, but it makes sense to try.
Beyond the meme, this seems to be a brilliant idea - just like how we have coding standards, why not make writing standards, several levels beyond what we have currently? Constrain the "syntax" of expression (the way you communicate, as opposed to the ideas you want to communicate) to counteract fingerprinting. Sure, you'd have to restrict your creativity and prose, but you'd get anonymity in return.
All natural languages have fundamental ambiguity built in. This increases "fitness" both for the languages and for those using them. Precise disambiguation requires an ungodly amount of context which, notably, we don't all share between our distinct moral communities. This is a feature, not a bug.
See (and practice) lojban if you intend to disambiguate.
Next, just try to imagine prose or poetry or, hell, even a compelling speech or nuanced opinion...
OK, so before you publish something you give it to GPT-3 with the prompt "write the following in the style of $X". Then run the resulting text through a few AIs to confirm they say it was probably written by $X.
So, either to anonymize the text or to implicate someone, it seems like we're just in an arms race now.
Then BigBrother starts getting logging info from the GPT-3 based SaaS you used to do that translation. Better to run your own AI from home. I hear Nvidia has a glut of inventory you can utilize.
This has been possible if not easy to accomplish for a very long time, so calling it "AI" seems like a bit of a stretch. I suspect most people can be fingerprinted according to a handful of less typical grammatical flourishes and reuse of certain uncommon words and turns of phrase. The only real challenge is scaling that up.
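A crude version of that fingerprinting is indeed easy to sketch. The following toy example (my own illustration, with a made-up background corpus) scores two texts by the overlap of their uncommon vocabulary — the "less typical words and turns of phrase" idea, minus all the scaling work.

```python
def rare_words(text, corpus_counts, threshold=5):
    """Words the author uses that are uncommon in the background corpus."""
    words = set(text.lower().split())
    return {w for w in words if corpus_counts.get(w, 0) < threshold}

def overlap_score(text_a, text_b, corpus_counts):
    """Jaccard overlap of the two texts' uncommon vocabularies (0.0 to 1.0).

    A high score suggests the same author may have written both texts.
    """
    a = rare_words(text_a, corpus_counts)
    b = rare_words(text_b, corpus_counts)
    if not (a | b):
        return 0.0
    return len(a & b) / len(a | b)
```

The real challenge, as noted, is scale: doing this across millions of authors needs indexing and much better statistics, but the core signal is this simple.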
Sure, but there's always room to advance the state of the art. And in the article, they mention using the adversarial AI approach, which I would say is firmly in the world of AI, and not just statistical analysis.
For years I've tried to obfuscate my own style/word choices/idiosyncrasies when writing anonymously. Not because I fear that I'm saying things my government would like to punish me for, but because I fear these tools will become accessible enough for anybody who would like to punish me to identify and harass me.
Well, the point is to keep that fingerprint isolated to anonymous accounts. I don't mind if anonymous accounts are linked. Luckily I don't write much outside HN, and even less under my real name. I am, however, not under the illusion that this practice makes me safe. It's just something I do out of nervousness, since the only other option is not to write, and I'd rather not give it up.
Yeah, you can't just change from one fingerprint to another - you either have to erase your fingerprint (such that you look identical to someone else), or generate random ones and cycle through them. The anti-web-tracking people have been wrestling with this one for a bit.
If they did so behind the scenes, it could yield some results. But how long until anonymous writers run their text through fuzzers or some other AI system to jumble the style or emulate others' styles, replace words with synonyms, change sentence structure, and so on, ending up with AI systems fighting other AI systems in futility?
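That kind of jumbling is trivial to prototype. Here is a toy sketch (the synonym table is made up for illustration; a real obfuscator would need a far richer substitution model plus syntax rewriting):

```python
import random
import re

# Made-up toy synonym table -- an illustration, not a real thesaurus.
SYNONYMS = {
    "big": ["large", "huge"],
    "said": ["stated", "remarked"],
    "use": ["employ", "utilize"],
}

def jumble_style(text, rng=None):
    """Replace known words with random synonyms to blur an author's fingerprint."""
    rng = rng or random.Random(0)

    def swap(match):
        word = match.group(0)
        choices = SYNONYMS.get(word.lower())
        return rng.choice(choices) if choices else word

    # Substitute word-by-word, leaving punctuation and unknown words intact.
    return re.sub(r"[A-Za-z']+", swap, text)
```

Of course, naive synonym swapping leaves sentence structure and function-word habits untouched, which is exactly what stylometry keys on — hence the arms race.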
What you do is have your sister re-write your posts while you re-write hers. No one will suspect you two are plotting to bring about the first Hegemon.
It does not matter what your beliefs are. If they want to hang you, they will find a reason.
[1] - https://github.com/psal/anonymouth
[2] - https://security.stackexchange.com/questions/198741/what-are...
[3] - https://www.whonix.org/wiki/Stylometry
[4] - https://github.com/topics/stylometry
List them. Which beliefs?
Edit: Not trying to "out" anybody's beliefs, I just don't think the govt cares.
It can be anything from opposing the war in Ukraine to opposing lockdowns, or mandatory masks, etc.
I have a feeling the person you're asking won't want to answer this question!
So the listing you want is pretty much anything political written by anyone
It's obvious that this isn't the cutting edge that USG has to offer. Just a random department re-inventing the wheel.
Disclaimer: Worked on CIA / FBI stuff two decades ago.
> be anonymous writer
> be fearful of AI de-anonymizing my contrarian self
> idea
> write only greentexts, like other millions
> AI can't single me out
> AI defeated
IFW
AI fall down go boom.
Internet fall down go boom
They found Facebook posts where some dude said "hiyas", did some police work, and got him.
Found an article: https://www.bbc.com/news/uk-36437856
This isn't quite the same, but validates the point you are making for sure.