All of this of course skates past the problem that PGP's UX practically guarantees that someone will eventually reply to an encrypted email in plaintext, often compromising the whole conversation. Practically everyone who has used encrypted email "at scale" has seen it happen. An intolerable, irrevocable disaster we accept only because most of us don't actually care about the cryptographic security of our emails, most of the time.
Autocrypt is a sensible approach that leverages PGP but avoids most of that ceremonial UX. Unfortunately the GnuPG project is heading in its own direction, with major clients such as Thunderbird and Protonmail following. Few clients actually implement Autocrypt.
The group that has used PGP largely seems to like the ceremony required and refuses to acknowledge that it is an impediment both to pervasive encrypted email and to usability for nontechnical users. Hell, even for technical users PGP is a hassle.
I'm doing what I can to spread the word on Autocrypt, but nontech types can't or won't switch to the few clients implementing it, and the PGP fanbase sees their 6 levels of trust as Zimmermann-given dogma that must not be altered under pressure of earthly realities, or security insights developed after the Code was written down.
>...the PGP fanbase sees their 6 levels of trust as Zimmermann dogma that must not be altered under pressure of earthly realities, or security insights developed after the Code was written down.
I am an actual OpenPGP shill who is entirely in favour of a very conservative approach to the standard but I really don't see the problem here. Autocrypt is a separate standard. Clients can implement it completely and just switch the trust to OpenPGP if a PGP identity shows up. There is nothing wrong with a client adding some extra trust information to accommodate autocrypt.
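Autocrypt's core trick is mechanical enough to sketch: the key rides along in an email header as attribute/value pairs, so a client can pick it up with no ceremony. A minimal parser, with the address and keydata below as made-up examples rather than real key material (real headers also allow folding and whitespace this sketch ignores):

```python
# Toy parser for an Autocrypt-style header: "attr=value; attr=value".
# The addr and keydata values are invented examples, not real OpenPGP data.
import base64

def parse_autocrypt(header_value):
    """Split attribute/value pairs into a dict; values may contain '='."""
    attrs = {}
    for part in header_value.split(";"):
        name, _, value = part.strip().partition("=")
        attrs[name] = value
    return attrs

dummy_keydata = base64.b64encode(b"not-a-real-openpgp-key").decode()
header = f"addr=alice@example.org; prefer-encrypt=mutual; keydata={dummy_keydata}"

attrs = parse_autocrypt(header)
print(attrs["addr"])            # alice@example.org
print(attrs["prefer-encrypt"])  # mutual
```

The point is that the receiving client learns the sender's key opportunistically, with no keyserver lookup or signing party in sight.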
I had a bug open against Evolution for something like a decade because it defaulted to plaintext in replies to S/MIME-encrypted emails. So it's far from only a PGP UX thing.
I'm still not sure they've fixed it... I haven't checked for several years now.
I think PGP has played an important part in propagating encrypted communication, and I think it's still quite useful as a tool for teaching/learning about asymmetric encryption, but there's just way too much friction for practical use for most people.
I wonder if there's some sort of educational tool/walkthrough that uses PGP under the hood but explains each step as you go...
As someone who worked as a pen-tester, security is almost ALWAYS at odds with convenience.
Making better security less and less inconvenient is the name of the game. Even if 100% security would be as easy as ticking a box, though, the fact of the matter is that most people don't care and if it's not "secure by default", it simply will stay not secure.
Maybe a synonym for "turned on by default" is "absolutely zero inconvenience or attention needed"
Honestly, this makes me appreciate Signal all the more – it seems the only barrier is getting friends to adopt it, but if used by an organization could it be an alternative to E2EE email?
In my experience, the client is stupidly buggy. I've silently lost messages multiple times. This is a known issue, and after losing several key messages I stopped using it.
That's just my opinion, but to me E2EE from WhatsApp/Signal/Telegram is just marketing. The trust model is worse than trust-on-first-use (TOFU), since the key can change repeatedly. Most people install these apps through the Play Store, so one might argue they are in a certain sense as secure as closed-source software, and the whole key exchange is hidden from the user. Not to mention that most smartphones are black boxes to a large extent, and phone-number-based apps are vulnerable to SIM hijacking (yes, we see when the key changes, but nobody cares, and it's difficult to reestablish the trust).
Probably depends on the use-case, but as a normal user that lives in a non-repressive state, there is little value in using Signal. (FWIW Signal uses central servers, so in a repressive state it can just be turned off?) GPG based security is superior and not tied to a blackbox platform...
Could you please explain to me why Signal is the superior messaging platform?
I can't understand why it requires ID/passport from you to use it. I guess if you want to use an encrypted messaging platform then you don't want it to be tied to the government issued ID.
p.s. your phone number is tied to your ID, so it's basically your ID
That's a leap, from 'some security' to 'absolutely zero inconvenience'. There's no middle ground? Don't we all use passwords now for many things? That's not zero attention needed.
Often the mentality you will find is "any hole is a hole that will be exploited", so they make these incredibly difficult systems with perfect security that nobody ever uses because they're so hard to use.
SSH is flawed in that you have to trust the host the first time you connect to it. For most people this is a reasonable tradeoff, but for security nerds it can be an impossible barrier to adoption. This is why some SSH servers spit out big long hexcodes or weird picture things, you are supposed to securely message the owner of the server to verify it before you accept it. Details are of course left up to the user because any default system you set up will of course be exploited. It's head of line security exploits all the way down.
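Those "big long hexcodes" are key fingerprints. The modern OpenSSH format is `SHA256:` followed by the unpadded base64 of the SHA-256 hash of the raw key blob; if the fingerprint the server shows matches the one the owner tells you out-of-band, you're talking to the right host. A sketch of the derivation, using dummy bytes rather than a real host key:

```python
# Compute an OpenSSH-style SHA256 fingerprint for a host key blob.
# In practice the blob is the base64-decoded field from ~/.ssh/known_hosts
# or the server's ssh_host_*.pub file; here it's dummy bytes.
import base64
import hashlib

def ssh_fingerprint(key_blob: bytes) -> str:
    digest = hashlib.sha256(key_blob).digest()
    # OpenSSH strips the trailing '=' padding from the base64 digest.
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

blob = b"dummy host key bytes"
print(ssh_fingerprint(blob))
```

The "weird picture things" (randomart) are just an alternative visualization of the same digest, on the theory that humans compare pictures more reliably than hex strings.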
Password managers have turned this into near-zero inconvenience: when I arrive at the login page my UID and PW are already filled in.
(I also have three hard tokens from banks and investments, two YubiKeys (both for LastPass, one in storage), and Google Authenticator on my iPhone for anyone that offers it... so I've taken a step backward for added security.)
My interpretation of the parent post was not that there is no middle ground, but that if a user can use a service without engaging in an extra step to increase security, they will do so, no matter how easy the extra step is. Passwords are not an extra step - they are required.
Counterpoint: I work for a company that sells software to mostly big companies. Sometimes they want to host it themselves. Creating valid certificates usable for HTTPS is a never-ending support issue.
We're developing an electronic health record platform that stores data encrypted in a way that we don't have the decryption key. We don't have end-to-end encryption, but if you got a database dump then it'd be quite tricky to decrypt what's in it as data belonging to a person can be unlocked with a secret that the person has, and we don't; only in memory when serving the user.
So we're very far from perfect, but still better than most providers in this space. However, the fact that we can't easily do database migrations, debugging, etc makes our lives extremely hard, and it's much more difficult to compete in terms of UX.
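The at-rest scheme described above can be sketched roughly like this. It's a toy illustration with all names invented: the record key is derived from a secret only the user holds, and the server stores just salt plus ciphertext, so a database dump alone decrypts nothing. The hash-based keystream stands in for a real AEAD cipher (e.g. AES-GCM), which any production system should use instead.

```python
# Toy "server can't decrypt at rest" sketch. The XOR keystream is for
# illustration only -- use a vetted AEAD cipher in real systems.
import hashlib
import os

def derive_key(user_secret: bytes, salt: bytes) -> bytes:
    # Slow KDF so the user secret can't be cheaply brute-forced.
    return hashlib.pbkdf2_hmac("sha256", user_secret, salt, 100_000)

def keystream(key: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(user_secret: bytes, plaintext: bytes):
    salt = os.urandom(16)
    key = derive_key(user_secret, salt)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))
    return salt, ct  # the only things the server ever stores

def decrypt(user_secret: bytes, salt: bytes, ct: bytes) -> bytes:
    key = derive_key(user_secret, salt)
    return bytes(a ^ b for a, b in zip(ct, keystream(key, len(ct))))

salt, ct = encrypt(b"patient passphrase", b"blood type: O-")
print(decrypt(b"patient passphrase", salt, ct))  # b'blood type: O-'
```

It also makes the operational pain above concrete: a migration script holding only `salt` and `ct` cannot inspect or transform the plaintext.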
If you measure two things, you can draw the "frontier" between them, sketching out the maximum of X you can have for a given amount of Y. This draws a graph where the lower left is the region you can reach and the upper right is the region you can't, e.g. you can't have 100% security and 100% usability.
When you are on that frontier, then the two things are in apparent opposition to each other, in the sense that you can't get more of one without having less of the other. However, if you are not on that frontier, then suddenly the conflict evaporates, because you can indeed have more of one without less of the other.
(Almost everything we ever talk about as software engineers being in "opposition" to each other is actually in this relationship. Sometimes optimality on one axis is so easy to achieve that it's still practical to discuss the two things as being in "opposition", but most of the time, before worrying about two things being in opposition, it should first be checked that we are indeed on this optimality frontier. Otherwise we risk constraining our thoughts into a win/lose frame and miss the win/win options on the table.)
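The frontier idea can be made concrete: a design sits on the Pareto frontier exactly when no other design beats or matches it on both axes. A small sketch with made-up (security, usability) scores:

```python
# Find the Pareto frontier of (security, usability) points: the designs
# not dominated by any other design on both axes. Scores are invented.
def pareto_frontier(points):
    frontier = []
    for p in points:
        dominated = any(q[0] >= p[0] and q[1] >= p[1] and q != p
                        for q in points)
        if not dominated:
            frontier.append(p)
    return frontier

designs = [(9, 1), (7, 4), (4, 8), (2, 9), (3, 3), (5, 2)]
print(pareto_frontier(designs))  # [(9, 1), (7, 4), (4, 8), (2, 9)]
```

The dominated points (3, 3) and (5, 2) are the interesting ones for this argument: from there you can improve both security and usability at once, which is the claim being made about GPG below.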
All of that is a lead up to my claim that the idea that GPG is on the Pareto frontier for usability and security doesn't pass the smell test. In fact it manages to have such a bad UI that it adversely impacts the security it can provide. It isn't just what git calls the "porcelain", either; some of the fundamental data structures GPG uses are just not quite right and produce fundamental confusion. It certainly doesn't help that the UI is so obscure that even a heavy user can be confused by everything that is going on.
GPG really needs a total UI overhaul, but I think this is one of those cases where the existence of an apparently "blessed" product (the GPG distribution itself) prevents anything better from being able to get enough of a foothold to succeed. If you waved a magic wand and made me the PM over the GPG product, I'd be putting out a call to the community to make a better UI, no holds barred (i.e., fundamental data structure changes are on the table and while I wouldn't necessarily want to promise a lack of backwards compatibility, don't be afraid to break it), and in a year we'll circle back around to the proposals and the GPG project itself will bless one of them. But that probably won't happen. (And I personally lack the bandwidth and the gpg street cred, so "why don't you do it" isn't a terribly practical response.)
If I say "Public private key cryptography" it is like music to my ears. The term is near poetic. The experience is so hard to explain I think I have to make a drawing.
> Making better security less and less inconvenient is the name of the game.
Bingo. It's really easy to overlook UX, but it's the name of the game for actually improving things. Dev Experience too. There's a perverse problem in secure development, where you need to know enough to figure out why something is going wrong, but you must not leak information.
> security is almost ALWAYS at odds with convenience.
I hate this attitude. Vehemently. It's borderline ethically irresponsible to say it out loud, because it gives muggles the notion that they can save costs by reducing security, which is often simply not true.
Security is very often the cheaper option, or at least the smoother one. Certainly from an end-users' point of view, it's more convenient.
Let me list some examples:
Windows v1909 has a bunch of hardware-enforced security turned on by default (=low admin effort), invisibly to the end user (=convenient). Older operating systems either have none of this, or it's a manual task to enable these optional features (=expensive).
A good HR automation process means that when a new employee starts, they automatically gain access to whatever they need, and nothing they don't. This is great for the user, because "everything just works" and they have everything they need available (=convenient), but it's secure because there is no error-prone manual processes to grant them access to things (=cheaper).
Windows 10 Hello for Business is essentially a virtual smart card stored in the TPM chip of your laptop, secured with a PIN or biometrics as a second factor. It uses modern cryptography and works transparently with Kerberos. It's as strong as a Smart Card but is more automatic than a password (=convenient). It requires no hardware to deploy, and essentially eliminates a bunch of attack vectors (=cost neutral or even cheaper).
Simply enabling WSUS or leaving Windows in automatic patch mode is literally the cheapest and most secure option, yet many organisations insist on spending tons of time, effort, and money on "managing" their patches. This inevitably results in totally unnecessary paper pushing, and no measurable benefit. Meanwhile, Microsoft releases protocol-breaking changes (such as the CredSSP thing recently) in waves, with the "enabling" patch one month, and then the "enforcing" patch the next month. Smooth as silk. Except for every. Single. One. Of my customers that insisted on "managing" their patches in quarterly rollouts or whatever that had a major outage because they skipped patches.
You get the idea. There is a "happy" path of less effort, more convenience, yet very good security. Not perfect, but good enough.
Conversely, this attitude of "we must reduce security to reduce costs or improve convenience" is just insane. I've seen people go out of their way to purposefully weaken the default security of a system because of this logic.
So please. Even if you know better, just never, ever say anything like this out loud. The world is full of Muggles, and they hear this shit and do random stupid stuff that lets the Chinese government steal our hard work.
It has very little to do with the UX of email encryption, IMO. People don't use encrypted email because the parties who would have to do the work in order to use it have never, and do not know anyone who has ever, suffered a problem that encrypted email would address.
Web sites use encryption because the people who would need to do the work of setting it up know that search rankings, user experience thanks to browser reporting of unencrypted connections, and consumer confidence all suffer if they don't.
And it improves their security in a couple ways too.
For encrypted email, the onus is on the recipient of email to set it up. Such a vanishingly small number of recipients have ever suffered a problem that encryption would solve, it might as well be zero.
We can talk about making the UX better. But unless it's effort-free and happens by default, there's no motivation.
End-to-end encryption is still a very challenging problem almost everywhere except for direct connection server-side TLS. Yet everyone has a problem that encryption solves, which is that you actually cannot tell if some entity has been interfering with your email, perhaps re-writing certain paragraphs. There is plenty of evidence that interfering with email is in fact something that happens. Similarly, in the age before widespread use of TLS, cases of ISPs injecting content/javascript/trackers into web traffic were also widespread.
> End-to-end encryption is still a very challenging problem almost everywhere except for direct connection server-side TLS.
Is this the case with WhatsApp and Signal? Their implementations have been working great. Why can't email work the same way?
> There is plenty of evidence that actually interfering with email is in fact something that happens.
Please cite examples. I know Avast antivirus used to obnoxiously tamper with the emails sent by users in order to spam other people with its own advertising.
Encryption is not signing, and relying on it to tell if someone has interfered with the content of a document has a track record of disaster.
You need to rely on signing for that.
Even if you're using an encryption scheme that claims to preserve integrity.
Signing is easier, because only the sender needs to set it up for it to be effective.
At any rate, I think I see your point, and that's why I chose the word "suffered." People have suffered from not using server side TLS. Practically no one knows of any consequence they've suffered for sticking to plaintext email.
Alas, in some places communications security and anonymity is a ‘black swan’ problem in that it's barely noticeable most of the time, but someday it might irreparably mess up one's life. So of course people are split into those who may occasionally think that it'd be nice to have encrypted messaging, and those who heard of the consequences and now worry about them like it's plague.
Are these sort of leaks usually from intercepting email? or some other method like, from a trusted participant in the conversation leaking them, a backup being found, the machine itself being compromised?
That is sadly related to the fact that email as a platform is pretty much dead outside small niches. Very few applications are built on top of email. Of course that is also a chicken-and-egg problem; without a good security story it's a bit difficult to build anything on it these days.
There is a key area where mail is central: the "I lost my password" feature of websites. Apple ID, Google+ (or whatever they call it now) and Facebook login replace it to some degree (especially on mobile device apps), but for the vast majority of websites, newly created passwords or activation links are sent unprotected via regular email.
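That reset flow is worth sketching, since it shows why plain email ends up as everyone's de facto root of trust: the site mails a random token and stores only its hash plus an expiry, so whoever can read the email owns the account. All names here are illustrative, not any real framework's API.

```python
# Sketch of an emailed password-reset token. The site stores only the
# token's hash and an expiry; the token itself travels in plain email.
import hashlib
import secrets
import time

RESET_TTL = 3600  # token valid for one hour

def issue_token():
    token = secrets.token_urlsafe(32)  # goes into the emailed link
    record = (hashlib.sha256(token.encode()).hexdigest(),
              time.time() + RESET_TTL)  # what the database stores
    return token, record

def redeem(token, record):
    stored_hash, expires = record
    return (time.time() < expires and
            hashlib.sha256(token.encode()).hexdigest() == stored_hash)

token, record = issue_token()
print(redeem(token, record))      # True
print(redeem("guessed", record))  # False
```

The crypto here is fine; the weak link is the unencrypted hop the token takes through the mail system, which is exactly the parent's point.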
An emerging HN trope is that “almost no one uses PGP.”
PGP remains wildly popular on Tor cryptomarkets, an area where users assume server compromise will happen yet still decide to transact using encrypted messages.
Don’t underestimate a technology gaining popularity in fringe communities with young user bases, often from non-technical backgrounds. Kudos to companies like Keybase and Protonmail for investing in PGP’s future.
I think a lot of people think about encryption technologies in terms of global adoption. HTTPS, encrypted filesystems, and E2E encrypted messaging have all reached a significant percentage of global Internet users.
But in that context, 50k PGP users is arguably "almost no one." If we guesstimate 4 billion Internet users, that's 0.001% of all users.
I think the point is not that PGP is impossible to use, I think the point is that PGP is not appropriate to scale to the global Internet the way other encryption technologies have.
It's not true that something isn't valuable just because it hasn't scaled.
PGP has an outdated and confusing UX, a broken threat model, and is extremely bad in many respects. But it does have users, and for some people and some tasks, it works better than alternatives. (My use cases: exchanging routine work email with other nerds who have it set up, and sharing confidential documents.)
So, while we shouldn't minimize its very major problems, we shouldn't pretend that its user base does not exist and that it's not addressing at least some use cases.
Yeah, it's tedious if you do it the long way around because you, for some reason, have arbitrarily restricted yourself to only doing it via command line...
Just get an email client that has PGP capability built in.
> Just get an email client that has PGP capability built in.
That step isn't as straightforward as you might think.
Put yourself in the shoes of a complete novice. As of this writing, a Google search for {pgp email windows} returns gpg4win as the top search result, which is the tool the author used.
Otherwise, the software listed on the openpgp website (https://www.openpgp.org/software/) mostly doesn't have PGP capability "built in." In particular, both Evolution and Outlook require addons for PGP support.
Despite its bad UX, the instructions the author followed are largely those present in a top Google-ranked guide (https://yanhan.github.io/posts/2017-09-27-how-to-use-gpg-to-... -- infobox result for {gpg encrypted email}). Easier methods that are harder to find or less obvious may as well not exist for a novice such as the author of this article.
The whole article is basically "lol look at this thing that is actually relatively well supported with a small amount of work, but I couldn't be bothered to spend time doing some research about my use case and instead just read the first google result I found, so I'm going to crap all over it and make it look worse than it actually is for Internet Karma."
And as for the claim of being a novice: the rest of the blog implies the user has some level of competence with computing technologies, as they are messing around with webserver configs etc.
I really enjoy articles like these because it offers a perspective that is difficult for developers of software to see themselves. Say you're starting a company that provides a technical service and you claim on your homepage "3-click install!" Rarely it's ever just 3 clicks. It's a good idea to watch videos or read written stories of every step a user takes in order to use your service, including learning how to use it and their mistakes.
> Each public key must be signed before messages from the owner of that public key can be decrypted.
Nope. Signing a key certifies its owner's identity; it is not a prerequisite for encrypting or decrypting anything.
> Your public key must be sent to anyone to whom you send an encrypted email.
Nope, you need theirs.
I get it, in the current year, it's cool to bash gpg because it's sooooo complicated. There may even be some merit to that argument, but it's pretty lame if the argument is based on not understanding the very basics of public key cryptography.
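For the record, the direction of key use is easy to demonstrate with textbook RSA and tiny primes — hopelessly insecure, purely to show that the sender uses the recipient's public key and only the recipient's private key undoes it:

```python
# Textbook-RSA toy with tiny primes. Insecure by construction; it exists
# only to show which key encrypts and which key decrypts.
p, q = 61, 53
n = p * q                           # 3233, part of both keys
e = 17                              # recipient's PUBLIC exponent
d = pow(e, -1, (p - 1) * (q - 1))   # recipient's PRIVATE exponent

m = 65                  # the message, as a number < n
c = pow(m, e, n)        # anyone holding the public key can do this
print(pow(c, d, n))     # 65 -- only the private key recovers m
```

So to send someone encrypted mail you need *their* public key, and they need nothing of yours (until they want to verify your signature or reply).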
PGP seemed almost like magic when I first used it, but I look forward to it being relegated to the history books.
So, IMO the barrier is, "actually working"
https://github.com/signalapp/Signal-Android/issues/5253
https://github.com/signalapp/Signal-Android/search?q=missing...
Still requires a passphrase or initializing a keypair on the client & server.
> HTTPS
HTTPS requires user interaction when a site forgets to renew a certificate.
Also SSH and HTTPS don't compare well. One does client authentication, the other doesn't.
https://i.ibb.co/DgQ82kt/gnuPG.png
Consider the magnitude of inconvenience from having your identity stolen. I think it dwarfs any minor inconveniences needed to secure your identity.
That said, no doubt human-computer interaction research should study security a little more, I think.
Perhaps from the POV of The Valley, but I can assure you that email is alive and well elsewhere.
https://protonmail.com/support/knowledge-base/how-to-use-pgp...
Encrypted email is broken, and a lot of that has to do with UI/UX problems; but the author is making this way harder than it has to be.
If PGP is criticized because a user could cc someone, how about taking a screenshot from your messenger conversation?
People do this reflexively and upload almost anything.
Why would I trust a phone (tracking device) in the first place?
But I have no trouble finding ones that I trust who agree with the key parts of that rant. For example https://www.schneier.com/blog/archives/2016/12/giving_up_on_..., https://blog.cryptographyengineering.com/2014/08/13/whats-ma..., https://dev.to/artis3n/encrypting-files-in-a-post-pgp-age-59... and many, many more.
Whether or not you can trust your phone doesn't change that PGP is missing nearly 30 years of what we've learned about best practices for encryption.
> Encrypting Email
>
> Don’t.
... which is obviously absurd. I think you might have been taken in by a troll attempt...