dang · 3 years ago
Related:

The Six Dumbest Ideas in Computer Security (2005) - https://news.ycombinator.com/item?id=28068725 - Aug 2021 (21 comments)

The Six Dumbest Ideas in Computer Security (2005) - https://news.ycombinator.com/item?id=14369342 - May 2017 (6 comments)

The Six Dumbest Ideas in Computer Security - https://news.ycombinator.com/item?id=12483067 - Sept 2016 (11 comments)

The Six Dumbest Ideas in Computer Security - https://news.ycombinator.com/item?id=522900 - March 2009 (20 comments)

The Six Dumbest Ideas in Computer Security - https://news.ycombinator.com/item?id=167850 - April 2008 (1 comment)

The Six Dumbest Ideas in Computer Security (2005) - https://news.ycombinator.com/item?id=35811 - July 2007 (2 comments)

efitz · 3 years ago
Yep, it’s worth repeating.

Although I’d spin the issue about host vs network security differently. I’ve found that engineering teams prioritize security a lot more if they don’t feel like they’re safe in a cocoon of local network bliss behind network firewalls. I love “BeyondCorp” or “zero trust” precisely because you’re making it explicit that they’re on the internet and they’re a target.
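
The contrast is easy to sketch. A toy Python illustration (every name and value here is invented, not from any real zero-trust product): perimeter security trusts anything already "inside", while zero trust authenticates every request no matter where it comes from.

    # Perimeter model: location implies trust.
    TRUSTED_SUBNET = "10.0."

    def perimeter_allows(request):
        # Anyone who made it onto the local network is waved through.
        return request["src_ip"].startswith(TRUSTED_SUBNET)

    # Zero-trust model: every request must prove who it is.
    def zero_trust_allows(request, verify_token):
        # verify_token stands in for real credential validation
        # (mTLS, signed tokens, device attestation, ...).
        return verify_token(request.get("token"))

    print(perimeter_allows({"src_ip": "10.0.3.7"}))  # True: location == trust
    print(zero_trust_allows({"src_ip": "10.0.3.7", "token": None},
                            lambda t: t is not None))  # False: no credential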

arp242 · 3 years ago
> Yep, it’s worth repeating.

I don't know; I haven't really seen most of these things in the wild for a long time.

For "#4) Hacking is Cool" the zeitgeist has moved in the exact opposite direction with "white hat", bug bounties, etc. I think that section in particular is a pretty outdated view of things.

"#6) Action is Better Than Inaction" is probably the only one that still broadly applies today, and is actually a special case of "X exists, therefore, therefore we must use it ASAP, and any possible negativities are not our problem and inevitable anyway" attitude that seems the be prevalent among a certain types of people.


moffkalast · 3 years ago
There's something ironic about there being exactly six past posts about it.
dexterdog · 3 years ago
Except there are at least 12 previous posts of this article. It was only posted once the year it came out, but it had a real renaissance about 6 years ago for some reason.
tptacek · 3 years ago
A reminder that a big part of the subtext of this piece is a reactionary movement against vulnerability research that Ranum was at the vanguard of. Along with Schneier, Ranum spent a lot of energy railing against people who found and exploited vulnerabilities (as you can see from items #2, #3, and #4). It hasn't aged well.

I'm not sure there's anything true on this list that is, in 2023, interesting; maybe you could argue they were in 2005.

The irony is, Ranum went on to work at Tenable, which is itself a firm that violates most of these tenets.

diamondo25 · 3 years ago
I've read about 80% of this page, and eventually stopped at the part where he says that the next generation will be more cautious. This, in my opinion, is false. Most software has been simplified for user experience, and that hasn't helped kids in the slightest. It's more addictive than ever, and all caution gets thrown out of the window when we let kids browse YouTube unsupervised. Heck, a wrong search query or random text can give you NSFW content. And with the rise of shorts/stories/TikToks you'll be molded by the algorithms. You have little or no control over the content you see. If it notices you watch, what, 5 seconds of a clip, it'll start recommending more of it.

The issues we have nowadays are different from those in 2005. People who haven't seen the bad parts of the internet will not teach their kids about it either...

eduction · 3 years ago
> It hasn't aged well.

You’re wrong, it’s aged quite well.

Just take as one important set of examples the new mobile operating systems released since this piece was published. Even the most thoughtfully designed and locked down (even with hardware support, various uses of encryption, etc.) continue to have vulnerabilities at the base layer year after year. Bug hunting looks every year more and more like just an expensive sport for condescending security experts who think little about the broader context in which they operate. As much as we all appreciate the whack-a-mole.

Where there has been genuine security improvement is where we’ve taken the structural, locked-down approach advocated here (see also djb’s paper about qmail security). iOS and Android apps (particularly the former) seem genuinely more secure than most desktop apps because they are structured to have very limited permissions from day one. The app environments on those systems look like they were designed with many of the principles from this post expressly in mind.

The lessons for the OS layer seem obvious. Qubes and in particular Joanna’s post about “Qubes Air” point in one very promising direction.

UncleMeat · 3 years ago
Offensive research is what motivates lessons for the OS layer. Look at the struggles we are having with things like kernel-level memory safety even when we can point at mountains of CVEs found by white hats. The community would be dragging its feet even more if the shared consensus was that actually it is really hard to beat ASLR and DEP so we are all done and have solved it.
tptacek · 3 years ago
One of the major points of the qmail paper is that the structural locked down approach wasn't successful.

(I disagree with the paper in this regard, but it's a weird thing to hang your argument against vulnerability research on).

Georgi Guninski would have a thing or two to say about the applicability of vulnerability research to djb software.

wildmanx · 3 years ago
> > It hasn't aged well.

> You’re wrong, it’s aged quite well.

Part of the problem is that there are many people in the field of security with overly strong opinions. This is not healthy. The field is full of know-it-all people with if-only-people-were-not-as-dumb attitudes. This is not helping anybody. Any not-as-strongly-opinionated bystander looks at this and has no clue whom to listen to, since so many people are strongly expressing 100% opposing views, each calling everybody else "dumb". This is not how you bring the field as a whole forward.

rfoo · 3 years ago
> (see also djb’s paper about qmail security)

You mean the guy who refused to fix an integer overflow bug, claiming it wasn't practical to exploit, then 64-bit really happened, and years later the fine folks at Qualys suddenly decided to have fun? [1] Sure, he is a crypto expert and we're all grateful for his work on curve25519, salsa/chacha, NaCl, djbsort, etc. (and I'm sure I missed a lot). This does not mean he is an expert on weird machines.

[1] https://www.qualys.com/2020/05/19/cve-2005-1513/remote-code-...

autokad · 3 years ago
I agree. They go off on a useless rant about how pen testing is useless, red team research only enables hackers, etc. That's not true at all. That work is what pushes the improvements in both detection and better programming practices.

Educating users is not dumb; it's one of the most important parts of security a company should address. I really don't know where they are coming from here; this section was nonsense to me.

I also have a point that will get me downvoted and piss off a lot of people: security is very important, but not THAT important. If the business doesn't operate, then there's no need for security. So what's the solution? The author comes off as one of those who treat security like a wheelbarrow full of bricks that everyone has to push around. This won't get buy-in, and people will find ways around it. Instead, security should be like tennis shoes: restrictive, but they also let you run faster.

mavhc · 3 years ago
It's dumb to create a system that will fail because of dumb users. Why did you invent a system that requires everyone using it not to be an idiot? Have you never met humans?
pvg · 3 years ago
What was the argument against vuln research? The 'Penetrate & Patch' bit makes it sound like something along the lines of 'this is pointless because the proper way to fix this stuff is better design, and other things are a waste of time and effort'.
castillar76 · 3 years ago
This predated a lot of the responsible-disclosure culture that exists now, so there was a lot of “find vuln, post right away for the credits” going on. Couple that with a lot of tool research that was important but also felt very grey-hat, and it was easy to feel like much of the “vulnerability research” community were like a group of scientists working on making cancer airborne “for research purposes”. I admit to having felt that way then, too.

Fortunately a lot of that has subsided. The focus on responsible disclosure while still holding companies accountable, the great security research being done by projects like Talos or Project Zero, and the consistent flow of new open-source blue-team tooling has really helped balance the scales (if they were ever unbalanced).

tptacek · 3 years ago
This is a whole can of worms, and my response will be biased and untrustworthy, but here's my take:

In the early-to-mid 1990s, serious security research was intensely cliquish. There wasn't a norm of published vulnerability research; in fact, there was the opposite norm: CERT, the well-known public resource, diligently stripped details about vulnerabilities (beyond where to find the patches) out of announcements, and discussions of how vulnerabilities actually worked were relegated to "secret" lists like Core, which were of course ultimately leaked and became BBS t-files.

Ranum came to prominence in that era. In the mid-to-late 1990s, after Bugtraq took over, there remained a sort of informal best friends club of, like, Ranum and Dan Farmer and Wietse Venema and like one or two young vulnerability researchers --- Elias "Aleph One" Levy, for instance. There was a sort of acceptance of the idea that Elias and Mudge were doing vulnerability research that was well-intentioned and OK... but that everyone else was just trading exploits on #hack.

There was a sort of focused beam of hatred on eEye, a security vendor that came to prominence in the early 2000s, and most especially during the "Summer of Worms", some of which worms were based on vulnerabilities that eEye's team --- at the time truly one of the most influential teams in all of vulnerability research --- had published. I worked at the industry's first commercial vulnerability research team and had a soft spot for eEye, which was doing the work we did but like several levels better than us, and it has always pissed me off how Ranum and Schneier tried to make hay by dunking on eEye and casting them as "hackers".

(Of course, if you tried to make that argument now you'd sound like a clown, so you won't see people like Ranum and Schneier saying that kind of stuff. But the fact is, the arguments were clownish and inappropriate back then, too.)

So, if you ask me, the argument Ranum is advancing is literally that public vulnerability research is bad, and that details of vulnerabilities should be kept between vendors and a few anointed 3rd party researchers. Because otherwise, you were just helping people break into computers.

The Ranum of 2005 is I think especially characteristic of what I'd call "moralizing" information security: that security is actually a fight between good and evil, that what's important about machines getting owned up is that somebody's livelihood depends on that machine running, that hacking is a crime, and that the details of how the hack worked are about as relevant as the details of how a burglar breaks into the window of a house they're burgling without setting off the alarms or whatever. I get it, but I'm from the opposing school of information security, which is that security is just a super interesting computer science problem.

mytailorisrich · 3 years ago
> Ranum spent a lot of energy railing against people who found and exploited vulnerabilities

That's not at all what #2 says. "Enumerating badness" is explained as trying to track everything that's 'bad' instead of what's not. It is claimed to be 'dumb' because the set of what's 'bad' is orders of magnitude larger and more complex than the set of what's not.
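
The asymmetry is easy to see in code. A minimal Python sketch (the names are invented for illustration):

    # Enumerating badness: the blocklist must name every bad thing that
    # exists or ever will; anything it has never seen is permitted.
    BLOCKLIST = {"evil.exe", "worm.scr"}

    def blocklist_allows(name):
        return name not in BLOCKLIST   # brand-new malware sails through

    # Default deny: the allowlist names the handful of known-good things;
    # everything else, including tomorrow's malware, is rejected.
    ALLOWLIST = {"winword.exe", "outlook.exe", "firefox.exe"}

    def allowlist_allows(name):
        return name in ALLOWLIST       # unknown means denied

    print(blocklist_allows("brand-new-malware.exe"))  # True: fails open
    print(allowlist_allows("brand-new-malware.exe"))  # False: fails closed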

tptacek · 3 years ago
It's not what #2 says, it's just why he was saying it.
nine_k · 3 years ago
I suppose the idea of denying by default (#1, #2) and the idea of defense in depth (mentioned at the end) aged well enough.

I'm not sure about educating users. It's obviously not going to be a bulletproof solution. But not educating users at all does not seem right either: it's hard for a person to care about stuff they have no idea about.

SAI_Peregrinus · 3 years ago
The better way is to make the secure path the easy path. You don't have to educate users to do something that makes their life harder; you can show them an easier way to do what they want. That's far more likely to stick.

Usability is a security issue; at the ultimate extreme a DoS attack is just creating a very poor user experience.
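
A toy Python sketch of that principle; the function and parameter names are invented, but the pattern (secure by default, loud and deliberate opt-out) is the point:

    import ssl

    def make_context(allow_insecure_i_know_the_risks=False):
        ctx = ssl.create_default_context()  # verified TLS with zero effort
        if allow_insecure_i_know_the_risks:
            # The unsafe path still exists, but it is deliberate, ugly to
            # type, and easy to flag in code review.
            ctx.check_hostname = False
            ctx.verify_mode = ssl.CERT_NONE
        return ctx

    ctx = make_context()  # the lazy call is also the safe call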

rfoo · 3 years ago
Thanks for the context! I was too young to know these backstories.
bluedino · 3 years ago
This describes the security industry as a whole.

We had a user click an email and get phished.

We tried training the users with tools like KnowBe4, banners above the emails that say things like THIS IS AN OUTSIDE EMAIL BE VERY CAREFUL WHEN CLICKING LINKS. Didn't help.

The email was a very generic looking "Kindly view the attached invoice"

The attached invoice was a PDF file

The link went to some suspicious looking domain

The page the link brought up was a shoddy impersonation of a OneDrive login

In just minutes, the user's machine was infected and it emailed itself to all of their Outlook contacts...

So this means nothing in this list detected a goddamn thing:

    Next-generation firewall
    AI-powered security
    'MACHINE LEARNING'
    'Prevent lateral spread'
    enterprise defense suite with threat protection and threat detection capabilities designed to identify and stop attacks
    AV software that was advertised to 'Flag malicious phishing emails and scam websites'
    'Defend against ransomware and other online dangers'
    'Block dangerous websites that can steal personal data'
    the cloud-based filtering service that protects your organization against spam, malware, and other email threats
And the company that we pay a huge sum of money to 'delivers threat detection, incident response, and compliance management in one unified platform' didn't make a peep.

But, we are up to the standards of quite a few acronyms.

It's all a useless shitshow. And plenty of productivity-hurting false positives happen all the time.

causi · 3 years ago
Have you tried threats and public humiliation?

"ATTN ALL employees: Dave Smith ignored security training and was phished into installing malware. He is now fired because he was an idiot."

dexterdog · 3 years ago
I think there are a number of departments that will help you join Dave in his new-found freedom from employment if you send that.
bigbillheck · 3 years ago
> Have you tried threats and public humiliation?

Looks like we've found a seventh.

Spooky23 · 3 years ago
Several years ago, I worked on the response to an incident that was detected and stopped.

Tl;dr: a targeted phishing email was the catalyst for the whole thing. The various systems that detect these things effectively blocked it ~97/100 times. One click was all it took. The user who clicked had a bad feeling and used a blame-free, convenient reporting mechanism to report it.

That doesn’t mean that tools and training are useless. As a defender in any context, defense has to be multilayered and flexible as circumstances change. In IT, sports or warfare, it’s the same process or funnel.

The scenario you described likely would have been detected by an EDR tool, or by log analysis if there was a process to do that. Declaring “shitshow” is accepting a bad outcome. Unfortunately as the value of compromising a company has gone up, the opponents have leveled up, and defenders need to as well.

namaria · 3 years ago
"The spot where we intend to fight must not be made known; for then the enemy will have to prepare against a possible attack at several different points; (...) If he sends reinforcements everywhere, he will everywhere be weak."

Sun Tzu, Art of War. I know, it's cheesy to compare network security with warfare. But I've learned that a big shiny stack of tools is a red flag. If there is no threat model and no focused hardening, you're not doing security, you're doing compliance.

ufmace · 3 years ago
I wonder how well we all think this article has aged?

"Penetrate and Patch" is supposedly dumb. But what do we practically do with that? We've seen in the last decade or so a lot of long-lived software everyone thought was secure get caught with massive security bugs. Well, once some software you depend on has infact been found to have a bug, what's there to do but patch it? If some software has never had a bug found in it, does that actually mean that it's secure, or just that no skilled hackers have ever really looked hard at it?

Also web browsers face a constant stream of security issues. But so what? What are we supposed to do instead? Any simpler version doesn't have the features we demand, so you're stuck in a boring corner of the world.

"Default Permit" - nice idea in most cases. I've never heard of a computer that's actually capable of only letting your most commonly used apps run though. It's not very clear how you'd do that, and ensure none of them were ever tampered with, or be able to do development involving frequently producing new binaries, or figure out how to make sure no malicious code ever took advantage of whatever mechanism you want to use to make app development not terrible. And everyone already gripes about how locked-down iOS devices are, wouldn't this mean making everything at least that locked down or more?

tptacek · 3 years ago
1. Default deny is one of the oldest best practices in security engineering; it barely needed saying in 1995 (but Cheswick & Bellovin said exactly that in Firewalls & Internet Security).

2. "Enumerating badness" is simultaneously an attempt to connect vulnerability research to antivirus (security practitioners have had contempt, mostly justified, for AV since the late 1980s) and an endorsement of the heuristic detection scheme companies like NFR sold. Apart from the shade it throws at vulnerability research, it's fine.

3. "Penetrate and patch" has aged so poorly that Ranum's own career refutes it; he ended up Chief of Security at Tenable, one of the industry's great popularizers of the idea.

4. "Hacking is cool": literally, this is "get off my lawn".

5. Objecting to user education is an idea that is coming back into vogue, especially with authentication and phishing. It's the idea that has held up best here.

6. "Action is better than inaction" --- this is just a restatement of "something must be done, this is something", or the Underpants Gnome thesis. Is it true? Sure, I guess.

As a whole, this piece has not aged well at all.

jonstewart · 3 years ago
Agree with tptacek. Ranum’s rant in Enumerating Badness is deeply intertwined with his rant against Default Permit (which, as tptacek points out, was a bit of a strawman even then). I could agree that checking against all possible bad things is a flawed approach for security products and IT staff.

However, enumerating badness is hugely valuable in the security industry for two reasons:

1) It’s the backbone of security research, just as physiology and anatomy are to zoology and medicine. With enumeration (observation), we can classify, abstract, find trends, identify risky software and approaches, direct engineering resources, and create broad defenses (yay ASLR).

2) Attackers are lazy, too. I work at a security consulting firm, and routinely see attackers reuse the same TTPs across different target companies. Enumerating badness not only offers detection opportunities (perhaps not the best, but higher-level detection techniques are often built on an understanding of the enumerated badness) but also denies attackers the opportunity for reuse. “Impose cost,” as thoughtlords like to say.
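
A minimal Python sketch of that detection side: sweep event logs for indicators enumerated from earlier intrusions (all indicator values below are invented):

    # Indicators of compromise harvested from previous incidents.
    KNOWN_BAD_DOMAINS = {"invoice-onedrive-login.example"}
    KNOWN_BAD_SHA256 = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def scan_events(events):
        hits = []
        for event in events:
            if event.get("domain") in KNOWN_BAD_DOMAINS:
                hits.append(("bad domain", event))
            if event.get("sha256") in KNOWN_BAD_SHA256:
                hits.append(("bad file hash", event))
        return hits

    # A single hit burns infrastructure the attacker reused across targets:
    # "impose cost" in code form.
    print(scan_events([{"domain": "invoice-onedrive-login.example"}]))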

saagarjha · 3 years ago
> Objecting to user education is an idea that is coming back into vogue, especially with authentication and phishing. It's the idea that has held up best here.

This can be good or bad depending on how you do it. If you default to the good thing and there really isn’t any need to do the bad thing, then it can be quite good. If there is a genuine need for some people to sometimes do the bad thing (so it’s not actually universally “bad”), pretending that nobody can ever make an informed decision here is not a good policy. Sure, it’s very difficult to get people to make informed choices, but you can’t really brush this off as people being uneducable.

acdha · 3 years ago
> 1. Default deny is one of the oldest best practices in security engineering; it barely needed saying in 1995 (but Cheswick & Bellovin said exactly that in Firewalls & Internet Security).

I agree with this as the theoretical grounding, and certainly wouldn't say that this was especially insightful, but I can understand why he said it if he was encountering a long tail of obsolete advice similar to what several places I worked were hearing at the time. I think one of the big problems was simply inertia: I remember the Cisco guys at one job saying they didn't want to do a default-deny policy because it was too much work to switch and they weren't sure the hardware could handle that many rules.

u801e · 3 years ago
> Objecting to user education is an idea that is coming back into vogue, especially with authentication and phishing. It's the idea that has held up best here.

The notion that users can't really be educated has led to a lot of questionable security practices that prioritize ease of use over real security. For example, 2FA using codes sent by email or SMS as the second factor, rather than relying on key-based authentication like client-side TLS certificates issued by the service the client is using.

This, to some extent, has actually decreased security by allowing people to bypass authentication by compromising the second factor through social engineering.
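
The key-based alternative is easy to express with Python's standard ssl module. A minimal sketch of a server that only accepts clients presenting a certificate issued by the service's own CA (the .pem file names are placeholders):

    import socket
    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("server-cert.pem", "server-key.pem")
    ctx.load_verify_locations("service-issuing-ca.pem")  # the service's own CA
    ctx.verify_mode = ssl.CERT_REQUIRED  # no valid client cert, no session

    with socket.create_server(("0.0.0.0", 8443)) as server:
        with ctx.wrap_socket(server, server_side=True) as tls_server:
            conn, addr = tls_server.accept()  # handshake enforces the cert
            print("authenticated client:", conn.getpeercert()["subject"])

There is no code for a victim to read out over the phone: the private key never leaves the client, so the social-engineering bypass described above has nothing to grab.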

prof-dr-ir · 3 years ago
> 4. "Hacking is cool": literally, this is "get off my lawn".

I do not quite agree with this verdict. Back in the early 2000s there were lots of people who thought it rather cool to just hack around (or, more precisely, crack around), for example by exploiting SQL injection bugs in whatever web form they encountered.

Of course I might not have a representative impression of the community, but I think this kind of stuff is now much more widely seen as unethical and uncool. So I think the article's prediction that this will "be a dead idea in the next 10 years" was actually quite accurate.

randomcarbloke · 3 years ago
4/6 and it hasn't aged well?

1 might have been ubiquitous even in 2005, but it's 2022 and this was the page that was shared, so obviously it was worth Ranum stating it...

acdha · 3 years ago
> Also web browsers face a constant stream of security issues. But so what? What are we supposed to do instead? Any simpler version doesn't have the features we demand, so you're stuck in a boring corner of the world.

The charitable interpretation of the “penetrate and patch” section is the architectural part, and browsers are a great example. At the time he wrote that, a browser was a single process running everything in traditional C/C++, calling other unsafe code (e.g. Flash) in the same process. People did patch a lot, but they also did things like split components into separate processes with different privilege levels, change practices throughout the codebase to harden things like pointers or how memory is allocated, rewrite portions in memory-safe languages, etc. It took a decade, but browsers became a lot harder to successfully exploit.

josephg · 3 years ago
I think this article has aged very well.

> Also web browsers face a constant stream of security issues. What are we supposed to do instead?

There's not much that users can do, but web browsers have spent the last decade moving away from "Penetrate and Patch" to much more proactive approaches. E.g., Chrome pioneered moving each tab to a separate process with full sandbox isolation. Firefox is talking about using WebAssembly as an intermediate compilation step for 3rd-party C++ code, to effectively sandbox it at the compilation level. Rust was invented at Mozilla in large part because they wanted to solve memory corruption bugs in the browser in a systematic way.

> "Default Permit" - nice idea in most cases. I've never heard of a computer that's actually capable of only letting your most commonly used apps run though.

macOS requires user consent for apps to access shared parts of the filesystem. The first time you see dialogues asking "Do you allow this app to open files in your Documents folder?" it's sort of annoying, but it's a fantastic idea.

As you say, my iPhone is more secure than Linux for the same reason: iOS has a "default deny" attitude toward app permissions. A single malicious app (or a single malicious npm package) on Linux can cryptolocker all my data without me knowing. The security model of iOS / Android doesn't allow that, and that's a good thing.

I wish iOS were more open, but on the flip side I think Linux could do a lot more to protect users from malicious code. There's plenty of middle ground here that we aren't even exploring. Linux's permission model can be changed. We have all the code; we just need to do the work.

Also, since this article was written, we've seen a massive number of data breaches because MongoDB databases were accidentally exposed on the open internet. In retrospect, having a "default permit" policy for MongoDB was a terrible idea.
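
The MongoDB fiasco reduces to one line of listener configuration: older versions accepted connections from any address by default (MongoDB later made localhost the default). A toy Python illustration of the two choices:

    import socket

    def listen(bind_addr, port=27017):
        return socket.create_server((bind_addr, port))

    # Default permit: reachable from the whole internet unless a firewall
    # happens to intervene.
    #   exposed = listen("0.0.0.0")

    # Default deny: reachable only from the machine itself; remote access
    # has to be a deliberate decision (auth, tunnel, firewall rule).
    local_only = listen("127.0.0.1")
    print(local_only.getsockname())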

jjav · 3 years ago
> my iPhone is more secure than linux for the same reason

The phrase "more secure" doesn't mean anything, I wish it wasn't used.

One needs to talk about the threat model(s) you care about and how a particular solution addresses them (or not). Any given solution can be both more secure and less secure than an alternative, depending on what threat models you care about. Which may well be different than the threat models someone else cares about.

If you unconditionally trust Apple and all government agencies which have power over Apple (e.g. NSLs) then one could say iOS has a more secure file access model than Linux. But that's a big if. Personally I could never trust a closed source proprietary solution.

scotty79 · 3 years ago
> Chrome pioneered moving each tab to a separate process with full sandbox isolation.

I don't think it was done for security. Before Chrome, when all tabs ran in a single process, it was common for a bad site to stall your whole browser. Separating tabs into individual processes was basically an admission that, yes, the web browser sucks, so the best we can do is give you the ability to kill a part of it when it misbehaves.

ufmace · 3 years ago
The consumer OS lockdown side does have a lot of interesting points. One I also thought of: I'd bet that, even if we set web browsers aside, basically every user's "30 most used apps" includes at least one that has a plugin system that loads unverified code, or runs macros in some kind of interpreter that is (or may later be proved to be) exploitable, or parses structured data from untrusted files using non-memory-safe code, or some other thing I haven't thought of.
saagarjha · 3 years ago
> The first time you see dialogues asking "Do you allow this app to open files in your Documents folder" its sort of annoying, but its a fantastic idea.

It’s not a great idea because it’s annoying. It is not really useful in its current incarnation to most people.

munchbunny · 3 years ago
The points didn’t age well, but there’s a kernel of truth in there: none of those things will ever work 100%, so if you’re trying to really lock things down you need defense in depth, which was also not a new security concept in 2005, but it was one we were, as an industry, less sophisticated about.
denton-scratch · 3 years ago
"Defence in depth" is a term with obvious military origins.

You have a relatively thin front-line of defence, with orders to fall back if they are in danger of being overrun. Then you have a very strong second line, manned with assault troops. As the first line falls back, the second line counterattacks.

This strategy was developed by the Germans in WWI, and later adopted by the Russians.

I disapprove of its use in computer security. There, it means something different: it basically means having multiple lines of defence, without any notion of counterattack.

Gigachad · 3 years ago
The future is probably a two-tiered system like what Apple is showing with Lockdown Mode. Normal users get the full-speed, fully functional system. And those who are at risk of being targeted use a locked-down system with fewer convenience features but more security.

Along with better languages and tooling ruling out entire classes of exploits.

bartread · 3 years ago
> #4) Hacking is Cool

As an old I strongly object to the corruption of the terms "hacking" and "hacker" in the diatribe following this heading. I'm a fan of hacker culture, in the old sense, and encourage our developers to adopt a hacker mindset when approaching the problems they're trying to solve. Hacking is cool.

tibanne · 3 years ago
As an old :D
donatj · 3 years ago
> Wouldn't it be more sensible to learn how to design security systems that are hack-proof than to learn how to identify security systems that are dumb?

That’s like saying “Why don’t they just design locks that are unpickable?”

They’ve been working on that for a while. But you need to know what you’re protecting against. Anyone who watches the LockPickingLawyer knows about the swaths of new locks vulnerable to comb attacks - a simple attack that had been solved for almost a hundred years, but which major lock manufacturers somehow forgot about.

You can’t build something safe without considering potential vulnerabilities, that’s just a frustratingly naive thing to say.

albntomat0 · 3 years ago
To take the strongest form of the author’s argument, his point is that it’s not possible to take a pile of terrible code with no security and fix all the problems in it. It’s better to architect it in a way that provides security (e.g. least privilege everywhere, sandboxing, memory-safe languages, etc.).

I think the author could have phrased it better, in that the best approach is having a good security design, and then taking out all the bugs it couldn’t cover.

superkuh · 3 years ago
Back in 2005 the idea that you shouldn't run every bit of executable code sent to you was drilled into people. Nowadays you can't use a commercial/institutional website without doing the modern equivalent of opening random email attachments.
Gigachad · 3 years ago
You also use an OS and browser that are space-age technology compared to what they had in 2005. Back then a kid could write an email that installed a rootkit on your computer. Now you'd get paid $100k+ if you could work out how to do that.

It also used to be common knowledge that if someone has physical access to your device, it's game over. That is rapidly becoming untrue. If I hand my MacBook to a friend for a day, I can be quite confident they haven't been able to defeat the boot-chain security to replace my kernel with a malware version, like you trivially could in pre-Secure-Boot environments.

Another piece of common advice was to not use public wifi because anyone could steal your password or credit card details. Security advice from 2005 really hasn't held up much at all.

mb7733 · 3 years ago
But the client-side code in a web app runs within the browser sandbox, which is not equivalent to running a random exe... Unless you meant something else?
superkuh · 3 years ago
Speculative execution, sandbox escapes, etc., etc. I thought everyone (myself included) stopped believing in the power of VMs/containers/sandboxes to protect you when all that happened (and kept happening). And it's just getting worse as the JS engines get access to more and more bare-metal features and become a true OS in more than just spirit.

Thus all the crazy insistence on CA TLS in modern web protocols like HTTP/3, which can't even establish a connection without CA-based TLS hand-holding.

Animats · 3 years ago
(2005)

"Default Deny" was, for a while, called "App Store". However, the app store vendors have done much better at keeping out things for competitive reasons than at keeping out things for security reasons.