tptacek · a year ago
We're doing this again, I see.

https://hn.algolia.com/?q=six+dumbest+ideas+in+computer+secu...

You can pick this apart, but the thing I always want to call out is the subtext here about vulnerability research, which Ranum opposed. At the time (the late 90s and early aughts) Marcus Ranum and Bruce Schneier were the intellectual champions of the idea that disclosure of vulnerabilities did more harm than good, and that vendors, not outside researchers, should do all of that work.

Needless to say, that perspective didn't prove out.

It's interesting that you could bundle up external full-disclosure vulnerability research under the aegis of "hacking" in 2002, but you couldn't do that at all today: all of the Big Four academic conferences on security (and, obviously, all the cryptography literature, though that was true at the time too) host offensive research today.

ggm · a year ago
Maybe they were right for their time? I'm not arguing that; I just posit that post-fact rationalisation about decisions made in the past has to be weighed against the evidence available at the time.

The network exploded in size, and the number of participants in the field exploded too.

michaelt · a year ago
It was 100% a reasonable-sounding theory before we knew any better.

In the real world, if you saw someone going around a car park trying the door of every car, you'd call the cops - not praise them as a security researcher investigating insecure car doors.

And in the imagination of idealists, the idea of a company covering up a security vulnerability or just not bothering to fix it was inconceivable. The problems were instead things like how to distribute the security patches when your customers brought boxed floppy disks from retail stores.

It just turns out that in practice vendors are less diligent and professional than was hoped; the car door handles get jiggled a hundred times a day, the people doing it are untraceable, and the cops can't do anything.

andrecarini · a year ago
> the Big Four academic conferences on security

Which ones are those?

mici · a year ago
IEEE S&P, USENIX Security, ACM CCS, NDSS
ericpauley · a year ago
Completely agree that offensive research has (for better or for worse) become a mainstay at the major venues.

As a result, we’re continually seeing negative externalities from these disclosures in the form of active exploitation. Unfortunately vendors are often too unskilled or obstinate to properly respond to disclosure from academics.

For their part academics have room to improve as well. Rather than the pendulum swinging back the other way, I anticipate that the majors will eventually have more involved expectations for reducing harm from disclosures, such as by expanding the scope of the “vendor” to other possible mitigating parties, like OS or Firewall vendors.

bawolff · a year ago
> As a result, we’re continually seeing negative externalities from these disclosures in the form of active exploitation.

That assumes that without these disclosures we wouldn't see active exploits. I'm not sure I agree with that. I think bad actors are perfectly capable of finding exploits by themselves. I suspect the total number of active exploits (and especially targeted exploits) would be much higher without these disclosures.

tptacek · a year ago
I was going to respond in detail to this, but realized I'd be recapitulating an age-old debate about full- vs. "responsible-" disclosure, and it occurred to me that I haven't been in one of those debates in many years, because I think the issue is dead and buried.
lobsang · a year ago
Maybe I missed it, but I was surprised there was no mention of passwords.

Mandatory password composition rules (excluding minimum length) and rotating passwords, as well as all attempts at "replacing passwords", are inherently dumb in my opinion.

The first has obvious consequences (people writing passwords down, reusing the same passwords, adding 1), which leads to the second, which has horrible/confusing UX (no, I don't want to have my phone/random token generator on me any time I try to do something) and defaults to "passwords" anyway.

Please just let me choose a password of greater than X length, containing or not containing any characters I choose. That way I can actually remember it when I'm not using my phone/computer, in a foreign country, etc.

II2II · a year ago
> Mandatory password composition rules (excluding minimum length) and rotating passwords, as well as all attempts at "replacing passwords", are inherently dumb in my opinion.

I suspect that rotating passwords was a good idea at the time. There were some pretty poor security practices several decades ago, like sending passwords as clear text, which took decades to resolve. There are also people who share passwords like candy. I'm not talking about sharing passwords to a streaming service you subscribe to; I'm talking about sharing access to critical resources with colleagues within an organization. I mean, it's still pretty bad, which is why I disagree with them dismissing educating end users. Sure, some stuff can be resolved via technical means. They gave examples of that. Yet the social problems are rarely solvable via technical means (e.g. password sharing).

marshray · a year ago
Much of the advice around passwords comes from time-sharing systems and predates the internet.

Rules like "don't write passwords down," "don't show them on the screen," and "change them every N days" all make a lot more sense if you're managing a bank branch's open-plan office with hardwired terminals.

Jerrrrrrry · a year ago

> I suspect that rotating passwords was a good idea at the time.

Yes, when all password hashes were available to all users and therefore had an expected brute-force/expiration date.

It is just another evolutionary artifact of a developing technology colliding with messy humans.

Repeated truisms, especially in compsci, can be dangerous.

NIST has finally understood that complex password requirements decrease security, because nobody is attacking the entropy space - they are attacking the post-it note/notepad text file instead.

This is actually a good example of the opposite case of Chesterton's Fence:

https://fs.blog/chestertons-fence/

raesene9 · a year ago
TBH where you see this kind of thing (mandatory periodic password rotation every month or two) being recommended, it's people not keeping up with even regulators view of good security practice.

Both NIST in the US (https://pages.nist.gov/800-63-FAQ/) and NCSC in the UK (https://www.ncsc.gov.uk/collection/passwords/updating-your-a...) have quite decent guidance that doesn't have that kind of requirement.

crngefest · a year ago
Well, my experience working in the industry is that almost no company uses good security practices or goes beyond some outdated checklists - a huge number want to rotate passwords, disallow/require special characters, lock out users after X attempts, or disallow users from choosing a password they used previously (never understood that one).

I think the number of orgs that follow best practices from NIST etc is pretty low.

temporallobe · a year ago
I've been preaching this message for many years now. For example, since password generators basically make keys that can't be remembered, this has led to the advent of password managers, all protected by a single password. So your single point of failure is now just ONE password, and the consequence is that an attacker who gets it has access to all of your passwords.

The n-tries lockout rule is much more effective anyway, as it breaks the brute-force attack vector in most cases. I am not a cybersecurity expert, so perhaps there are cases where high-complexity, long passwords may make a difference.

Not to mention MFA makes most of this moot anyway.

nouveaux · a year ago
Most of us can't remember more than one password. This means that if one site is compromised, then the attacker now has access to multiple sites. A password manager mitigates this issue.
wruza · a year ago
My Bitwarden plugin locks itself after a few minutes of inactivity. New installations are protected by TOTP. So one has to physically be at one of my devices within a few minutes after I leave, even if they have the password. This reduces the set of possible attackers to a few people that I have to trust anyway. Also, I can lock/log out manually if the situation suggests it. Or not log in at all and instead type the password from my phone screen.

I understand the conceptual risk of storing everything behind a single “door”. That’s not ideal. But in practice, circumstances force you to create passwords, expose passwords, reset passwords, so you cannot remember them all. You either write them down (where? how secure?) or resort to having only a few “that you usually use”.

Password managers solve the “where? how secure?” part. They don’t solve security, they help you to not do stupid things under pressure.

borski · a year ago
> so your single point of failure is now just ONE password, the consequences of which would be that an attacker would have access to all of your passwords.

Most managers have 2FA, or an offline key, to prevent this issue, and encrypt your passwords at rest so that without that key (and the password) the database is useless.
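
As a rough illustration of the "encrypted at rest behind a derived key" idea (not any particular manager's actual scheme; the KDF choice and iteration count here are just assumptions), a minimal sketch in Python with the `cryptography` package:

```python
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def vault_key(master_password: str, salt: bytes) -> bytes:
    # Stretch the master password into a 32-byte key; the iteration count
    # is an illustrative choice, not any vendor's real setting.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(master_password.encode()))

salt = os.urandom(16)                            # stored alongside the vault
key = vault_key("correct horse battery staple", salt)

vault = Fernet(key)
blob = vault.encrypt(b"example.com: hunter2")    # ciphertext is what sits on disk
print(vault.decrypt(blob))                       # only recoverable with the master password
```

Without the master password (and salt), the stored blob is just noise, which is what makes the "one password to rule them all" trade-off workable in practice.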

unethical_ban · a year ago
The one password and the app that uses it are more secure than most other applications. Lock out is just another term for DDoS if a bad actor knows usernames.

I love proton pass.

red_admiral · a year ago
I was going to say passwords too ... but now I think passkeys would be a better candidate for dumbest ideas. For the average user, I expect they will cause no end of confusion.
cqqxo4zV46cp · a year ago
That’s just recency bias.
uconnectlol · a year ago
Password policies are a joke since you use 5 websites and they will have 5 policies.

1. Banks etc. will not allow special characters, because that's a "hacking attempt". So Firefox's password generator, for example, won't work. The user works around this by typing in suckmyDICK123!! and his password still never gets hacked, because there usually isn't enough bruteforce throughput even with 1000 proxies, or you'll just get your account locked forever once someone attempts to log into it 5 times, and those 1000 IPs only get between 0.5-3 tries each with today's snakeoil appliances on the network. There's also the fact that most people already know by now that "bots will try your passwords at a superhuman rate". Then there's the fact that not even one of these password policies stops users from choosing bad passwords. This is simply a case of "responsible" people wasting tons of time trying to solve reality. These people who claim to know better than you have not thought this through and have definitely not thought about much at all.

2. For everything that isn't your one or two sensitive things, like the bank, you want to use the same password. For example, the 80 games you played for one minute that obnoxiously require making an account (for the bullshit non-game aspects of the game, such as in-game item trading). Most have custom GUIs too, and you can't paste into them. You could use a password manager for these, but why bother? You just use the same pass for all of them.

ivlad · a year ago
Dear user with password "password11111111111", logging in from a random computer with two password stealers active, from a foreign country, and not willing to use MFA: the incident response team will thank you and prepare a warm welcome when you are back in the office.

Honestly, this comment shows that user education does not work.

belinder · a year ago
Minimum length is dumb too because people just append 1 until it fits
hunter2_ · a year ago
But when someone tries to attack such a password, as long as whatever the user devised isn't represented by an entry in the attack dictionary, the attack strategy falls back to brute force, at which point a repetition scheme is irrelevant to attack time. Granted, if I were creating a repetitive password to meet a length requirement without high mental load, I'd repeat a more interesting part over and over, not a single character.
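
Back-of-the-envelope numbers for that point, assuming a purely illustrative offline guess rate: once the attacker is reduced to exhaustive search, the cost depends only on length and alphabet size, not on whether the password repeats a pattern.

```python
def brute_force_years(length: int, alphabet: int, guesses_per_second: float) -> float:
    # Exhaustive search cost depends only on length and alphabet size,
    # not on whether the password happens to repeat something.
    keyspace = alphabet ** length
    return keyspace / guesses_per_second / (3600 * 24 * 365)

RATE = 1e10  # assumed offline guess rate (guesses/second); purely illustrative

print(brute_force_years(8, 95, RATE))   # ~0.02 years (about a week) for 8 printable-ASCII chars
print(brute_force_years(16, 26, RATE))  # on the order of 10^5 years for 16 lowercase letters
```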
NeoTar · a year ago
Undisclosed minimum length is particularly egregious.

It's very frustrating when you've got a secure system and you spend a few minutes thinking up a great, memorable, secure password, only to realize that it has too few (or worse, too many!) characters.

Even worse when the length requirements are incompatible with your password generation tool.

rekabis · a year ago
I would love to see most drop-in/bolt-on authentication packages (such as DotNet's Identity system) adopt "bitwise complexity" as the only rule: not based on length or content, only the mathematical complexity of the bits used. KeePass uses this as an estimate of password "goodness", and it's altered my entire view of how appropriate any one password can be.
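
A naive sketch of what an entropy-only rule could look like (the length × log2(pool) estimate really only holds for randomly generated passwords; KeePass's estimator for human-chosen ones is considerably more sophisticated, and the 80-bit threshold below is an arbitrary assumption):

```python
import math
import string

def estimated_bits(password: str) -> float:
    # Naive estimate: bits = length * log2(size of the character pool drawn from).
    # Only meaningful if the password was chosen uniformly at random.
    pool = 0
    if any(c in string.ascii_lowercase for c in password): pool += 26
    if any(c in string.ascii_uppercase for c in password): pool += 26
    if any(c in string.digits for c in password):          pool += 10
    if any(c in string.punctuation for c in password):     pool += len(string.punctuation)
    return len(password) * math.log2(pool) if pool else 0.0

def acceptable(password: str, minimum_bits: float = 80.0) -> bool:
    # The single rule: enough estimated entropy, no composition requirements at all.
    return estimated_bits(password) >= minimum_bits

print(acceptable("Tr0ub4dor&3"))                   # ~72 bits -> False under this threshold
print(acceptable("correct horse battery staple"))  # ~130 bits -> True
```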
fragmede · a year ago
On the other hand, it gave us the password game, so there's that.

https://neal.fun/password-game/

cuu508 · a year ago
> That way I can actually remember it when I'm not using my phone/computer, in a foreign country, etc.

I'd be very wary of logging into accounts on any computer/phone other than my own.

izacus · a year ago
Based on the type of this rant - all security focused, with little thought about the usability of the systems they're talking about - the author would probably be one of those people who mandate password rotation every week with a minimum of 18 characters to "design systems safely by default". Oh, and prevent keyboards from working because they can infect computers via USB or something.

(Yes, I'm commenting on the weird idea about not allowing processes to run without asking - we're now learning from mobile OSes that this isn't practically feasible for the kind of universally useful OS that drove most of computing's growth in the last 30 years.)

hot_gril · a year ago
I don't get how it took until the present day for randomly-generated asymmetric keys to become somewhat commonly used on the Web, etc., in the form of "passkeys" (a confusing name, btw). Password rotation and other rules never worked. Some sites still require a capital letter, number, and symbol, as if 99% of people aren't going to transform "cheese" -> "Cheese1!".
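
The underlying mechanism is just challenge-response with a per-site keypair; a bare-bones sketch using the `cryptography` package (real WebAuthn adds origin binding, attestation, and user verification on top of this):

```python
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the authenticator generates a keypair per site and
# only ever sends the public half to the server.
private_key = Ed25519PrivateKey.generate()   # stays on the user's device
public_key = private_key.public_key()        # stored by the site

# Login: the server sends a fresh random challenge; the device signs it.
challenge = os.urandom(32)
signature = private_key.sign(challenge)

# The server verifies the signature; no reusable secret ever crosses the wire.
try:
    public_key.verify(signature, challenge)
    print("login ok")
except InvalidSignature:
    print("login rejected")
```

Nothing phishable is typed, and a database leak on the server side only exposes public keys.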
JJMcJ · a year ago
> people writing passwords down

Which is better: a strong password written down, or better yet stored in a secure password manager, or a weak password committed to memory?

As usual, XKCD has something to say about it: https://xkcd.com/936/
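
The xkcd scheme is also trivial to do properly with the standard library; a minimal sketch (the word-list path is a placeholder, and any large list works):

```python
import math
import secrets

# Placeholder path; any large word list works (the comic assumes ~2048 words).
with open("/usr/share/dict/words") as f:
    words = sorted({w.strip().lower() for w in f if w.strip().isalpha()})

passphrase = " ".join(secrets.choice(words) for _ in range(4))
bits = 4 * math.log2(len(words))  # entropy comes from the random choice, not from the words being obscure

print(passphrase)
print(f"~{bits:.0f} bits of entropy from a {len(words)}-word list")
```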

kstrauser · a year ago
Hacking is cool. Well, gaining access to someone else's data and systems is not. Learning a system you own so thoroughly that you can find ways to make it misbehave to benefit you is. Picking your neighbor's door lock is uncool. Picking your own is cool. Manipulating a remote computer to give yourself access you shouldn't have is uncool. Manipulating your own to let you do things you're not supposed to be able to do is cool.

That exploration of the edges of possibility is what moves the world ahead. I doubt there's ever been a successful human society that praised staying inside the box.

janalsncm · a year ago
We can say that committing crimes is uncool but there’s definitely something appealing about knowing how to do subversive things like pick a lock, hotwire a car, create weapons, or run John the Ripper.

It effectively turns you into a kind of wizard, unconstrained by the rules everyone else believes are there.

kstrauser · a year ago
Well put. There's something inherently cool in knowledge you're not supposed to have.
Sohcahtoa82 · a year ago
And sometimes, knowing that information is useful for legit scenarios.

When my grandma was moving across the country to move in with my mom, she got one of those portable on-demand storage things, but she put the key in a box that got loaded inside and didn't realize it until the POD got delivered to my mom's place.

I came over with my lock picks and had it open in a couple minutes.

account42 · a year ago
> We can say that committing crimes is uncool

Disagree in general. Laws != morals. Often enough laws are unjust and ignoring them is the cool thing to do.

tinycombinator · a year ago
Manipulating a remote computer to give yourself access you shouldn't have can be cool if that computer was used in phone scam centers, holding the private data of countless elderly victims. Using that access to disrupt said scam business could be incredibly cool (and funny).

It could be technically illegal, and would fall under vigilante justice. But we're not talking about legality here, we're talking about "cool": vigilantes are usually seen as "cool" especially when done from a sense of personal justice. Again, not talking about legal or societal justice.

Hendrikto · a year ago
This is full of very bad takes.

> I know other networks that it is, literally, pointless to "penetration test" because they were designed from the ground up to be permeable only in certain directions and only to certain traffic destined to carefully configured servers running carefully secured software.

"I don't need to test, because I designed, implemented, and configured my system carefully" might be the actual worst security take I've ever heard.

> […] hacking is a social problem. It's not a technology problem, at all.

This is security by obscurity. Also it‘s not always social. Take corporate espionage and nation states for example.

CM30 · a year ago
I think the main problem is that there's usually an unfortunate trade off between usability and security, and most of the issues mentioned as dumb ideas here come from trying to make the system less frustrating for your average user at the expense of security.

For example, default allow is terrible for security, and the cause of many issues in Windows... but many users don't like the idea of having to explicitly permit every new program they install. Heck, when Microsoft added that confirmation, many considered it terrible design that made the software way more annoying to use.

'Default Permit', 'Enumerating Badness' and 'Penetrate and Patch' are unfortunately all defaults because of this: people would rather make it easier/more convenient to use their computer or write software than do what would be best for security.

Personally I'd say that passwords in general are probably one of the dumbest ideas in security though. Like, the very definition of a good password likely means something that's hard to remember, hard to enter on devices without a proper keyboard, and generally inconvenient for the user in almost every way. Is it any wonder that most people pick extremely weak passwords, reuse them for most sites and apps, etc?

But there's no real alternative sadly. Sending links to email means that anyone with access to that compromises everything, though password resets usually mean the same thing anyway. Physical devices for authentication mean the user can't log in from places outside of home that they might want to login from, or they have to carry another trinket around everywhere. And virtually everything requires good opsec, which 99.9% of the population don't really give a toss about...

freeone3000 · a year ago
It's that insight that brought forward passkeys, which have elements of SSO and 2FA-only logins. Apple has fully integrated them, allowing cloud-synced passkeys: on-device for Apple devices, 2FA-only if you've got an Apple device on you. Chrome is also happy to act as a passkey provider. So's Bitwarden. They can't be spoofed, can't be subverted, you choose your provider, and you don't even have to remember anything, because the site can give you the name of the provider you registered with.
thyrsus · a year ago
I recommend using a well respected browser based password manager, protected by a strong password, and having it generate strong passwords that you never think of memorizing. Web sites that disable that with JavaScript on the password field should be liable for damages with added penalties - I'm looking at you, banks.
bpfrh · a year ago
Meh, passwords were a good idea for a long time.

For the first 10 (20?) years there were no devices without a good keyboard.

The big problem, imho, was the idea that passwords had to be complicated and long, e.g. random, alphanumeric, some special chars, and at least 12 characters, while a better solution would have been a few words.

Edit: To be clear, I agree with most of your points about passwords; I just wanted to point out that we often don't appreciate how much tech changed after the smartphone's introduction, and that for the environment before that (computers/laptops) passwords were a good choice.

CM30 · a year ago
That's a fair point. Originally devices generally had decent keyboards, or didn't need passwords.

The rise of not just smartphones, but tablets, online games consoles, smart TVs, smart appliances, etc had a pretty big impact on their usefulness.

moring · a year ago
> Think about it for a couple of minutes: teaching yourself a bunch of exploits and how to use them means you're investing your time in learning a bunch of tools and techniques that are going to go stale as soon as everyone has patched that particular hole.

No, it means that you learn practical aspects alongside theory, and that's very useful.

move-on-by · a year ago
I also took issue with this point. One does not become an author without first learning how to read. The usefulness of reading has not diminished once you publish a book.

You must learn how known exploits work to be able to discover unknown exploits. When the known exploits are patched, your knowledge of how they occurred has not diminished. You may not be able to use them anymore, but surely that was not the goal in learning them.

Sohcahtoa82 · a year ago
Not necessarily.

There are a lot of script kiddies that don't know a damn thing about what TCP is or what an HTTP request looks like, but know how to use LOIC to take down a site.

Zak · a year ago
I'd drop "hacking is cool" from this list and add "trusting the client".

I've seen an increase in attempts to trust the client lately, from mobile apps demanding proof the OS is unmodified to Google's recent attempt to add similar DRM to the web. If your network security model relies on trusting client software, it is broken.
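
A toy illustration of why: anything the official client sends, any other program can send too, so client-side checks and attestations prove nothing by themselves. The endpoint, header, and fields below are made up.

```python
import json
import urllib.request

# The "client" can claim anything it likes; here it asserts a zero price
# and a made-up attestation header, just as the real app would.
payload = {"item": "premium_upgrade", "price": 0}
request = urllib.request.Request(
    "https://api.example.com/checkout",   # hypothetical endpoint
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json", "X-Client-Attested": "true"},
)

# A sound server recomputes the price and checks authorization itself;
# fields and headers asserted by the client carry no security weight.
```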

strangecharm2 · a year ago
It's not about security, it's about control. Modified systems can be used for nefarious purposes, like blocking ads. And Google wouldn't like that.
Zak · a year ago
It's about control for Google and friends. If your bank's app uses SafetyNet, it's probably about some manager's very confused concept of security.
munchausen42 · a year ago
About 'Default Deny': 'It's not much harder to do than 'Default Permit,' but you'll sleep much better at night.'

Great that you, the IT security person, sleeps much better at night. Meanwhile, the rest of the company is super annoyed because nothing ever works without three extra rounds with the IT department. And, btw., the more annoyed people are, the more likely they are to use workarounds that undermine your IT security concept (e.g., think of the typical 'password1', 'password2', 'password3' passwords when you force users to change their password every month).

So no, good IT security does not just mean unplugging the network cable. Good IT security is invisible and unobtrusive for your users, like magic :)

TheRealDunkirk · a year ago
A friend of mine has trouble running a very important vendor application for his department. It stopped working some time ago, so he opened a ticket with IT. It was so confusing to them that it got to the point that they allowed him to run Microsoft's packet capture on his machine. He followed their instructions and captured what was going on. Despite the capture, they were unable to get it working, so out of frustration he sent the capture to me.

Even though our laptops are really locked down, as a dev I get admin on my machine, and I have MSDN, so I downloaded Microsoft's tool, looked over the capture, and discovered that the application was a client/server implementation ON THE LOCAL MACHINE. The front end was talking over networking ports to the back end, which then talked to the vendor's servers. I only recognized it because I had just undergone a lot of pain with my own development workflow after the company started doing "default deny", and it was f*king with me in several ways. Ways that, as you say, I found workarounds for, that they probably aren't aware of.

I told him what to tell IT, and how they could whitelist this application, but he's still having problems. Why am I being vague about the details here? It's not because of confidentiality, though that would apply. No, it's because my friend had been "working with IT" for over a year to get to this point, and THIS WAS TWO YEARS AGO, and I've forgotten a lot of the details. So, to say that it will take "3 extra rounds" is a bit of an understatement when IT starts doing "default deny", at least in legacy manufacturing companies.
pif · a year ago
> Good IT security is invisible and unobtrusive for your users

I wish more IT administrators would use seat belts and airbags as models of security: they impose a tiny, minor annoyance in everyday usage of your car, but their presence is gold when an accident happens.

Instead, most of them consider it normal to prevent you from working in order to hide their ignorance and lack of professionalism.

thyrsus · a year ago
Wise IT admins >know< they are ignorant and design for that. Before an application gets deployed, its requirements need to be learned - and the users rarely know what those requirements are, so cycles of information gathering and specification of permitted behavior ensue. You do not declare the application ready until that process converges, and the business knows and accepts the risks required to operate the application. Few end users know what a CVE is, much less have mitigated them.

I also note that seatbelts and airbags have undergone decades of engineering refinement; give that time to your admins, and your experience will be equally frictionless. Don't expect it to be done as soon as the download finishes.

horsawlarway · a year ago
So much this.

There is a default and unremovable contention between usability and security.

If you are "totally safe" then you are also "utterly useless". Period.

I really, really wish most security folks understood and respected the following idea:

"A ship in harbor is safe, but that is not what ships are built for".

Good security is a trade. Always. You must understand when and where you settle based on what you're trying to do.

clwg · a year ago
Good IT security isn't invisible; it's there to prevent people from deploying poorly designed applications that require unfettered open outbound access to the internet. It's there to champion MFA and work with stakeholders from the start of the process to ensure security from the outset.

Mostly, it's there to identify and mitigate risks for the business. Have you considered that all your applications are considered a liability and new ones that deviate from the norm need to be dealt with on a case by case basis?

RHSeeger · a year ago
But it needs to be a balance. IT policy that costs tremendous amounts of time and resources just isn't viable. Decisions need to be made such that it's possible for people to do their work AND security concerns are addressed; and _both_ sides need to compromise some.

As a simplified example

- You have a client database that has confidential information

- You have some employees that _must_ be able to interact with the data in that database

- You don't want random programs installed on a computer <that has access to that database> to leak the information

You could lock down every computer in the company to not allow application installation. This would likely cause all kinds of problems getting work done.

You could lock down access to the database so nobody has access to it. This also causes all kinds of problems.

You could lock down access to the database to a very specific set of computers and lock down _those_ computers so additional applications cannot be installed on them. This provides something close to a complete lockdown, but with far less impact on the rest of the work.

Sure, it's a stupidly simple example, but it demonstrates the idea that compromises are necessary (for all participants).

darby_nine · a year ago
I think the idea is that if you don't work with engineering or product, people will perceive you as friction rather than protection. Agreeing on processes to deploy new applications should satisfy both parties without restrictions being perceived as an unexpected problem.
cmiles74 · a year ago
I believe a "default deny" policy for security infrastructure around workstations is a good idea. When some new tool that uses a new port or whatever comes into use, the hassle of getting IT to change the security profile is far less expensive than leaking the contents of any particular workstation.

That being said, in my opinion, application servers and other public facing infrastructure should definitely be working under a "default deny" policy. I'm having trouble thinking of situations where this wouldn't be the case.
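
A toy sketch of the difference between the two policies (a made-up rule format, not real firewall configuration): under default deny, anything not explicitly enumerated is dropped, while default permit only blocks the badness you remembered to list.

```python
ALLOW_RULES = [
    ("tcp", 443),  # public HTTPS
    ("tcp", 22),   # SSH (source filtering omitted in this sketch)
]

def default_deny(proto: str, port: int) -> str:
    # Only explicitly enumerated goodness gets through; everything else is dropped.
    return "ALLOW" if (proto, port) in ALLOW_RULES else "DENY"

def default_permit(proto: str, port: int, known_bad: set) -> str:
    # The inverse: enumerate badness and hope the list is complete.
    return "DENY" if (proto, port) in known_bad else "ALLOW"

print(default_deny("tcp", 443))                     # ALLOW
print(default_deny("udp", 31337))                   # DENY
print(default_permit("udp", 31337, {("tcp", 23)}))  # ALLOW - the gap in the badness list
```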

hulitu · a year ago
> When some new tool that uses a new port or whatever comes into use, the hassle of getting IT to change the security profile is far less expensive then leaking the contents of any particular workstation.

Many years ago, in our company's billing system, we had a "Waiting for IT" status. They weren't happy.

Some things took _days_ to get fixed.

eadmund · a year ago
Company IT exists to serve the company. It should not cost more than it benefits.

There’s a balancing act. On the one hand, you don’t want a one-week turnaround to open a port; on the other you don’t want people running webservers on their company desktops with proprietary plans coincidentally sitting on them.

causal · a year ago
The problem is that security making things difficult results in employees resorting to workarounds like running rogue webservers to get their jobs done.

If IT security's KPIs are only things like "number of breaches" without any KPIs like "employee satisfaction", security will deteriorate.

graemep · a year ago
The biggest problem I can see with default deny is that it makes it far harder to get uptake for new protocols once you get to "we only allow ports 80 and 443 through the firewall".
cjalmeida · a year ago
One-week turnaround to open a port would be a dream in most large companies.
kabouseng · a year ago
That's because IT security reports to the C level, and their KPIs are concerned with security and vulnerabilities, but not with the performance or effectiveness of the personnel.

So every time, if there is a choice, security will be prioritized at the cost of personnel performance/effectiveness. And this is how big corporations become less and less effective, to the point where the average employee rarely has a productive day.

7bit · a year ago
> Meanwhile, the rest of the company is super annoyed because nothing ever works without three extra rounds with the IT department

This is such an uninformed and ignorant opinion.

1. Permission concepts don't always involve IT. In fact, they can be designed by IT without ever involving IT again - such is the case in our company.

2. The privacy department sleeps much better knowing that GDPR violations require an extra, deliberate action, rather than being the default. Management sleeps better knowing that confidential projects need to be explicitly shared, instead of someone forgetting to deny access for everybody first. Compliance sleeps better because of all of the above. And users know that data they create is private until explicitly shared.

3. Good IT security is not invisible. Entering a password is a visible step. Approving MFA requests is a visible step. Granting access to resources is a visible step. Teaching users how to identify spam and phishing is a visible step. Or teaching them about good passwords.

munchausen42 · a year ago
Hm, I don't think that passwords are an example of good IT security. There are much better options, like physical tokens, biometric features, passkeys, etc., that are less obtrusive and don't require users to follow certain learned rules and behaviors.

If the security concept is based on educating and teaching people how to behave, it's prone to fail anyway, as there will always be that one uninformed and ignorant person like me who doesn't get the message. As soon as there is one big gaping hole in the wall, the whole fortress becomes useless (case in point: haveibeenpwned.com). Also, good luck teaching everyone in the company how to identify a personalized phishing message crafted by ChatGPT.

For the other two arguments: I don't see how "But we solved it in my company" and "Some other departments also have safety/security-related primary KPIs" justifies that IT security should be allowed to just air-gap the company if it serves these goals.

delusional · a year ago
> Meanwhile, the rest of the company is super annoyed because nothing ever works

Who even cares if they're annoyed? The IT security team gets to sleep at night, but the entire corporation might be operating illegally because they can't file the important compliance report, because somebody fiddled with the firewall rules again.

There is so much more to enterprise security than IT security. Sometimes you don't open a port because "it's the right thing to do" as identified by some process. Sometimes you do it because the alternative RIGHT NOW is failing an audit.

spogbiper · a year ago
> Good IT security is invisible and unobtrusive for your users, like magic

Why is this a standard for "good" IT security but not any other security domain? Would you say good airport security must be invisible and magic? Are you troubled by having to use a keycard or fingerprint to enter secure areas of a building?

Security is always a balance between usability and safety. Expecting the user to be completely unaffected through some magic is unrealistic.

graemep · a year ago
> Would you say good airport security must be invisible and magic?

Very possibly. IMO a lot of the intrusive airport security is security theatre. Things like intelligence do a lot more. Other things we don't notice do too, I suspect.

The thing about the intrusive security is that attackers know about it and can plan around it.

> Are you troubled by having to use a keycard or fingerprint to enter secure areas of a building?

No, but they are simple and easy to use, and have rarely stopped me from doing anything I needed to.

> Security is always a balance between usability and safety. Expecting the user to be completely unaffected through some magic is unrealistic.

Agree entirely.

pc86 · a year ago
If you have two security models that provide identical actual security, and one of them is invisible to the user and the other one is outright user-hostile like the TSA, yes of course the invisible one is better.
w10-1 · a year ago
It is the standard for all security domains - police, army, etc.

I would reword it to say that security should work for the oblivious user, and we should not depend on good user behavior (or fail to defend against malicious or negligent behavior).

I would still say the ideal is for the security interface to prevent problems - like having doors so we don't fall out of cars, or ABS to correct brake inputs.

lencastre · a year ago
That's what I did with my firewall: all outbound traffic is default deny. Then, as the screaming began, I started opening the necessary ports to designated IPs here and there. Now the screaming is not so frequent. A minor hassle… the tricky one is DNS over HTTPS… that is a whack-a-mole if I ever saw one.
michaelcampbell · a year ago
"If you're able to do your job, security/it/infosec/etc isn't doing theirs." Perhaps necessary at times, but true all too often.
cowboylowrez · a year ago
The article is great, but reading some of the anti-security comments is really triggering for me.
manvillej · a year ago
good IT security is invisible, allows me to do everything I need, protects us from every threat, costs nothing, and scales to every possible technology the business buys. /s
