Obscurity Is a Valid Security Layer

  • Yes... This is well known and not actually controversial at all. The only people who are against adding an additional layer of security are the ones who don't actually understand the concept; they've only heard "security through obscurity is bad." Those people shouldn't be securing systems.

    For example, shutting up chatty webservers is a good and well-established security practice (stuff like removing x-powered-by response headers; a quick sketch follows at the end of this comment)[1]. It's one of the security policies of the government systems I work on. But... it's security through obscurity. It's also far from the only practice a website uses to keep itself secure.

    I don't know if it's true, but I've also heard that the NSA doesn't publish some of their physical addresses and the highway exits are unmarked - that's security through obscurity. Again, that doesn't mean they go ahead and leave the doors unlocked.

    Another recommended security practice: don't use usernames like 'root,' 'admin,' etc.

    In meatspace there's the advice "don't leave valuables in your car in plain sight." That's uncontroversial, but it's also security through obscurity - and covering up your iPad when you leave it in the car doesn't mean you don't lock your doors.

    But the prerequisite is really, actually understanding security as a concept, including understanding the tradeoffs. Without that understanding you're never going to succeed in securing any system.

    [1] https://www.troyhunt.com/shhh-dont-let-your-response-headers...
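
    Since the comment mentions quieting chatty webservers, here's a minimal sketch of the header-stripping idea - a generic Python WSGI middleware, not the poster's actual setup; the header list is only illustrative:

      # Sketch: strip identifying headers from any WSGI app's responses.
      # The header names below are illustrative, not exhaustive; a front-end
      # proxy (nginx etc.) may add its own headers and needs its own config.
      REVEALING = {"x-powered-by", "server", "x-aspnet-version"}

      class HeaderScrubber:
          def __init__(self, app):
              self.app = app

          def __call__(self, environ, start_response):
              def scrubbed_start_response(status, headers, exc_info=None):
                  headers = [(name, value) for name, value in headers
                             if name.lower() not in REVEALING]
                  return start_response(status, headers, exc_info)
              return self.app(environ, scrubbed_start_response)

      # usage: app = HeaderScrubber(app)   # wrap your existing WSGI app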

  • Most attacks are just scripts that constantly scan everything looking for services on well-known ports. This sort of attack isn't dangerous if you've got the basics right, so obscurity gives you nothing very useful. I guess it might result in less noise in the logs, which is nice, but it's not 'more secure'.

    The far less common but much more dangerous attack is a malicious third party intent on gaining access to your servers specifically. Hiding a service on a different port isn't even going to slow that attacker down - they'll use a port scanner to find every port that's listening (see the sketch at the end of this comment). The service is going to be found regardless of whether or not you've changed the port. You could certainly mitigate the problem by modifying the service not to output anything until the user is authenticated, and you can use a port knocking strategy to stop an attacker connecting on the first try, but those aren't really 'obscurity' per se.

    That's not to say you shouldn't do it if you want to; I'm just not sure it actually makes anything more secure.
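
    To illustrate, a rough sketch (Python standard library only; the address, timeout, and worker count are arbitrary) of the kind of connect scan that finds a listening service no matter where it moved - and note that a stock sshd announces itself with a version banner before any authentication:

      # Sketch: naive TCP connect scan + banner grab. A real attacker would
      # use nmap/masscan, but the principle is the same: every listening
      # port is found, and sshd identifies itself before you even log in.
      import socket
      from concurrent.futures import ThreadPoolExecutor

      TARGET = "192.0.2.10"          # illustrative address

      def probe(port):
          try:
              with socket.create_connection((TARGET, port), timeout=0.5) as s:
                  s.settimeout(0.5)
                  try:
                      banner = s.recv(64)   # sshd sends "SSH-2.0-..." unprompted
                  except socket.timeout:
                      banner = b""
                  return port, banner
          except OSError:
              return None               # closed or filtered port

      with ThreadPoolExecutor(max_workers=200) as pool:
          for result in pool.map(probe, range(1, 65536)):
              if result:
                  port, banner = result
                  print(port, banner.decode(errors="replace").strip())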

  • This example is good, but my problem with obscurity, especially in legacy products, is this: complacence.

    A product's perceived security /= a product's actual security. Obfuscation can lead to complacence, whereas transparency leads to paranoia, which is no bad thing in this domain. By adding an obfuscation layer, we give bad code a place to hide.

  • The rule of avoiding "security through obscurity" is not 1) "you should let a potential attacker know everything about your system", but 2) "your system must be designed so that even if an attacker knows everything about it (except the keys/passwords/other secrets), they still cannot gain access". Ordinarily people should be aiming at point 2. Since it can occasionally happen that a system is found vulnerable, obscurity layers can, as others have noted, buy some time. That can be enough to restore point 2 before it is too late, so in this scenario obscurity plays a useful role.

    In other words, you should always assume that "given enough time, a determined attacker can learn anything about your system".

  • As I'm sure someone else on this thread has observed, this is a silly example, because the SSH example forgets the denominator, which would show that even with 18,000 attack requests, the probability of a compromise on a properly configured system is nonexistent - and if your system isn't configured properly, SSH becomes an example of obscurity layered on 'instead of' proper security.
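
    A back-of-the-envelope for that denominator (the password length and character set are assumptions, just to show the order of magnitude):

      # 18,000 guesses against even a modest random password is nothing.
      attempts = 18_000
      keyspace = 62 ** 12            # random 12-char alphanumeric password
      print(attempts / keyspace)     # ~5.6e-18
      # and with password auth disabled entirely (key-only), online guessing
      # is off the table regardless of which port sshd listens on.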

  • If you like, you can put the laziness of attackers in your threat model.

    Most attackers are just systems that are scanning parts of the internet for the low-hanging fruit. They want easy targets, they don't want to spend time on your systems, and they like using their usual tools that work for everyone else. They aren't going to put in the effort to work out your slightly different hashing method and build a GPU-based cracker for it. They aren't going to employ a giant network to bypass fail2ban. They aren't looking for nonstandard ports. Etc, etc.

    Yes, you can hypothetically have an attacker that works around all your obfuscation, but it simply requires much more effort. By employing these kinds of techniques, you beat the script kiddies and the automated systems, which in my experience is 99% of attackers.

  • It's a valid additional security layer. If it's not displacing other things you should be doing, it probably adds value.

    His example of moving ssh to a non-default port is compelling.

  • I like to say that obscurity should not be used _for_ security but in _addition_ to security.

    For example, running ssh on a non-default port. It's obscurity. But it should still have correct key strengths and all the same settings as if it were running on the default port. It shouldn't be weakened somehow just because it is running on that port.

    So why run it on a non-default port, then? Perhaps to get less log noise. It doesn't add to security, but it makes parsing the logs easier, because there's less stuff to search through.
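
    On the log-noise point, a small sketch of the kind of parsing that gets easier (the path and message format assume a Debian-style /var/log/auth.log, which may not match your setup):

      # Count failed SSH logins per source IP from a syslog-style auth log.
      # Moving sshd off port 22 mostly just shrinks this output.
      import re
      from collections import Counter

      PATTERN = re.compile(r"Failed password for .* from (\S+)")
      hits = Counter()
      with open("/var/log/auth.log") as log:
          for line in log:
              match = PATTERN.search(line)
              if match:
                  hits[match.group(1)] += 1

      for ip, count in hits.most_common(10):
          print(f"{count:6d}  {ip}")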

  • No, it is not, because obscurity usually assumes human limitations on information gathering and searching. A layer that would take a human a lifetime to search through is non-existent for a proper machine search. The hidden folder in a sea of a thousand folders is not hidden from a machine.

    Obscurity was a valid layer while we did not have machines to eliminate it. Now it's gone, and what remains is a lingering illusion created by our own limitations.

  • I think the analogies tend to confuse the difference between obscurity on one hand, and randomness in the algorithm on the other.

    With cryptography, by design, there will always be hidden "obscure" secrets that can be used to break into the system: passwords, private keys, etc. The useful mathematical insight of cryptography is to isolate the "obscurity" into these secret bits and to pick them randomly with high entropy, while not necessarily assuming the rest of the algorithm is hidden (see the back-of-the-envelope numbers at the end of this comment).

    The physical examples of decoy vehicles or randomizing one's route are examples of cryptographic protocols, not security via obscurity. You can tell because the algorithm is public but there are some randomly-chosen bits that are secret.

    I'm not disagreeing with the core concept, but I don't see that the ssh example is very convincing either -- it seems to also illustrate the danger of false confidence when using security by obscurity....
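
    To put rough numbers on that separation (purely illustrative): a hidden port is worth a handful of secret bits at best, while a randomly chosen key concentrates the "obscurity" where cryptography wants it:

      import math, secrets

      # "Obscurity" of a non-default port: at most ~16 bits, and far less in
      # practice since attackers scan the whole range anyway.
      print(math.log2(65535))              # ~16.0 bits

      # Obscurity concentrated into a randomly chosen secret, as cryptography
      # intends: a 32-byte key is 256 bits of entropy.
      key = secrets.token_bytes(32)
      print(len(key) * 8)                  # 256 bits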

  • Obscurity buys you time, and that's it.

  • I think security through obscurity can be a massive deterrent for all but the most dedicated attackers.

    For example, say that I not only move ssh to port 24, but it's also completely disabled by default. Then I have a small script scanning ICMP logs, looking for a ping with a particular payload size, and if it sees one, it enables the ssh server for 30 seconds. If no one opens an ssh connection in that window, it re-disables (a rough sketch of this follows at the end of this comment).

    How would anyone besides an insider even figure out how to enable your ssh port let alone try to break in? Sure, if this method became widespread the script kiddies would adapt accordingly and it would no longer be as effective, but staying one step ahead of the kiddies is pretty easy.
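
    A rough sketch of that scheme (hypothetical throughout: it listens on a raw socket rather than scanning logs just to stay self-contained, needs root, and the magic payload size, 30-second window, and service unit name are all made up for illustration):

      # Sketch: watch for an ICMP echo request with a "magic" payload size,
      # then start sshd for a short window and stop it again. Sessions that
      # were established during the window survive the stop; new ones won't.
      import socket, subprocess, time

      MAGIC_PAYLOAD_LEN = 1337      # e.g. sender runs: ping -c1 -s 1337 host
      WINDOW_SECONDS = 30

      sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
      while True:
          packet, addr = sock.recvfrom(65535)
          ihl = (packet[0] & 0x0F) * 4              # IPv4 header length
          is_echo_request = packet[ihl] == 8        # ICMP type 8
          payload_len = len(packet) - ihl - 8       # minus 8-byte ICMP header
          if is_echo_request and payload_len == MAGIC_PAYLOAD_LEN:
              subprocess.run(["systemctl", "start", "ssh"], check=False)
              time.sleep(WINDOW_SECONDS)            # leave the window open
              subprocess.run(["systemctl", "stop", "ssh"], check=False)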

  • Very valid points.

    Reminds me of the oft-repeated phrase, "Goto considered harmful!", regardless of its valid use-cases or of the context in which that original paper was published. I mean jeeze, even the Linux kernel uses GOTO on occasion for error cleanup.

  • I would say it is not only valid but a very interesting method to deal with 0-day exploits and automatic scanners.

    I have a number of services running at home, all outside the standard ports - SIP is on 5099 (the remote gateway is on 5088), SSH on 5225, etc. - and the difference in the number of attempts to log into my box (and make international calls..) is huge. In fact, I have not had a single attempt to put a call through my Asterisk box since I moved the ports outside the default range.

    Of course, it's not the only security measure, but I'd argue it can be as important and as effective as any other.

  • Obscurity is a "security layer" in the same way as camouflage - that is to say, it doesn't improve security, it just "hides" the thing that you were actually supposed to secure. It can easily hurt security, too, as often people depend on obscurity as if it were a real security measure, and are defeated by a tiny amount of effort on the part of an attacker. You're an idiot if you rely on obscurity.

  • This is one of those nuanced things that can't be generally applied to everything. Operating SSH on a port other than 22 may protect you from random bots/scripts, but it won't protect you from a determined attacker. In the real world, misdirection like operating services on non-standard ports doesn't get you that far.

  • > So, given this highly effective armor, would the danger to the tank somehow increase if it were to be painted the same color as its surroundings?

    If there were a crowd of script kiddies rapping on the armour of every tank they could see, then yes, making your tank less visible would endanger it. The internet is different from the battlefield.

    > Is anyone willing to argue that someone unleashing such an attack would be equally likely to launch it against non-standard port vs. port 22? If not, then your risk goes down by not being there, it’s that simple.

    Yes, I'm willing to argue that. It sounds like you were being attacked by 17,995 dumb bots and 5 somewhat less dumb bots and/or genuinely sophisticated attackers. The former aren't going to pick up the zero-day.

    > at some point of diminishing return for impact reduction it is likely to become a good idea to reduce likelihood as well.

    Disagree. Obscurity-based methods have such a poor cost/benefit that they're likely to never be a good choice.

  • Well duh, no matter how good your lock is, hiding the keyhole itself will improve security.

  • There are so many problems with this piece, I hardly know where to begin.

    Kerckhoffs' Principle states that a system is secure if and only if its security architecture (as in, not the keys) can be made publicly available and non-key-holding attackers are still literally unable to successfully attack the system in spite of their knowledge of that architecture.

    Battlefield examples are horrible analogies here. To take an extreme example, if I drop a nuke on an enemy soldier, he's going to die. If I drop a nuke on a tank, it's going to vaporize. There is literally no amount of armor in the world that can create an unattackable battlefield-security architecture, which is the whole reason why militaries rely on camouflage. The use of camouflage is a tacit admission that "yes, in the real world, something could successfully attack us, so we need to rely on other measures."

    Modern security engineers don't mindlessly spend time and money to "improve their security posture" without an appreciation for the consequences. They understand that the A in CIA stands for Availability, and that not using the default ports hurts legitimate users who expect the default and are confused by its absence far more than it foils attackers. They understand that security engineering is about raising the cost of mounting an attack above the value of the target, and they weigh the cost of new security measures against the benefit of those measures (because it now costs $X > $old to successfully attack the target) and against the expected resources of an attacker (by running detailed risk and threat analyses to identify potential adversaries and estimate their capabilities). If it costs $X to attack a target which is worth $Y < $X, and your adversaries only have $Z < $Y < $X to spend on an attack, then spending $any to further "improve your security posture" is not just irrational and indefensible but ultimately destructive to the very target you are supposed to be protecting, because those resources could be spent more productively elsewhere to the benefit of the target (a toy version of this comparison appears at the end of this comment).

    Which brings me to the presidential convoy example. Which vehicle the president is in is not a secret key in the president's security architecture, because knowing which car the president is in does not easily and magically give you access to the president. The point of having the additional obfuscation of additional vehicles is about raising the cost of a successful attack. Let's say the attacker's "nuke" is a shoulder-mounted anti-tank missile which will successfully destroy the target. If there's only one vehicle, then the attacker only needs one missile. But if the convoy has three vehicles, then a successful attack will cost more than three times as much - not just the cost of the additional missiles, but also the cost of finding additional trustworthy people to carry the additional missiles and carry out the attack, plus the cost of training and coordinating the attackers to work in concert and successfully carry out the attack, plus the additional risk of the plans accidentally leaking due to additional people being involved in the planning and execution of the attack.

    Changing from port 22 to port 24 does absolutely nothing to raise the cost for anyone but the opportunistic script-kiddie hacker who is paying virtually $0 to add your public IPs to a list of targets. Dedicated internal threats will be aware of the port change, and dedicated external threats will become aware of it when they swipe an unencrypted employee laptop or phish a common password - and you will not be able to change the ports on all your servers from 24 to something else without inflicting massive pain on every legitimate user whose machines are configured to expect 24 and suddenly won't connect anymore.
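
    The cost argument above boils down to a couple of comparisons; here is a toy version (the function and all the numbers are made up purely for illustration):

      # Toy model of the cost/benefit reasoning above: if attacking already
      # costs more than the target is worth, or more than the adversary has,
      # another "security posture" measure buys the defender nothing.
      def attack_is_rational(attack_cost, target_value, attacker_budget):
          return attacker_budget >= attack_cost and target_value > attack_cost

      # Illustrative numbers only: X = attack cost, Y = target value, Z = budget.
      X, Y, Z = 1_000_000, 500_000, 200_000      # Z < Y < X, as in the comment
      print(attack_is_rational(X, Y, Z))          # False: no rational attack
      # Raising X further (at real cost to the defender) changes nothing here,
      # which is the point about wasted "posture" spending.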

  • Right, you shouldn't use it as your only security, but it's fine to use in conjunction with other things.

  • Now them's fighting words.

  • So is client-side validation. Anything qualifies as a "Valid Security Layer" as long as it prevents your grandma from attacking your system. The layer that protects against the most motivated attacker is the one usually known as "security".

  • I have always been amused that folks who say "security through obscurity is stupid" are never willing to give me their passwords.

    It's all about threat prioritization and defense in depth.