Security through obscurity

In cryptography and computer security, security through obscurity (sometimes security by obscurity) is a controversial principle in security engineering, which attempts to use secrecy (of design, implementation, etc.) to provide security. A system relying on security through obscurity may have theoretical or actual security vulnerabilities, but its owners or designers believe that the flaws are not known, and that attackers are unlikely to find them. The technique stands in contrast with security by design, although many real-world projects include elements of both strategies.

Background

There is scant formal literature on the issue of security through obscurity. Books on security engineering cite Kerckhoffs' doctrine from 1883, if they cite anything at all. For example, Ross Anderson writes, in a discussion of secrecy and openness in Nuclear Command and Control (Ross Anderson, Security Engineering: A Guide to Building Dependable Distributed Systems, John Wiley & Sons, New York, 2001, ISBN 0-471-38922-6, p. 240; http://www.cl.cam.ac.uk/~rja14/book.html):

"[T]he benefits of reducing the likelihood of an accidental war were considered to outweigh the possible benefits of secrecy. This is a modern reincarnation of Kerckhoffs' doctrine, first put forward in the nineteenth century, that the security of a system should depend on its key, not on its design remaining obscure." (Auguste Kerckhoffs, "La Cryptographie Militaire", Journal des Sciences Militaires, 9 January 1883, pp. 5–38; http://www.cl.cam.ac.uk/users/fapp2/kerckhoffs/)

In the field of legal academia, Peter Swire has written about the trade-off between the notion that "security through obscurity is an illusion" and the military notion that "loose lips sink ships" (Peter P. Swire, "A Model for When Disclosure Helps Security: What Is Different About Computer and Network Security?", Journal on Telecommunications and High Technology Law, vol. 2, 2004; http://ssrn.com/abstract=531782), as well as how competition affects the incentives to disclose (Peter P. Swire, "A Theory of Disclosure for Security and Competitive Reasons: Open Source, Proprietary Software, and Government Agencies", Houston Law Review, vol. 42, January 2006; http://ssrn.com/abstract=842228).

The principle of security through obscurity was more generally accepted in cryptographic work in the days when essentially all well-informed cryptographers were employed by national intelligence agencies such as the NSA. Now that cryptographers often work at universities (where researchers publish many, perhaps even nearly all, of their results and publicly test others' designs) or in private industry (where results are more often controlled by patents and copyrights than by secrecy), the argument has lost some of its former popularity. An example is PGP, which was released as source code and is generally regarded (when properly used) as a military-grade cryptosystem. The wide availability of high-quality cryptography was disturbing to the US government, which seems to have been using a security-through-obscurity analysis to support its opposition to such work. Indeed, such reasoning is very often used by lawyers and administrators to justify policies designed to restrict high-quality cryptography to those who are authorized to use it.

Viewpoints

Arguments against

As mentioned above, in cryptography the argument against security by obscurity dates back at least to Kerckhoffs' principle, put forth in 1883 by Auguste Kerckhoffs. The principle holds that the design of a cryptographic system should not require secrecy, and that it should not cause inconvenience if the design falls into the hands of the enemy. This principle has been paraphrased in several ways:

* System designers should assume that the entire design of a security system is known to all attackers, with the exception of the cryptographic key.
* The security of a cipher resides entirely in the cryptographic key.
* Claude Shannon rephrased it as "the enemy knows the system".

If it is true that any secret piece of information constitutes a point of potential compromise, then a system with fewer secrets is more secure. Systems that rely on the secrecy of design or operational details, apart from the cryptographic key, are therefore inherently less secure: vulnerabilities resident in those secret details render the choice of key (e.g., short and simple vs. long and complex) largely irrelevant.
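A minimal sketch of the principle is given below; it assumes the third-party Python "cryptography" package, chosen only for illustration, and any publicly reviewed cipher would serve equally well. The cipher construction is entirely public and widely analysed; the only secret the system depends on is the key.

    # Minimal sketch of Kerckhoffs' principle (assumes the third-party
    # "cryptography" package). The Fernet construction (AES plus an HMAC)
    # is completely public and widely reviewed; the only secret is the key.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # the sole secret in the system
    cipher = Fernet(key)          # the design itself requires no secrecy

    token = cipher.encrypt(b"attack at dawn")
    assert cipher.decrypt(token) == b"attack at dawn"
    # Publishing the algorithm costs nothing; compromise requires the key.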

The related full disclosure philosophy suggests that security flaws should be disclosed as soon as possible, because keeping a discovered flaw secret weakens the protection that the secret cryptographic key provides. In this case there are effectively two keys that grant access: the original cryptographic key and a "key" composed of the newly discovered flaws.

For example, someone who stores a spare key under the doormat, in case they are locked out of the house, is relying on security through obscurity. The theoretical vulnerability is that anybody could break into the house by unlocking the door with that spare key. Furthermore, since burglars often know the likely hiding places, the owner runs a greater risk of burglary by hiding the key in this insecure way. The owner has in effect added another key to the system, and a very easy one to guess: the knowledge that the entry key is stored under the doormat. The "key" is no longer simply possession of the physical key that opens the door; it now also includes knowledge of the physical key's location.

In the past, several algorithms and software systems with secret internal details have seen those details become public. Accidental disclosure has happened several times, for instance in the notable case in which confidential GSM cipher documentation was contributed to the University of Bradford. Furthermore, vulnerabilities have been discovered and exploited in software even when the internal details remained secret. Taken together, such cases suggest that it is difficult or ineffective to keep the details of systems and algorithms secret. Examples include:

*The A5/1 cipher for GSM mobile telephones became public knowledge partly through reverse engineering.
*Details of the RSA Data Security, Inc. (RSADSI; http://www.rsasecurity.com/) cryptographic algorithm software were revealed, probably deliberately, through publication of alleged RC4 source code on Usenet.
*Vulnerabilities in various versions of Microsoft Windows, its default web browser Internet Explorer, and its mail applications Outlook and Outlook Express have caused worldwide problems when computer viruses, Trojan horses, or computer worms have exploited them.
*Kernel Patch Protection (also called PatchGuard) in Microsoft Windows Vista was cracked and exploited within one week of the Vista launch, rendering it useless until fixed.
*Cisco router operating system software was accidentally exposed on a corporate network.
*Details of Diebold Election Systems voting machine software were published on a publicly accessible Web site.
*The once open-source "Doom" port ZDaemon had been renowned for security through obscurity; binary cheats were released, and the source was closed because of this. Though this may have reduced the number of cheats, cheating remains possible and several cheats still exist.

Linus's law, that "many eyes make all bugs shallow", also suggests improved security for algorithms and protocols whose details are published. More people can review the details of such algorithms, identify flaws, and fix the flaws sooner. We would thus expect, and Linus Torvalds claims it is actually true, that the frequency and severity of security compromises will be lower for open software than for proprietary or secret software.

Operators, developers, and vendors of systems that rely on security by obscurity may keep secret the fact that their system is broken, to avoid destroying confidence in their service or product and thus its marketability; this may amount to fraudulent misrepresentation of the security of their products. Instances have been known, from at least the 1960s, of companies delaying the release of fixes or patches to suit their corporate priorities rather than their customers' concerns or risks. Application of the law in this respect has been less than vigorous, in part because vendors almost universally impose terms of use as part of licensing contracts in order to disclaim their apparent obligations under statutes and common law that require fitness for use or similar quality standards.

Arguments for

Perfect or "unbroken" solutions provide security, but absolutes may be difficult to obtain. Although relying solely on security through obscurity is a very poor design decision, keeping secret some of the details of an otherwise well-engineered system may be a reasonable tactic as part of a defense in depth strategy. For example, security through obscurity may (but cannot be guaranteed to) act as a temporary "speed bump" for attackers while a resolution to a known security issue is implemented. Here, the goal is simply to reduce the short-run risk of exploitation of a vulnerability in the main components of the system.
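A minimal sketch of such a "speed bump", using only the Python standard library, is given below. It temporarily relocates a hypothetical vulnerable admin endpoint to an unguessable path and logs probes of the old path while a real patch is prepared; the path, port, and handler names are illustrative assumptions, and this buys time rather than providing security on its own.

    # Illustrative "speed bump" only: relocate a vulnerable admin endpoint to
    # an obscure path while a proper fix is prepared. Not a substitute for
    # patching. Path, port, and handler names are assumptions.
    import logging
    import secrets
    from http.server import BaseHTTPRequestHandler, HTTPServer

    logging.basicConfig(level=logging.INFO)

    # Random path shared only with administrators out of band.
    OBSCURE_ADMIN_PATH = "/admin-" + secrets.token_urlsafe(8)
    logging.info("temporary admin path: %s", OBSCURE_ADMIN_PATH)

    class SpeedBumpHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/admin":
                # The old, guessable path: refuse and record the attempt.
                logging.warning("probe of retired /admin from %s",
                                self.client_address[0])
                self.send_error(404)
            elif self.path == OBSCURE_ADMIN_PATH:
                self.send_response(200)
                self.end_headers()
                self.wfile.write(b"admin console (temporarily relocated)\n")
            else:
                self.send_error(404)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), SpeedBumpHandler).serve_forever()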

Security through obscurity can also be used to create a risk that detects or deters potential attackers. For example, consider a computer network that appears to exhibit a known vulnerability. Lacking knowledge of the target's security layout, the attacker must decide whether or not to attempt to exploit the vulnerability. If the system is set up to detect attempts against this vulnerability, it will recognize that it is under attack and can respond, either by locking itself down until administrators have a chance to react, by monitoring the attack and tracing the assailant, or by disconnecting the attacker. The essence of this principle is that, by raising the time or risk involved, the defender denies the attacker the information required to make a solid risk-reward decision about whether to attack in the first place.

A variant of the defense described in the previous paragraph is to use two layers of detection of the exploit, both of which are kept secret but one of which is deliberately allowed to "leak". The idea is to give the attacker a false sense of confidence that the obscurity has been uncovered and defeated. An example of where this would be used is as part of a honeypot, as sketched below. In neither of these cases is there any actual reliance on obscurity for security; these are perhaps better termed obscurity bait in an active security defense.
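The following sketch of such "obscurity bait" uses only the Python standard library; the banner, port number, and alerting hook are illustrative assumptions. The decoy advertises an apparently outdated service, and because no legitimate client should ever contact it, any connection is treated as a probe and reported to defenders.

    # Illustrative decoy ("obscurity bait"): a fake, deliberately old-looking
    # service. No legitimate client should ever connect, so any contact is
    # treated as hostile and reported. Banner, port, and hook are assumptions.
    import logging
    import socketserver

    logging.basicConfig(level=logging.INFO)

    def alert_defenders(address):
        # Placeholder for a real response: alert an operator, add a firewall
        # rule, begin tracing the source, etc.
        logging.warning("decoy contacted by %s:%d; treating as hostile", *address)

    class DecoyHandler(socketserver.BaseRequestHandler):
        def handle(self):
            # Present a banner suggesting a vulnerable legacy service.
            self.request.sendall(b"220 legacy-ftpd 1.0 ready\r\n")
            alert_defenders(self.client_address)

    if __name__ == "__main__":
        with socketserver.TCPServer(("0.0.0.0", 2121), DecoyHandler) as server:
            server.serve_forever()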

However, it can be argued that a sufficiently well-implemented system based on security through obscurity simply becomes another variant on a key-based scheme, with the obscure details of the system acting as the secret key value.

There is a general consensus, even among those who argue in favor of security through obscurity, that security through obscurity should never be used as a primary security measure. It is, at best, a secondary measure; and disclosure of the obscurity should not result in a compromise.

Open source repercussions

Software which is deliberately released as open source cannot be said, in theory or in practice, to rely on security through obscurity (the design being publicly available), but it can nevertheless experience security debacles (e.g., the Morris worm of 1988 spread through vulnerabilities that were obscure, if widely visible to those who bothered to look). An argument sometimes made against open-source security is that developers tend to be less enthusiastic about performing deep reviews than they are about contributing new code. Such work is sometimes seen as less interesting and less appreciated by peers, especially if an analysis, however diligent and time-consuming, does not turn up much of interest. Combined with the fact that open source is dominated by a culture of volunteering, security sometimes receives less thorough treatment than it might in an environment in which security reviews were part of someone's job description. (See "How Closely Is Open Source Code Examined?" by Larry Seltzer, 22 February 2004, eWeek.com, http://www.eweek.com/article2/0,1895,1536427,00.asp. Retrieved 2008-05-01.)

Security through minority

One version of security through obscurity is to use a product which is not widely adopted, in order to lower the profile against random, untargeted attacks. There does not currently appear to be a single defining term for this; "minority" is the most common ("Mac Users Finally Waking Up to Security" by Kiltak, 19 December 2006, Geeks are Sexy Technology News, http://geeksaresexy.blogspot.com/2006/12/mac-users-finally-waking-up-to.html; retrieved 2008-05-01), but "rarity" (Crypto-Gram Newsletter, 15 August 2003, by Bruce Schneier, http://www.schneier.com/crypto-gram-0308.html; retrieved 2008-05-01), "unpopularity" ("When 'Security Through Obscurity' Isn't So Bad" by CmdrTaco, 23 July 2001, Slashdot, http://slashdot.org/article.pl?sid=01/07/23/2043209&mode=thread&threshold=1; retrieved 2008-05-01), "scarcity", "lack of interest", and others are also used.

This concept is most commonly encountered in explanations of why the number of known vulnerability exploits for products with the largest market share tends to be higher than a linear relationship to market share would indicate, but it is also a factor in product choice for large organisations.

Security through minority may suit organisations that will not be subject to targeted attacks, suggesting the use of a product in the long tail. However, finding a new vulnerability in a market-leading product is harder, since the "low-hanging fruit" vulnerabilities are more likely to have already been found, which suggests such products are better for organisations that expect to receive many targeted attacks. The issue is further confused by the fact that a new vulnerability in a minority product makes all known users of that product targets, whereas with market-leading products the likelihood of being randomly targeted with a new vulnerability may be lower.

This is closely linked with, and depends upon, the better-documented concept of security through diversity: the wide range of "long tail" minority products is clearly more diverse than a single monolithic market leader, so a random attack is less likely to succeed.

Historical notes

There are conflicting stories about the origin of this term. Fans of MIT's ITS say it was coined in opposition to Multics users down the hall, for whom security was far more an issue than on ITS. Within the ITS culture the term referred, self-mockingly, to the poor coverage of the documentation and obscurity of many commands, and to the attitude that by the time a tourist figured out how to make trouble he'd generally got over the urge to make it, because he felt part of the community.

One instance of deliberate security through obscurity on ITS has been noted: the command to allow patching the running ITS system (altmode altmode control-R) echoed as ##^D. Typing alt alt ^D set a flag that would prevent patching the system even if the user later got it right.

References & notes

See also

* Inside job
* Secure by design
* Obfuscated code
* Code morphing
* Need to know

External links

* [http://lwn.net/Articles/85958/ Eric Raymond on Cisco's IOS source code 'release' v Open Source]
* [http://www.eplaw.us/data/ComputerSecurityPublications.pdf Computer Security Publications: Information Economics, Shifting Liability and the First Amendment] by Ethan M. Preston and John Lofton
* [http://web.archive.org/web/20070202151534/http://www.bastille-linux.org/jay/obscurity-revisited.html "Security Through Obscurity" Ain't What They Think It Is] by Jay Beale
* [http://www.schneier.com/crypto-gram-0205.html#1 Secrecy, Security and Obscurity] by Bruce Schneier
* [http://www.linux.com/articles/23313 "Security through obsolescence", Robin Miller, "linux.com", June 6, 2002]

