Security through obscurity is a philosophy that relies on obfuscation and secrecy as a primary means of ensuring system or information security. The underlying assumption is that if only a few trusted people understand the workings of a security system, the system is generally safe. Methods commonly used for security through obscurity include encoding data in non-obvious ways or keeping software proprietary through copyright and licensing restrictions. Some experts argue, however, that this approach offers only an illusion of protection and may actually leave computer programs and systems more vulnerable to hackers.
The basic principle of security through obscurity is simple: if data is kept secret, no one outside the circle that holds the secret can exploit it. Somewhat akin to hiding money under a mattress, this approach works admirably as long as no adversary knows the money is there. Obscuring data, or allowing only cleared individuals to access source code or security algorithms, can help keep that knowledge from becoming public, and thus open to attack.
Some of the methods used for security through obscurity involve disguising data. For instance, a file named “company passwords” invites easy attack; renaming it to something innocuous or coded adds a small measure of security. A related technique is obfuscated code, which disguises program logic or embedded data by writing or encoding it in a deliberately hard-to-read form. Another common method is hiding the fact that a computer or server exists at all, allowing only designated users to reach it; because its existence is unknown, the hope is that an attacker will not know to look for it.
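As an illustration, the following Python sketch shows the kind of lightweight obfuscation described above: a secret is stored under an innocuous file name and XOR-encoded with a fixed key. The file name, key, and helper functions are hypothetical, chosen only to make the example concrete, and the encoding hides the data from casual inspection while offering no real cryptographic protection once the scheme is known.

```python
# Hypothetical sketch of "security through obscurity": an innocuously named
# file holds a secret that is merely XOR-encoded with a fixed key.
# Anyone who learns the scheme (or reads this code) can reverse it trivially.

OBSCURE_PATH = "printer_calibration.dat"   # innocuous name instead of "company_passwords"
KEY = 0x5A                                 # fixed single-byte key, an assumption for this example

def encode(data: bytes) -> bytes:
    """XOR every byte with the key -- obfuscation, not encryption."""
    return bytes(b ^ KEY for b in data)

def hide_secret(secret: str) -> None:
    """Write the encoded secret under the innocuous file name."""
    with open(OBSCURE_PATH, "wb") as f:
        f.write(encode(secret.encode("utf-8")))

def recover_secret() -> str:
    """Read the file back and reverse the XOR encoding."""
    with open(OBSCURE_PATH, "rb") as f:
        return encode(f.read()).decode("utf-8")

if __name__ == "__main__":
    hide_secret("admin:hunter2")
    print(recover_secret())   # prints "admin:hunter2" -- the only secrecy is the scheme itself
```

The point of the sketch is that all of the protection resides in knowledge of the file name and the encoding; once either leaks, the data is fully exposed.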
Proprietary techniques are another common means of protecting software and operating systems through obscurity. By legally and practically limiting access to program code and data to designated individuals, some software developers hope to deter hackers and frighten off anyone who tries to expose security information. In some cases, a user who legitimately discovers a security flaw and asks the company for a patch may instead receive threats of legal action should he or she disclose the flaw publicly. In this way, a developer may be able to keep knowledge of security flaws from spreading, providing some measure of protection. Workers entrusted with security information may also have to sign non-disclosure agreements, which can legally forbid them from releasing that information even after leaving a job.
While security through obscurity may be useful as part of an overall security system, on its own it can leave serious vulnerabilities. Basic obscurity measures, such as disguised file and user names, work best in conjunction with methods such as strong password protection and firewalls. Some computer experts also tout the value of transparent security, arguing that a strong security system that is completely open to scrutiny means weaknesses will be detected and corrected quickly.
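To make the contrast concrete, the sketch below shows the kind of password protection that obscurity should be layered on top of: rather than relying on a hidden or oddly named file, the secret is stored as a salted hash, so even an attacker who finds the stored record cannot read the password directly. This is a minimal sketch using the Python standard library; the iteration count and field names are illustrative assumptions, not a recommendation for any particular product.

```python
# Sketch of password protection via salted hashing (Python standard library).
# Even if an attacker locates the stored record, the original password is not
# directly recoverable from it -- security does not depend on hiding the scheme.

import hashlib
import hmac
import os

ITERATIONS = 200_000   # illustrative work factor, an assumption for this sketch

def store_password(password: str) -> dict:
    """Return a record containing a random salt and a PBKDF2 hash of the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return {"salt": salt, "hash": digest}

def verify_password(password: str, record: dict) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                    record["salt"], ITERATIONS)
    return hmac.compare_digest(candidate, record["hash"])

if __name__ == "__main__":
    record = store_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", record))  # True
    print(verify_password("wrong guess", record))                   # False
```

Here the design is fully open, in the spirit of transparent security: an attacker can read the code and still gains nothing without the password itself, while obscurity measures such as hidden file names can only add a thin extra layer on top.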