Kerckhoffs' Principle
In 1883 Auguste Kerckhoffs[1] published La Cryptographie Militaire as a pair of journal articles,[2] in which he stated six axioms of cryptography. Some are no longer relevant given the ability of computers to perform complex encryption, but his second axiom, now known as Kerckhoffs' Principle, is still critically important:

Il faut qu’il n’exige pas le secret, et qu’il puisse sans inconvénient tomber entre les mains de l’ennemi.
The method must not need to be kept secret, and having it fall into the enemy's hands should not cause problems.

The same principle is also known as Shannon's Maxim, after Claude Shannon, who formulated it as "The enemy knows the system." Ron Rivest gives the formulation "Compromise of the system should not inconvenience the correspondents."[3]

That is, the security should depend only on the secrecy of the key, not on the secrecy of the methods employed. Keeping keys secret, and changing them from time to time, are reasonable propositions. Keeping your methods secret is more difficult, perhaps impossible in the long term against a determined enemy. Changing the methods once a system is deployed is also difficult, sometimes impossible. The solution is to design the system assuming the enemy will know how it works.

Any serious enemy — one with strong motives and plentiful resources — will learn all the internal details of any widely used system. In war, the enemy will capture some of your equipment and some of your people, and will use spies. If your method involves software, enemies can do memory dumps, run it under the control of a debugger, and so on. If it is hardware, they can buy or steal some of the devices and build whatever programs or gadgets they need to test them, or dismantle them and look at chip details with microscopes. They may bribe, blackmail or threaten your staff or your customers. The enemy may be a customer, if your product is used by two rival organisations and one wants to spy on the other, or simply because the product is sold openly, so that a potential attacker can buy a copy for analysis. One way or another, sooner or later they will know exactly how it all works.

Using secure cryptography is supposed to replace the difficult problem of keeping messages secure with a much more manageable one, keeping relatively small keys secure. A system that requires long-term secrecy for something large and complex — the whole design of a cryptographic system — obviously cannot achieve that goal. It only replaces one hard problem with another. However, if you can design a system that is secure even when the enemy knows everything except the key, then all you need to manage is keeping the keys secret.
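
As a concrete sketch of that trade, consider the following minimal Python example. It uses the published Fernet scheme from the third-party cryptography package (the choice of library is ours, for illustration; any published cipher would serve): the method is completely public, and everything rests on a small secret key.

  # A minimal sketch of Kerckhoffs' Principle in practice, using the
  # published Fernet scheme (AES-based) from the third-party Python
  # "cryptography" package. The method is fully public; only the small
  # key needs to be kept secret.
  from cryptography.fernet import Fernet

  key = Fernet.generate_key()     # 32 random bytes, base64-encoded
  cipher = Fernet(key)            # the algorithm itself is public knowledge

  token = cipher.encrypt(b"attack at dawn")    # safe for the enemy to see
  assert Fernet(key).decrypt(token) == b"attack at dawn"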

Implications for analysis

For purposes of analysing ciphers, Kerckhoffs' Principle neatly divides any design into two components. The key can be assumed to be secret for purposes of analysis; in practice various measures will be taken to protect it. Everything else is assumed to be knowable by the opponent, so everything except the key should be revealed to the analyst. Perhaps not all opponents will know everything, but the analyst should because the goal is to create a system that is secure against any enemy except one that learns the key.
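
A toy sketch of that analyst's model, in Python: the cipher below (single-byte XOR, deliberately and hopelessly weak, chosen only for brevity) is assumed fully known, so the attack reduces to an exhaustive search of the key space.

  # Toy model of analysis under Kerckhoffs' Principle: the method is
  # fully known to the attacker, so security reduces to the size of the
  # key space -- here a trivial 256 single-byte keys.
  def toy_encrypt(data: bytes, key: int) -> bytes:
      return bytes(b ^ key for b in data)   # XOR is its own inverse

  ciphertext = toy_encrypt(b"attack at dawn", key=0x5A)

  for guess in range(256):                  # exhaust the whole key space
      # Recognising the plaintext directly is a simplification; real
      # attacks use statistical tests to spot a plausible decryption.
      if toy_encrypt(ciphertext, guess) == b"attack at dawn":
          print(f"key recovered: {guess:#04x}")   # prints 0x5a
          break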

That the security of a cipher system should depend on the key and not the algorithm has become a truism in the computer era, and this one is the best-remembered of Kerckhoff's dicta. ... Unlike a key, an algorithm can be studied and analyzed by experts to determine if it is likely to be secure. An algorithm that you have invented yourself and kept secret has not had the opportunity for such review.[4]

Using this distinction is the only known method of building ciphers that it is reasonable to trust — everything except the key is published and analysed, so we can be reasonably confident that it is secure, and keys are carefully managed so we can reasonably hope they are secret.
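
Continuing the hypothetical Fernet sketch above, the division shows up naturally in code: the method is published with the source, while the key lives outside it, here in an environment variable (the variable name MESSAGE_KEY is our invention for illustration).

  # The published/secret split in practice: this source file can be made
  # public in its entirety, because the key is supplied from outside.
  # MESSAGE_KEY is a hypothetical environment variable holding a Fernet key.
  import os
  from cryptography.fernet import Fernet

  cipher = Fernet(os.environ["MESSAGE_KEY"])
  token = cipher.encrypt(b"the design is public; the key is not")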

Cryptographers will generally dismiss out of hand all security claims for a system whose internal details are kept secret. Without analysis, no system should be trusted, and without details, it cannot be properly analysed. Of course, there are some exceptions; if a major national intelligence agency claims that one of their secret systems is secure, the claim will be taken seriously because they have their own cipher-cracking experts. However, no-one else making such a claim is likely to be believed.

If you want your system trusted — or even just taken seriously — the first step is to publish all the internal details. Anyone who makes security claims for some system without providing complete details is showing that he is unaware of one of the basic principles of cryptography, so most experts will assume the system is worthless. Sensational claims about a system whose details are secret are one of the common indicators of cryptographic snake oil.

In many cases, auditing is an issue — for example, a financial institution's auditors should want to know if the security systems in place are adequate. In other situations, approval may be required — for example, a military organisation generally will not use any cryptosystem until their signals intelligence people have given it the nod. Such auditing or analysis absolutely requires full details of the system; they need not be made public but they must be revealed to the auditor or analyst.

Security through obscurity

It is moderately common for companies — and sometimes even standards bodies, as in the case of the CSS encryption on DVDs — to keep the inner workings of a system secret. Some even claim this security by obscurity makes the product safer. Such claims are utterly bogus; keeping the innards secret may improve security in the short term, but in the long run only systems which have been published and analysed should be trusted.

Steve Bellovin commented:

The subject of security through obscurity comes up frequently. I think a lot of the debate happens because people misunderstand the issue.

It helps, I think, to go back to Kerckhoffs' second principle (translated as "The system must not require secrecy and can be stolen by the enemy without causing trouble", per http://petitcolas.net/fabien/kerckhoffs/). Kerckhoffs said neither "publish everything" nor "keep everything secret"; rather, he said that the system should still be secure *even if the enemy has a copy*.

In other words -- design your system assuming that your opponents know it in detail. (A former official at NSA's National Computer Security Center told me that the standard assumption there was that serial number 1 of any new device was delivered to the Kremlin.) After that, though, there's nothing wrong with trying to keep it secret -- it's another hurdle factor the enemy has to overcome. (One obstacle the British ran into when attacking the German Enigma system was simple: they didn't know the unkeyed mapping between keyboard keys and the input to the rotor array.) But -- *don't rely on secrecy*.[5]

That is, it is an error to rely on the secrecy of a system. In the long run, security through obscurity cannot possibly be an effective technique.

References

  1. Kahn, David (1996), The Codebreakers: The Story of Secret Writing, second edition, Scribners, p. 235.
  2. Petitcolas, Fabien, electronic version and English translation of "La cryptographie militaire", http://petitcolas.net/fabien/kerckhoffs/.
  3. Rivest, Ronald, Cryptology, http://people.csail.mit.edu/rivest/Rivest-Cryptography.pdf.
  4. Savard, John J. G., "The Ideal Cipher", A Cryptographic Compendium, http://www.quadibloc.com/crypto/mi0611.htm.
  5. Bellovin, Steve (June 2009), "Security through obscurity", Risks Digest, http://catless.ncl.ac.uk/Risks/25.71.html.