April 1 1999
I am responding to the consultation document as an independent expert; I am not speaking on behalf of my employer.
The length of the consultation period (approximately three weeks) was too short. I understand that in consultation exercises of this nature it is customary for the government to allow eight weeks for responses.
The consultation document raises many difficult, complex and highly technical issues which take time to study in detail. Given that the DTI has been working on this for the past few years, surely they could have released the consultation document just a few weeks earlier to give people more time to comment.
At a public meeting Michael Wills MP asked (perhaps rhetorically) whether we would have wanted more time to comment if this could only be achieved at the expense of the electronic commerce bill being delayed by a year. I consider it much more important that the bill be got right (well considered, well drafted, and subject to proper public consultation) than that it be passed into law quickly.
As the consultation period is so short, I have concentrated on the parts of the consultation document which I consider the most important, and have not been able to look in detail at the other issues. For example, I have not considered the issue of unsolicited e-mail, raised in clauses 28 to 31 of the consultation document.
The previous DTI consultation document proposed a form of mandatory key escrow: specifically, it proposed that Trusted Third Parties would only be granted a license on condition that they retained copies of their customers' confidentiality keys, for the purposes of assisting law enforcement (and intelligence agencies) in carrying out wiretaps.
There were numerous reasons for objecting to that proposal, which were described at length in many people's responses to the previous consultation document. The current consultation document has dropped the mandatory escrow requirement (see paragraph 37). This is a most welcome change, and it is good to see that the government has responded to the many serious objections that were raised against mandatory key escrow. As these objections are well known, and the government appears to have accepted them, I will not repeat them in detail here.
However, in the current consultation document there are many serious inconsistencies between the body of the document and Annex A, the suggested licensing criteria. While the body of the document has dropped the key escrow requirement, the suggested licensing conditions in Annex A are strongly suggestive of the previous policy, under which key escrow would be a condition of obtaining a license.
Admittedly, these are only a few badly worded clauses in an annex, but it is often the case that the most critical and sensitive issues in a document are relegated to an annex in just this way. It is vitally important that these inconsistencies be eliminated. It would be totally unacceptable for the government to make a pretense of dropping the mandatory key escrow requirement while in reality retaining it (e.g. by introducing the escrow requirement in DTI or OFTEL regulations, instead of directly in primary legislation).
Annex A, part II, suggests that the following would be a condition of a Certification Authority being granted a license:
"An applicant will need to demonstrate that the certificates it issues have all the following information:
This is a very curious form of words, which admits several possible interpretations, none of which are very reassuring.
Firstly, it might mean that the certificate must include an indication that the public key contained within the certificate should only be used for verifying digital signatures, and not for public key encryption. A second interpretation, based on a different idea of what "validate a key" means in this context, is even more restrictive: As well as prohibiting direct use of the certified key for encryption, it also prohibits using the certified key to verify a digital signature on any document or message that refers to a confidentiality key. (If I digitally sign a contract saying that I will use a particular key for confidentiality, and someone verifies my digital signature on that contract to prove that I wrote it, are they using my signature public key to "validate a key which is being used to secure the confidentiality of information"?)
Even the least restrictive reading of this clause is a form of mandatory key escrow: it prohibits all licensed certification authorities from offering a useful service (certifying confidentiality keys as well as signature keys), with the clear intent of making it harder for users to gain access to unescrowed confidentiality services.
Under the second interpretation, it is a very severe constraint. It effectively prohibits many technological solutions, some of which are currently in common use. That is, people using those technologies would be forced to use unlicensed Certification Authorities, because the above condition would prohibit a licensed CA from issuing them with appropriate certificates.
This condition only makes sense in the context of a mandatory key escrow regime: it clearly has the intent of prohibiting someone who uses a licensed TTP for authentication purposes from simultaneously using an unlicensed (and hence possibly unescrowed) TTP for confidentiality services.
This license condition doesn't work, in that it only prohibits the most obvious means by which someone could combine licensed authentication with unlicensed confidentiality, while neglecting to prohibit a whole host of others.
More importantly, given that the mandatory escrow requirement has been dropped, this should not be a license condition! If users are free to choose whether or not to use key escrow/recovery services, why should licensed certification authorities be required to tell their customers that they may not use their authentication certificate to authenticate a confidentiality key? (Or that they may not purchase certificates for their confidentiality keys?)
This license condition is so restrictive it even prohibits many forms of key recovery/escrow. Suppose that a user wishes to obtain key recovery services from one TTP and authentication services from another. These are very different services, and it is reasonable to suppose that the supplier of the "best deal" depends on which service you're buying. (e.g. I might escrow my confidentiality key with TTP X, then sign it with my authentication key, backed by a certificate from TTP Y, to inform people that this is my escrowed confidentiality key, as opposed to someone else's). Note that the key escrow/recovery proposals suggested by other governments (e.g. the Clipper chip) separate the escrow TTP and the authentication TTP. The proposed license conditions appear to prohibit this form of separation: this is unacceptable, and is not technology-neutral.
Annex A II goes on to say:
"Private Signature Key: An applicant must provide details of the mechanism it uses to ensure that the private signature key is only known to the client on issue. It will be a breach of the license to disclose the key to anyone but the intended owner."
This condition assumes that Certification Authorities will generate signature keys for their clients, which effectively contradicts paragraph 35 in the main body of the document:
"Licensed Certification Authorities will not be allowed to store the private key of a key pair that is issued solely for electronic signature purposes. The responsibility for protecting a private signature key will therefore fall unambiguously on its owner."
Throughout the industry, it is considered very bad practice for a CA to generate a user's private key. Key pairs should be generated by their owner, not the CA. Only the public key should be transferred to the CA. In this way, the CA cannot disclose the private key because it never knows it. To do otherwise undermines confidence in electronic signatures, by dividing the responsibility for protecting keys and thus allowing the key owner to deny responsibility for their digital signature. As evidence that this is considered the proper way for a certification authority to operate, I offer the following examples:
Netscape, one of the most popular Web browsers, works this way. If a user wants a certificate for use with Netscape, they get Netscape to generate the key pair and send the public component to the CA.
De facto standards, such as RSA Data Security Inc's "PKCS", advocate a method of operation in which the user generates the key pair and sends the public component to the certification authority.
Internet standards, such as RFC 1424 or the more recent PKIX work, also advocate this way of working.
The license condition is in one sense correct: if a CA knows a client's private key, it clearly ought not to disclose it improperly. However, the license conditions ought to go further and mandate (or at least strongly encourage) the approach in which a CA never knows a client's private signature key.
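To make the recommended mode of operation concrete, here is a minimal sketch of a key pair being generated entirely on its owner's machine, with only the public component ever leaving it. It uses a textbook RSA construction purely for illustration; it is not production-quality cryptography, and the function names are my own.

```python
import math
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def generate_prime(bits):
    """Generate a random prime of the requested bit length."""
    while True:
        candidate = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(candidate):
            return candidate

def generate_keypair(bits=512):
    """Generate an RSA key pair entirely on the owner's machine."""
    e = 65537
    while True:
        p, q = generate_prime(bits // 2), generate_prime(bits // 2)
        phi = (p - 1) * (q - 1)
        if math.gcd(e, phi) == 1:
            break
    n = p * q
    d = pow(e, -1, phi)      # modular inverse (requires Python 3.8+)
    return (n, e), (n, d)    # (public key, private key)

public_key, private_key = generate_keypair()
# Only public_key is transferred to the certification authority;
# private_key never leaves the owner's machine, so the CA cannot
# disclose a key it has never known.
```

This is exactly the division of labour described above: the CA receives, and certifies, nothing but the public component.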
This is another example of the licensing conditions in Annex A being aligned with the policy of the previous consultation document, and at variance with the body of the current document.
The condition on "Generation of Key Pair" in A II suffers from the same contradiction. This condition ought to be imposed on the supplier of an "approved signature generation product", not on the CA. Suppliers of "approved signature products" should indeed have to explain how their product generates keys in order to obtain "approval". However, the quality of the software which the user runs is the responsibility of the supplier of that software, not the CA (who may not even know which vendor's software the user has purchased).
Most home computers have very poor security. Most users choose computers and operating systems on the basis of low cost and lots of features, and have little concern for security. Computer vendors know this, and design products which are cheap and full of exciting features, but with minimal security.
This lack of security matters if home computers are going to be used to generate digital signatures which are legally equivalent to handwritten signatures. Put bluntly, digital signatures created using home computers do not meet the requirements for an "advanced electronic signature" (to use the EU terminology) because they are not created using means which the user maintains under his sole control. Although the user may be the only person with physical access to the machine, if it is connected to the Internet there are many ways in which a hacker could remotely access it and steal the user's private key. If a user of such a home computer tries to repudiate their digital signature on the basis that someone stole their key and forged their signature, then the sensible conclusion is that they may well be right: the signature isn't valid.
"Smart cards" solve some, but not all, of the problem. If the user's private key is stored on a separate card, not in the computer's main memory or disc, this makes it harder for a hacker to steal it. However, a hacker could still modify the software that interacts with the smart card, for example subverting it so that the user thinks they are writing an electronic cheque for £10, but the modified software turns it into a cheque for £10,000. The only way to prevent these attacks is to secure the entire system which produces the digital signature, including the keyboard and display.
In order to protect consumers, it is vital that the legislation takes this into account. It is unacceptable to put consumers in a position where their private key can easily be stolen, and yet they have unlimited legal liability when this happens.
There are two important measures which should be taken to protect consumers.
These restrictions should be acknowledged by the legislation, and should be encouraged by the licensing conditions. If a certificate says that it is only good for transactions up to £50, but it supports a digital signature on a disputed transaction of greater value, then the signature should not be legally binding. Clearly the merchant made an error in accepting the transaction in the first place. What the merchant should have done is refuse the purchase on the grounds that the customer's "credit limit" was exceeded. At the time of the transaction it was obvious to the merchant that the certificate was not suitable for the type of transaction being attempted.
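The merchant-side check described above is simple to state in code. The following sketch is purely illustrative; the field and function names are my own invention, not part of any proposed standard or product.

```python
from dataclasses import dataclass

@dataclass
class Certificate:
    subject: str
    transaction_limit: int  # maximum value (in pounds) the certificate is good for

def merchant_accepts(certificate: Certificate, amount: int) -> bool:
    """A merchant must refuse any transaction exceeding the limit stated
    in the customer's certificate; a signature backing a transaction that
    exceeds the limit should not be binding on the customer."""
    return amount <= certificate.transaction_limit

certificate = Certificate(subject="A. Customer", transaction_limit=50)
assert merchant_accepts(certificate, 30)        # within the limit: accept
assert not merchant_accepts(certificate, 5000)  # over the limit: refuse
```

The point is that the check is trivial for the merchant to perform at the time of the transaction, which is why the merchant, not the consumer, should bear the loss when it is skipped.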
Licensed certification authorities should be encouraged to issue certificates which carry this type of restriction: such restrictions are vital for everyone's protection.
It is worth noting that in one of the most popular e-commerce schemes (credit card numbers protected using "Secure Sockets Layer"), only merchants need to have certificates: customers do not. This works extremely well, and gets around the problem that customers do not necessarily have secure equipment to produce "advanced digital signatures". With this approach to e-commerce, the customer's liability is defined by the existing credit card legislation: this liability regime seems to be working well to the satisfaction of all parties.
It is inevitable that eventually someone will attempt to repudiate a digital signature by claiming that the certification authority acted improperly. A certification authority needs to be able to defend itself against this accusation. It is not good enough to say "I am a Trusted Third Party; because I am "trusted" it is totally impossible that I could have acted improperly or have made a mistake". The certification authority needs to be able to convince a court of law that it hasn't made a mistake, and that the signature really is valid. Passing legislation which decrees that the certification authority is always right doesn't help either, because there is always the case where the certification authority really has made a mistake, and the signature really is forged.
Fortunately, a certification authority can easily defend itself against these allegations by following properly designed procedures for issuing certificates. Unfortunately, at the moment some certification authorities are using badly designed procedures which may lead to trouble in future.
The critical point is that a user's request for a certificate must be something which the certification authority can keep on file, which contains the value of the user's public key, and which is "signed" using a means which is legally equivalent to a hand-written signature.
For example, a perfectly good procedure is for the user to sign (on paper, in handwriting) a contract with the CA which contains the user's public key. If the user denies ever having asked for a certificate (or claims that they asked for a certificate with a different key), then the CA can retrieve the contract from its files and produce it in court. This avoids all arguments about whether the certification authority's computer might have been hacked or whether their personnel might have been bribed: it is quite clear that the user asked for a certificate with that key and got one. The CA has done its job properly and the validity of the signature is upheld. On the other hand, if a certification authority has a policy of always signing a contract with the user, but when asked it cannot find the contract in its files, then this is strong evidence that something has gone badly wrong with the CA's internal security procedures. If this happens, the CA might lose its license.
An example of a bad procedure is where a certification authority identifies the user in person, and issues them a certificate without getting any form of receipt for it. Suppose the user accuses the CA of being negligent, and of issuing a certificate in the user's name to an impostor. How can the CA defend itself? Well, the CA can say that all its personnel are security vetted, that they check each customer's identity carefully, and so on, but in this case it is very much harder for the CA to defend itself. Worse yet, if the CA really has made a mistake (perhaps one of the CA's clerks was a little inattentive that day), then there is no way to establish this. This is a bad procedure.
Now, it would be inconvenient if all certificates had to be collected in person. However, if qualified digital signatures are legally equivalent to handwritten ones, there is a solution to this. The customer obtains their very first certificate by asking for it in writing. They can then ask for subsequent certificates electronically, using their first certificate to back the digital signature on the certificate request. The great thing about this is that it also eliminates all concerns about the quality of the CA's internal security. If there is a dispute, the CA produces the original paper contract and the subsequent electronic contracts. These can be verified electronically, and it is clear that they are valid. A breach of the CA's internal security could not possibly have resulted in those electronic contracts being forged, because the CA never had the information needed to forge them. This avoids the court case being tied up in complex and protracted arguments about the quality of the CA's internal security, by providing firm independent evidence that the digital signature is valid.
My recommendation is this: all licensed certification authorities should be required to use a procedure in which all requests for certificates are "signed" by the certificate subject, either in handwriting or with a digital equivalent. This will make it so much easier to resolve court cases in which digital signatures are called into doubt.
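The chain of evidence described above can be sketched as follows. This toy example uses deliberately tiny, hardcoded RSA parameters so the arithmetic is visible; a real system would use full-size keys and a standard request format such as PKCS#10, and the names here are my own illustration.

```python
import hashlib

def sign(private_key, message: bytes) -> int:
    """Toy RSA signature over a SHA-256 digest (illustrative only)."""
    n, d = private_key
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(public_key, message: bytes, signature: int) -> bool:
    """Anyone, including a court-appointed expert, can recheck this."""
    n, e = public_key
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

# Key 1 was certified via a handwritten contract held in the CA's files.
key1_public, key1_private = (3233, 17), (3233, 2753)
# Key 2 is the new key the user now wants certified.
key2_public = (8633, 5)

# The request kept on file embeds the new public key and is signed
# with key 1 -- the key whose own request was signed on paper.
request = f"please certify public key {key2_public}".encode()
request_signature = sign(key1_private, request)

# In a dispute the CA produces the paper contract for key 1 plus this
# electronic request. The signature can be verified independently of
# the CA's internal security, because the CA never held key1_private.
assert verify(key1_public, request, request_signature)
```

Each certificate in the chain is thus anchored, directly or indirectly, in a request the subject demonstrably signed.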
The consultation document suggests that advances in technology have made it harder for law enforcement to gather evidence, and that consequently new powers are needed. I would argue that the opposite is the case: the increasing use of computers and computer networks provides law enforcement with many new sources of evidence. Even if unescrowed encryption becomes widespread, law enforcement will have access to a wealth of intelligence and information that they never had before.
I would like to suggest that law enforcement has never had it so good. The problems caused by the use of encryption are very minor compared to the wealth of new evidence that is made available by our society's increasing use of computers. Therefore no additional powers are needed. In particular, a specific criminal offense of failing to assist the police with decryption is not necessary. The fact that a suspect refused to decrypt can be presented to a jury in evidence, and a jury might well draw conclusions from it. (Of course, the suspect might well have a perfectly good reason for being unable to decrypt, which could also be presented to the jury. After all, people really do forget their passwords sometimes.)
Key escrow schemes are not the answer, because criminals won't use them. The real answer is better training for police computer forensics experts: it is amazing what you can get out of a computer if you know how.
Last Revised: April 12 1999