scotartt on Sun, 16 Dec 2001 17:14:12 +0100 (CET)



<nettime> [schneier@counterpane.com: CRYPTO-GRAM, December 15, 2001]



A security analysis of ID cards as proposed for the USA in light of recent
events. It shows that, from a technical point of view, ID cards are
ineffective at achieving what is claimed for them.  There is also a little
piece about legal enforcement of Internet security, plus some other
security-related matters.

scot.

----- Forwarded message from Bruce Schneier <schneier@counterpane.com> -----

Mailing-List: contact crypto-gram-help@chaparraltree.com; run by ezmlm
Precedence: bulk
Delivered-To: moderator for crypto-gram@chaparraltree.com
X-Sender: schneier@counterpane.com (Unverified)
X-Mailer: QUALCOMM Windows Eudora Pro Version 4.2.2 
Date: Sat, 15 Dec 2001 13:57:12 -0600
To: crypto-gram@chaparraltree.com
From: Bruce Schneier <schneier@counterpane.com>
Subject: CRYPTO-GRAM, December 15, 2001

                  CRYPTO-GRAM

               December 15, 2001

               by Bruce Schneier
                Founder and CTO
       Counterpane Internet Security, Inc.
            schneier@counterpane.com
          <http://www.counterpane.com>


A free monthly newsletter providing summaries, analyses, insights, and 
commentaries on computer security and cryptography.

Back issues are available at 
<http://www.counterpane.com/crypto-gram.html>.  To subscribe, visit 
<http://www.counterpane.com/crypto-gram.html> or send a blank message to 
crypto-gram-subscribe@chaparraltree.com.

Copyright (c) 2001 by Counterpane Internet Security, Inc.


** *** ***** ******* *********** *************

In this issue:
      National ID Cards
      Judges Punish Bad Security
      Crypto-Gram Reprints
      Computer Security and Liabilities
      News
      Counterpane News
      The Doghouse:  The State of Nevada
      AES
      Fun with Vulnerability Scanners
      Comments from Readers


** *** ***** ******* *********** *************

               National ID Cards



There's loose talk in Washington about national ID cards.  Although the
Bush administration has said that it is not going to pursue the idea,
enough vendors are scurrying to persuade Congress to adopt it that it is
worth examining the security of a mandatory ID system.

A national ID card system would have four components.  First, there would 
be a physical card that contains information about the person: name, 
address, photograph, maybe a thumbprint, etc.  To be effective as a 
multi-purpose ID, of course, the card might also include place of 
employment, birth date, perhaps religion, perhaps names of children and 
spouse, and health-insurance coverage.  The information might be in text on 
the card and might be contained on a magnetic strip, a bar code, or a 
chip.  The card would also contain some sort of anti-counterfeiting 
measures: holograms, special chips, etc.  Second, there would be a database 
somewhere of card numbers and identities.  This database would be 
accessible by people needing to verify the card in some circumstances, just 
as a state's driver-license database is today.  Third, there would be a 
system for checking the card data against the database.  And fourth, there 
would be some sort of registration procedure that verifies the identity of 
the applicant and the personal information, puts it into the database, and 
issues the card.
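
As a rough sketch of how those four components fit together, here is a
minimal illustration in Python.  It is purely hypothetical: the record
fields, the registry, and the function names are invented for illustration
and are not drawn from any actual proposal.

  # Hypothetical sketch: a record for the card (component 1), a registry
  # standing in for the central database (component 2), a verification
  # check (component 3), and a registration step (component 4).
  from dataclasses import dataclass

  @dataclass(frozen=True)
  class IDCard:
      number: str
      name: str
      thumbprint_hash: str    # stand-in for whatever biometric is chosen

  REGISTRY = {}               # component 2: card number -> record on file

  def register(card):         # component 4: runs once, at issuance
      REGISTRY[card.number] = card

  def verify(card):           # component 3: card checked against database
      return REGISTRY.get(card.number) == card

Every failure mode discussed below maps onto one of these pieces: forging
the card, corrupting or misusing the registry, subverting the check, or
gaming registration.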

The way to think about the security of this system is no different from any 
other security countermeasure.  One, what problem are IDs trying to 
solve?  Two, how can IDs fail in practice?  Three, given the failure modes, 
how well do IDs solve the problem?  Four, what are the costs associated 
with IDs?  And five, given the effectiveness and costs, are IDs worth it?

What problem are IDs trying to solve?  Honestly, I'm not too 
sure.  Clearly, the idea is to allow any authorized person to verify the 
identity of a person.  This would help in certain isolated situations, but 
would have only a limited effect on crime.  It certainly wouldn't have
stopped the 9/11 terrorist attacks -- all of the terrorists showed IDs to 
board their planes, some real and some forged -- nor would it stop the 
current anthrax attacks.  Perhaps an ID card would make it easy to track 
illicit cash transactions, to discover after the fact all persons at the 
scene of a crime, to verify immediately whether an adult accompanying a 
child is a parent or legal guardian, to keep a list of suspicious persons 
in a neighborhood each night, to record who purchased a gun or knife or 
fertilizer or Satanic books, to determine who is entitled to enter a 
building, or to know who carries HIV.  In any case, let's assume
that the problem is verifying identity.

We don't know for sure whether a national ID card would allow us to do all 
these things.  We haven't had a full airing of the issue, ever.  We do know 
that a national ID document wouldn't determine for sure whether it is safe 
to permit a known individual to board an airplane, attend a sports event, 
or visit a shopping mall.

How can IDs fail in practice?  All sorts of ways.  All four components can 
fail, individually and together.  The cards themselves can be 
counterfeited.  Yes, I know that the manufacturers of these cards claim 
that their anti-counterfeiting methods are perfect, but there hasn't been a 
card created yet that can't be forged.  Passports, driver's licenses, and
foreign national ID cards are routinely forged.  I've seen estimates that 
10% of all IDs in the US are phony.  At least one-fourth of the president's 
own family has been known to use phony IDs.  And not everyone will have a 
card.  Foreign visitors won't have one, for example.  (Some of the 9/11 
terrorists who had stolen identities stole those identities overseas.) 
About 5% of all ID cards are lost each year; the system has to deal with
the problems that this causes.

Identity theft is already a problem; if there is a single ID card that 
signifies identity, forging that will be all the more damaging.  And there 
will be a great premium for stolen IDs (stolen U.S. passports are worth 
thousands of dollars in some Third World countries).  Biometric 
information, whether it be pictures, fingerprints, retinal scans, or 
something else, does not prevent counterfeiting; it only prevents one 
person from using another's card.  And this assumes that whoever is looking 
at the card is able to verify the biometric.  How often does a bartender 
fail to look at the picture on an ID, or a shopkeeper not bother checking 
the signature on a credit card?  How often does anybody verify a telephone 
number presented for a transaction?

The database can fail.  Large databases of information always have errors 
and outdated information.  If ID cards become ubiquitous and trusted, it 
will be harder than ever to rectify problems resulting from erroneous 
information.  And there is the very real risk that the information in the 
database will be used for unanticipated, and possibly illegal, 
purposes.  There have been several murders in the U.S. that have been aided 
by information in motor vehicle databases.  And much of the utility of the 
national ID card assumes a pre-existing database of bad guys.  We have no 
such database.  The U.S. criminal database is 33% inaccurate and out of 
date.  "Watch Lists" of suspects from abroad have surprisingly few people 
on them, certainly not enough to make a real-time match of these lists 
worthwhile.  They have no identifiers, except name and country of origin, 
and many of the names are approximated versions or phonetic 
spellings.  Many have only approximated names and no other identifiers.

Even riskier is the mechanism for querying the database.  In this country, 
there isn't a government database that hasn't been misused by the very 
people entrusted with keeping that information safe.  IRS employees have 
perused the tax records of celebrities and their friends.  State employees 
have sold driving records to private investigators.  Bank credit card 
databases have been stolen.  Sometimes the communications mechanism between 
the user terminal -- maybe a radio in a police car, or a card reader in a 
shop -- has been targeted, and personal information stolen that way.

Finally, there are insecurities in the registration mechanism.  It is 
certainly possible to get an ID in a fake name, sometimes with insider 
help.  Recently in Virginia, several motor vehicle employees were issuing 
legitimate driver's licenses in fake names for money.  (Two suspected
terrorists were able to get Virginia driver's licenses even though they did
not qualify for them.)  Similar abuses have occurred in other states, and 
with other ID cards.  A lot of thinking needs to go into the system that 
verifies someone's identity before a card is issued; any system I can think 
of will be fraught with these sorts of problems and abuses.  Most 
important, the database has to be interactive so that, in real time, 
authorized persons may alter entries to indicate that an ID holder is no 
longer qualified for access -- because of death or criminal activity, or 
even a change of residence.  Because an estimated five percent of identity
documents are reported lost or stolen each year, the system must be able to
re-issue cards promptly and to reconfirm the holder's identity and
continued qualification for the card.

Given the failure modes, how well do IDs solve the problem?  Not very 
well.  They're prone to errors and misuse, and are likely to be blindly 
trusted even when wrong.

What are the costs associated with IDs?  Cards with a chip and some 
anti-counterfeiting features are likely to cost at least a dollar each, 
creating and maintaining the database will cost a few times that, and 
registration will cost many times that -- multiplied by 286 million 
Americans.  Add database terminals at every police station -- presumably 
we're going to want them in police cars, too -- and the financial costs 
easily balloon to many billions.  As expensive as the financial costs are, 
the social costs are worse.  Forcing Americans to carry something that 
could be used as an "internal passport" is an enormous blow to our rights 
of freedom and privacy, and something that I am very leery of but not 
really qualified to comment on.  Great Britain discontinued its wartime ID 
cards -- eight years after World War II ended -- precisely because they 
gave unfettered opportunities for police "to stop or interrogate for any 
cause."

I am not saying that national IDs are completely ineffective, or that they 
are useless.  That's not the question.  But given the effectiveness and the 
costs, are IDs worth it? Hell, no.


Privacy International's fine resource on the topic.  Their FAQ is excellent:
<http://www.privacyinternational.org/issues/idcard/>

EPIC's national ID card site:
<http://www.epic.org/privacy/id_cards/>

Other essays:
<http://www.csl.sri.com/users/neumann/insiderisks.html#138>
<http://www.cato.org/tech/tk/010928-tk.html>
<http://www.aclu.org/library/aaidcard.html>
<http://slate.msn.com/?id=2058321>
<http://www.cato.org/pubs/pas/pa237.html>
<http://members.aol.com/_ht_a/xowie/idcard.htm>
<http://www.free-market.net/spotlight/idcards/>
<http://www.securityfocus.com/news/286>
<http://techupdate.zdnet.com/techupdate/stories/main/0,14179,2830138,00.html>


** *** ***** ******* *********** *************

          Judges Punish Bad Security



I have two stories with a common theme.

The first involves the U.S. Department of the Interior.  There is ongoing
litigation between Native Americans and the U.S. Government regarding
mishandling of funds.  After seeing for himself how insecure the 
Department's computers were, and that it was possible for someone to alter 
records and divert funds, a U.S. District Judge ordered the department to 
disconnect its computers from the Internet until its network was secured.

The second involves a couple of Web hosting companies.  One day, C.I. Host 
was hit with a denial-of-service attack.  They traced at least part of the 
attack to companies hosted by Exodus Communications.  C.I. Host sought an
injunction against Exodus, alleging that it had committed, or allowed a
third party to commit, a DoS attack.  A Texas judge issued a temporary restraining
order against three of Exodus's customers, forcing them to disconnect from 
the Internet until they could prove that the vulnerabilities leading to the 
DoS attack had been fixed.

I like this kind of stuff.  It forces responsibility.  It tells companies 
that if they can't make their networks secure, they have no business being 
on the Internet.  It may be Draconian, but it gets the message across.

On the Internet, as on any connected system, security has a ripple 
effect.  Your security depends on the actions of others, often of others 
you can't control.  This is the moral of the widely reported distributed 
denial-of-service attacks in February 2000: the security of the computers 
at eBay, Amazon, Yahoo, and CNN.com depended on the security of the 
computers at the University of California at Santa Barbara.  If Eli Lilly 
has bad computer security, then your identity as a Prozac user may be 
compromised.  If Microsoft can't keep your Passport data secure, then your 
online identity can be compromised.  It's hard enough making your own
computers secure; now you're expected to police the security of everyone 
else's networks.

This is where the legal system can step in.  I like to see companies told 
that they have no business putting the security of others at risk.  If a 
company's computers are so insecure that hackers routinely break in and use 
them as a launching pad for further attacks, get them off the Internet.  If 
a company can't secure the personal information it is entrusted with, why 
should it be allowed to have that information?  If a company produces a 
software product that compromises the security of thousands of users, maybe 
they should be prohibited from selling it.

I know there are more instances of this happening.  I've seen it, and some 
of my colleagues have too.  Counterpane acquired two customers recently, 
both of whom needed us to improve their network's security within hours, in 
response to this sort of legal threat.  We came in and installed our 
monitoring service, and they were able to convince a judge that they should 
not be turned off.  I see this as a trend that will increase, as attacked 
companies look around for someone to share fault with.

This kind of thing certainly won't solve our computer security problems, 
but at least it will remind companies that they can't dodge responsibility 
forever.  The Internet is a vast commons, and the actions of one affect the 
security of us all.


Dept. of Interior story:
<http://www.zdnet.com/zdnn/stories/news/0,4586,5100521,00.html>
<http://www.wired.com/news/politics/0,1283,48980,00.html>

Exodus story:
<http://www.cio.com/archive/110101/court.html>


** *** ***** ******* *********** *************

             Crypto-Gram Reprints



Voting and Technology:
<http://www.counterpane.com/crypto-gram-0012.html#1>

"Security Is Not a Product; It's a Process"
<http://www.counterpane.com/crypto-gram-9912.html#SecurityIsNotaProductItsaProcess>

Echelon Technology:
<http://www.counterpane.com/crypto-gram-9912.html#ECHELONTechnology>

European Digital Cellular Algorithms:
<http://www.counterpane.com/crypto-gram-9912.html#EuropeanCellularEncryptionAlgorithms>

The Fallacy of Cracking Contests:
<http://www.counterpane.com/crypto-gram-9812.html#contests>

How to Recognize Plaintext:
<http://www.counterpane.com/crypto-gram-9812.html#plaintext>


** *** ***** ******* *********** *************

       Computer Security and Liabilities



Some months ago I did a Q&A for some Web site.  One of the questions is 
worth reprinting here.

Question:  Your book, Secrets and Lies, identifies four parties who are 
largely responsible for network-based attacks: 1) individuals who purposely 
seek to cause damage to institutions, 2) "hacker wannabes" or script 
kiddies, who help proliferate the exploits of bad hackers, 3) businesses 
and governments who allow themselves to be exploited, jeopardizing the 
rights of their customers, clients, and citizens, and  4) software 
companies who knowingly fail to reform their architectures to make them 
less vulnerable.  If you were to impose blame, who -- in your opinion --
is the most liable of this group?

Answer: Allocating liability among responsible parties always depends on 
the specific facts; blame or liability cannot be assigned in general terms 
or in a vacuum.  For example, assume we are Coke and we have a secret 
formula.  We store it in a safe in our corporate headquarters.  We bought 
the safe from the Acme Safe Company.

One evening, the CEO of the company takes the secret formula out of the 
safe and is reviewing it on her desk.  Her phone rings and she is 
distracted, and while she is on the phone, the guy who empties the waste 
baskets sees the formula, knows what it is, steals it, and sells it to 
Pepsi for $1 billion.  Or there is a guy in the building who is an agent 
for Pepsi who has been trying for months to get the secret formula, and 
when he notices she has it out, he makes the phone call, distracts her and 
steals the formula.  Or the thief is a college kid who just wants to know 
the formula; he forms the same plan to steal the formula, reads it, and 
returns it with a note -- hah hah I know the formula -- and never sells it 
to anyone or tells anyone the formula.

Under criminal law, all three thieves are criminals.  The janitor or the
kid may get off easier, because they didn't plan the crime and the kid
didn't act for financial gain (premeditation and financial motive are often
aggravating factors).  Under tort law, the janitor and the agent for Pepsi
would be liable to Coke for whatever damages ensued from stealing the 
formula.  It wouldn't really matter that the CEO could have been more 
careful.  Intentional torts usually trump negligence.  The kid is also 
culpable, but Coke may have no damages.  If Coke does have damages, laws 
about juveniles may protect him, but if he's an adult, he is just as liable 
as the Pepsi agent.  Thus, I see the hackers and the kids as the same in 
the question.  The only issues are the damage they cause and whether the 
kids are young enough to be protected by virtue of their age.  Furthermore, 
Coke may still want to fire the CEO for leaving the formula on her desk, 
but the thieves can't limit their liability by pointing to her negligence.

Now imagine that the formula is not stolen from her desk, but instead 
stolen from the safe supplied by the Acme Safe Company.  As above, the 
thieves should not be able to reduce their liability by saying the safe 
should have been tougher to crack.  The interesting question is, can Coke 
get money from Acme?  That depends on whether there was a defect in the 
safe, whether Acme made any misrepresentations about the safe, whether the 
safe was being used as it was intended, and whether Acme supplied any 
warranties with the safe.  If Acme's safe performed entirely as advertised, 
then they aren't liable, even if the safe was defeated.  If, on the other 
hand, Acme had warranted that the safe was uncrackable and it was cracked, 
then Acme has breached its contract.  Coke's damages, however, may be 
limited to the return of the amount paid for the safe, depending on the 
language of the contract.  Clearly, Coke would not be made whole by this 
amount.  If Acme made negligent or knowing misrepresentations about the 
qualities of its safe, then we are back in tort territory, and Acme may be 
liable for greater damages.

Figuring out who to blame is easy; assigning proportions of blame is very 
difficult.  And it can't be done in the general case.


** *** ***** ******* *********** *************

                      News



Remember the great Windows XP anti-piracy features?  Well, they didn't
make a bit of difference:
<http://www.zdnet.com/zdnn/stories/news/0,4586,5099511,00.html>

The European Parliament is allowing anti-terrorist investigators to 
eavesdrop on private data on the Internet:
<http://www.nando.net/technology/story/169847p-1634620c.html>

Cyclone is a "safe" dialect of C.  The goal is to be as C-like as possible 
while preventing unsafe behavior (buffer overflows, dangling pointers,
format string attacks, etc.).
<http://www.research.att.com/projects/cyclone/>
<http://www.newscientist.com/news/news.jsp?id=ns99991578>

More on the full-disclosure argument:
<http://www.siliconvalley.com/docs/news/svfront/036564.htm>

This is what REAL software pirates look like:
<http://news.cnet.com/news/0-1003-200-7899626.html>
<http://www.theregister.co.uk/content/4/22908.html>

The Federal Trade Commission is cracking down on marketers of bogus 
bioterrorism defense products:
<http://www.ftc.gov/opa/2001/11/webwarn.htm>
The reverse is probably more important: consumer experts need to be warning 
government agencies to avoid bogus anti-terrorism products.

Here's some positive Microsoft news.  They have acknowledged a security 
mistake and apologized for it.  Now, if only we could get them to fix
mistakes responsibly.
<http://news.cnet.com/news/0-1003-200-7920273.html>

The FBI is talking about its key-logging technology, called "Magic 
Lantern."  Near as I can tell, it works something like Back Orifice: it 
infects a computer remotely, and then sniffs passwords and keys and other 
interesting bits of data for the FBI.  Nothing new, here.
<http://www.zdnet.com/zdnn/stories/news/0,4586,5099906,00.html>
<http://www.theregister.co.uk/content/55/23150.html>
<http://news.excite.com/news/r/011212/18/tech-tech-magiclantern-dc>
The scariest bit of news revolves around whether anti-virus companies will 
detect Magic Lantern or ignore it.  I don't think that the anti-virus
companies should be making decisions about which viruses and Trojans they
detect and which they don't.  Aside from the obvious problems of betraying
the trust of the user, there's the additional complexity of a mechanism for 
detecting malware and then not doing anything about it.  Any hacker who 
reverse engineers the anti-virus product can design a Trojan that looks 
like the FBI's Magic Lantern and escapes detection.
<http://www.wired.com/news/conflict/0,2100,48648,00.html>
<http://www.theregister.co.uk/content/55/23057.html>
Latest news is that the anti-virus companies will detect it.
<http://news.cnet.com/news/0-1003-200-8134814.html>

Interesting CERT white paper on trends in denial-of-service 
attacks.  Bottom line: the tools are getting cleverer.
<http://www.cert.org/archive/pdf/DoS_trends.pdf>

Developments in quantum cryptography:
<http://www.newscientist.com/news/news.jsp?id=ns99991595>

Search engines are getting smarter and smarter, finding files in all sorts 
of formats.  These days some of them can find passwords, credit card 
numbers, confidential documents, and even computer vulnerabilities that can 
be exploited by hackers.
<http://www.zdnet.com/zdnn/stories/news/0,4586,5099914,00.html>

Thirty countries have signed the cybercrime treaty:
<http://www.securityfocus.com/news/291>

A little-noticed provision in the new anti-terrorism act imposes U.S. 
cybercrime laws on other nations, whether they like it or not.
<http://www.securityfocus.com/columnists/39>

Excellent, excellent article on Microsoft's push for bug secrecy.  So good 
I wish I had written it:
<http://www.computerworld.com/storyba/0,4125,NAV47_STO65969,00.html>

A very interesting Web page from David Wheeler, giving measurements
that suggest open source operating systems may have an advantage with
respect to security:
<http://www.dwheeler.com/oss_fs_why.html#security>

Interesting Q&A with Gary McGraw on software security:
<http://news.cnet.com/news/0-1014-201-8006311-0.html>

Heap overflows:
<http://www.theregister.co.uk/content/55/23075.html>

Bad news.  2600 DeCSS appeal denied; Felten SDMI suit rejected:
<http://www.theregister.co.uk/content/6/23084.html>
<http://news.cnet.com/news/0-1005-200-8011238.html>

Good story about a company's physical security:
<http://www.infosecnews.com/opinion/2001/11/28_03.htm>

This isn't big enough for a doghouse entry, but it is funny enough to 
mention.  Norton SystemWorks 2002 includes a file erasure program called 
Wipe Info.  In the manual (page 160), we learn that "Wipe Info uses 
hexadecimal values to wipe files.  This provides more security than wiping 
with decimal values."  Who writes this stuff?

Security vulnerabilities in Nokia cellphones.
<http://www.theregister.co.uk/content/55/23080.html>
<http://www.theregister.co.uk/content/55/23232.html>

Microsoft bundling anti-virus software with the OS?
<http://www.zdnet.com/zdnn/stories/comment/0,5859,2830607,00.html>

SatireWire has the best take on the recent arrests of software pirates:
<http://www.satirewire.com/news/0112/uh_oh.shtml>
There are two basic kinds of pirates.  There are the kids and hobbyists,
who do this for fun and don't cost companies revenue.  And there are the
professionals, who sell pirated software for profit.  My fear is that
we're arresting the former while ignoring the latter.
<http://www.wired.com/news/technology/0,1282,49096,00.html>

Problems of PKI.  Last year the author railed against my "10 Myths of PKI" 
essay.  This is his retraction.  He learned the hard way: his PKI project 
failed and his PKI vendor is going under.
<http://www.infosecuritymag.com/articles/october01/columns_logoff.shtml>

The Goner worm has come, been over-hyped by the anti-virus companies, and
gone; its Israeli authors have been arrested.
<http://www.zdnet.com/zdnn/stories/news/0,4586,5100282,00.html>
<http://www.wired.com/news/technology/0,1282,48858,00.html>
<http://www.newscientist.com/news/news.jsp?id=ns99991672>
<http://www.theregister.co.uk/content/56/23278.html>
<http://www.theregister.co.uk/content/56/23292.html>
Here's an interesting footnote:  Chat channel volunteers take over the IRC 
channel.  It's the first time I've ever heard of a peacekeeping force 
moving in to keep out digital insurgents.
<http://www.newsbytes.com/news/01/172814.html>


** *** ***** ******* *********** *************

               Counterpane News



Happy Holidays to everyone.  Since I don't have your addresses, here's a 
virtual holiday card:
<http://www.counterpane.com/schneiercard.html>

Counterpane has been chosen as one of ComputerWorld magazine's "Top 100 
Emerging Companies to Watch for 2002":
<http://www.counterpane.com/pr-cw2.html>

Schneier is giving the keynote speech at the CyberCrime on Wall Street 
conference on January 10.
<http://www.cybercrime2002.com/home.html>


** *** ***** ******* *********** *************

        The Doghouse: The State of Nevada



In the spirit of the Indiana bill that would have legislated the value of 
pi, the State of Nevada has defined encryption.  According to a 1999 law, 
Nev. St. 205.4742:

"Encryption" means the use of any protective or disruptive measure, 
including, without limitation, cryptography, enciphering, encoding or a 
computer contaminant, to:
1. Prevent, impede, delay or disrupt access to any data, information, 
image, program, signal or sound;
2. Cause or make any data, information, image, program, signal or sound 
unintelligible or unusable; or
3. Prevent, impede, delay or disrupt the normal operation or use of any 
component, device, equipment, system or network.

Note that encryption may involve "cryptography, enciphering, encoding," but 
that it doesn't have to.  Note that encryption includes any "protective or 
disruptive measure, without limitation."  If you smash a computer to bits 
with a mallet, that appears to count as encryption in the state of Nevada.


** *** ***** ******* *********** *************

                      AES



We have an AES.  FIPS-197 was signed on November 26, 
2001.  Congratulations, NIST, on a multi-year job well done.

NIST special publication SP 800-38A, "Recommendation for Block Cipher Modes 
of Operation," is also available.  The initial modes are ECB, CBC, CFB, 
OFB, and CTR.  Other modes will be added at a later time.
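
As a concrete illustration of one of these modes, here is a minimal
CTR-mode sketch in Python.  It assumes the third-party PyCryptodome
package, which has no connection to NIST or SP 800-38A and is used here
purely for illustration.

  # Minimal sketch of AES in CTR mode, assuming the third-party
  # PyCryptodome package.
  from Crypto.Cipher import AES
  from Crypto.Random import get_random_bytes

  key = get_random_bytes(16)      # AES-128 key
  nonce = get_random_bytes(8)     # per-message nonce

  ct = AES.new(key, AES.MODE_CTR, nonce=nonce).encrypt(b"attack at dawn")

  # CTR turns the block cipher into a stream cipher: decryption simply
  # regenerates the same keystream and XORs it away.
  pt = AES.new(key, AES.MODE_CTR, nonce=nonce).decrypt(ct)
  assert pt == b"attack at dawn"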

The Second Key Management Workshop was held in early November.  Information 
is available on the NIST Web site.

AES info:
<http://csrc.nist.gov/encryption/aes/>

FIPS-197:
<http://csrc.nist.gov/publications/fips/fips197/fips-197.pdf>

SP 800-38A:
<http://csrc.nist.gov/publications/nistpubs/800-38a/sp800-38a.pdf>

Key-management info:
<http://csrc.nist.gov/encryption/kms/>


** *** ***** ******* *********** *************

        Fun with Vulnerability Scanners



It used to be that when you connected to one of Counterpane's mailers, it 
responded with a standard SMTP banner that read something like:

         220 counterpane.com ESMTP Sendmail 8.8.8/8.7.5;
         Mon, 7 May 2001 21:13:35 -0600 (MDT)

Because this information includes a sendmail version number, some people 
sent us mail that read, loosely interpreted:  "Heh heh heh.  Bruce's 
company runs a stupid sendmail!"

Until recently, the standard response of Counterpane's IT staff was to 
smile and say "Yes, that certainly is what the banner says," leaving the 
original respondent to wonder why we didn't care.  (There are a bunch of 
reasons we don't care, and explaining them would take both the amusement 
and the security out of it all.)

However, we were getting a bit tired of the whole thing.  We have companies 
run penetration tests against us on a regular basis, and more often than
not they complain that every one of our publicly available SMTP servers
has the same stupid version of sendmail on it.

Then, we got the results of a vulnerability scanner run against our Sentry, 
a special-purpose device.  The scanner complained that 1) the Sentry's SMTP 
service produced a banner, and 2) SMTP banners usually contain version 
information.  Hence, there was a potential security vulnerability.  The 
banner in question was:

         220 natasha ESMTP Sentry

As you can tell, this banner contains no version information at all.  The 
scanner blindly alerts every time an SMTP server returns a banner.  This is 
the equivalent of those envelopes that say "YOU MAY ALREADY HAVE WON!" in
big red letters on the outside.  You might have a vulnerability.  Probably
not; but you never know: you're telling people something, and they might be
able to get information out of it.

Unfortunately, RFC 821 *requires* an SMTP server to return a banner.  The 
original RFC calls for a banner that starts with 220 and the official name 
of the machine; the rest of the banner is up to the user.  It's traditional 
for servers that support ESMTP to mention this in their banner.  Now, many 
RFCs are more honored in the breach than in the observance, but in pure 
practical terms, if your SMTP server doesn't say something that starts with 
220, it won't work.  No banner, no mail.
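
For the curious, here is a minimal sketch of what a scanner (or anyone
else) sees: open a connection to port 25 and read the one required 220
line.  It is written in Python for illustration, and the hostname is a
placeholder, not a real machine.

  # Minimal sketch: read the RFC 821 greeting from an SMTP server.
  import socket

  with socket.create_connection(("mail.example.com", 25), timeout=10) as s:
      banner = s.recv(1024).decode("ascii", "replace").strip()

  print(banner)                    # e.g. "220 natasha ESMTP Sentry"
  assert banner.startswith("220"), "no 220 greeting, no mail"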

This means that it's impossible to avoid setting off the vulnerability 
scanner.  It is, however, possible to avoid actually giving out useful
information.  There are a lot of approaches to this: the strong, silent
type that our second example almost achieves (220 hostname ESMTP); the 
deceptive type, which our first example achieves (give out a banner that 
implies vulnerabilities you don't have -- for maximum amusement value, pick 
one from ancient times); the confusing type, which gives a different banner 
every time (some hosts do really funny versions of this).  However, none of 
these solves the basic problem of getting people to stop complaining, and 
the complainers are a much bigger problem for us than the attackers.

Attackers are going to figure out what SMTP server is running, regardless 
of its banner.  They can simply try all the vulnerabilities.  Therefore, 
you get rid of attackers by getting rid of vulnerabilities.  A lot of 
attackers are just running scripts, so you can reduce the annoyance value 
by running a banner that doesn't match the script, but almost any approach 
will achieve that.

The human beings who complain, however, are unwilling to beat on your SMTP 
server to figure out what it is.  Deceptive banners fool them reliably, 
wasting your precious time dealing with them.  Empty banners don't get rid 
of them reliably.  We have therefore moved to the amusing defense; our new 
banners read:

	220 This ESMTP banner intentionally left blank.

Scanners will still go off, but pretty much anybody can tell that this 
doesn't contain useful information.

Technically, this isn't RFC compliant, unless you name your host 
"This".  We would worry about this more if we hadn't already been running a 
single host under multiple IP addresses, each with a different name 
attached, each with exactly the same banner, hostname and all.  Nothing 
ever complained that the name in the banner didn't match the hostname.  No 
penetration test ever even noticed that all these "different" machines were
the same, even when they complained about the informative banner that told
them so.  We figure "This" will do us just as well as a name.  (You could
just as well put the hostname after the 220 if you feel compliant or use a 
mail system that cares.)

This is all very amusing, and it reduces letters of complaint that people 
actually bother to write, but it doesn't do a thing for scanners.  And it 
isn't just SMTP that the scanners complain about; no, they complain about 
SSH (it has a banner too, which is equally required), and they complain 
about our mail server accepting EXPN (it doesn't return an error; it 
doesn't return any information, either, but you have to look at the output 
to tell that).  They often complain that the Sentry accepts EXPN, even 
though it doesn't respond to the command at all.  All in all, the scanner 
output is all too much like my mail; the important bills are in danger of 
being buried by the junk mail.


This was written with Elizabeth Zwicky.  It appeared in the November 2001
issue of Dr. Dobb's Journal.


** *** ***** ******* *********** *************

             Comments from Readers



From: Nathan Myers <ncm@nospam.cantrip.org>
Subject: Full Disclosure

Your message is consistent, effective, and helpful.  However, one remark 
you often repeat is being used to justify harmful practices, and even 
harmful legislation.  It plays into the hands of Microsoft and those like them.

In your essay on full disclosure, you wrote: "the sheer complexity of 
modern software and networks means that vulnerabilities, lots of 
vulnerabilities, are inevitable."  Microsoft's Scott Culp had written, "all 
non-trivial software contains bugs."  The difference between the two 
statements is probably too subtle for most of your readers.  As you say, 
almost all software vendors do very shoddy work, and most large systems are 
riddled with holes.  Still, the step from "almost all" to "all" is much 
larger than it might seem.

From the standpoint of a judge or legislator, this makes all the
difference in the world.  If reliable software really cannot be written, 
then Microsoft and its ilk must be forgiven their sloppiness at the outset; 
it would be wrong to hold them to an impossible standard.  If in fact 
reliable software can be written, then such ilk are negligent in failing to 
produce it.

This is not an academic point.  It affects your argument, and 
Microsoft's.  If a software system will always be full of holes no matter 
how many patches are applied, publicizing holes just makes it harder for 
network administrators to keep up.  It is the availability of reliable 
alternatives that cinches the full disclosure argument: users can get off 
the patch treadmill by switching to software that's not buggy.  The extra 
work done to ensure reliability pays off when users switch to the reliable
product, or never need to.  Full disclosure punishes the sloppy (and their customers) and
rewards the careful (and their customers).

It doesn't take many examples of truly reliable software to make the point, 
in principle.  How many bugs remain in Donald Knuth's TeX?  In Dan 
Bernstein's qmail?  These were not billion-dollar efforts.

Once it's demonstrated that reliability is possible, getting it becomes a 
matter of economics.  Microsoft, rather than saying reliable software is 
impossible, is forced to admit instead ($40 billion in the bank 
notwithstanding) that they simply cannot afford to write reliable software, 
or that their customers don't want it, or, more plausibly, that they just 
can't be bothered to write any, customers be damned.

Instead of promoting a destructive fatalism about the software components 
we rely on, you would do better to say simply that current economic 
conditions lead most organizations to deploy systems known to be full of 
vulnerabilities.  Leave open the possibility that slightly different 
circumstances would allow for a reliable infrastructure.  Reliability is no 
substitute for effective response, but it just might be what it takes to 
make effective response possible.



From: "John.Deters" <John.Deters@target.com>
Subject: Full Disclosure

There are known cases of thieves stealing credit card information from 
e-commerce Web sites.  The criminal who calls himself Maxus tried 
blackmailing CD Universe (and allegedly others) into paying him to not 
reveal the credit card data he had stolen; when that failed he used those 
stolen account numbers to commit theft.  He was never caught.  What if the 
exploit he used was the same exploit that was used by Code Red, or Nimda, 
or any of the other worms-of-the-week?  He's now out of business, as are 
any other un-named, un-identified, un-caught e-commerce thieves.

I think that exploits such as Code Red, while harmful in the short term, 
cause appropriate fires to be lit beneath the appropriate sysadmins.  They 
bring security issues to the radar screens of upper management.  I'm not 
condoning the use of malicious worms as security testers, but rather 
recognizing that their existence causes people to reevaluate their systems' 
security.  Maxus didn't release info regarding his exploit -- it was more 
valuable for him to steal.  How many other thieves were using these same 
exploits?  Without Code Red, perhaps hundreds of e-commerce Web sites would 
have remained unpatched, and would still be vulnerable to the professional 
thieves.  Perhaps many still are, but certainly not as many as there would 
have been without Code Red.



From: "Gregory H Westerwick" <gregoryw@us.ibm.com>
Subject: Full Disclosure

Your discussion in the latest newsletter is really a special case of a 
debate that has been going on in democratic and open societies since their 
inception.  Recent examples in the national press are the FBI's warnings 
about impending terrorist activities and Governor Davis's warning about 
possible attacks on bridges in California.  Was the public better served by 
knowing of credible evidence of terrorist plans?  Did this elicit 
unnecessary panic?  What would have been the political fallout if there had
been an attack that the state knew was likely but hadn't warned the public
about?  Did the warning change the terrorists' plans?  I don't think we will
ever have a good answer to these questions, and will probably be debating 
the merits of each side into the next century.

Problem denial is not restricted to the computer industry.  Several years 
ago, I almost swallowed a piece of plastic while drinking a Pepsi.  It 
looked like a small piece of nozzle about the size of a fingernail.  I 
called the Pepsi consumer 800 number, told them about it and gave them the 
batch number from the can so they could go back through the quality records 
and determine if they had an endemic problem with their filling 
machinery.  The next day I got a very apologetic Pepsi rep on the phone 
thanking me for the information, and a week later received an envelope full 
of free six-pack coupons in the mail.

A few months later, the country had another round of "things in the can" 
lawsuits, where crackpots were allegedly finding mice, syringes, and other 
assorted evil junk in drink cans.  Shortly after that, at the behest of the 
bottling industry, came laws that made it a felony to falsely claim finding 
foreign objects in drink cans.  If I found that same piece of plastic in a 
can today, I'm not sure I would let Pepsi know, and they would have one 
less piece of information with which to improve their processes and 
machinery.  They would probably have one less customer as well.



From: "Charles L. Jackson" <chuck@jacksons.net>
Subject: Full Disclosure

I think that the responsibility for reliability and security should fall on 
the user.  If the lawyer wants to be protected against crashes, he could go 
to MS and negotiate a contract outside the shrink-wrap license.  He could 
go to a trusted supplier who would design a more reliable system (mirrored 
disks, off-site backups, version-by-version backups, automatic save) and 
who would warrant that system.  He could buy insurance.

MS's disclaimer of liability is necessary.  Maybe we should change the law 
and not let them disclaim the first $1,000 or first $1,000,000 of a loss.

Similarly, I think that much of the current and proposed law regarding 
hacking and intrusion into computer systems is counterproductive.  Today 
network administrators and CEOs can say "Our system was penetrated, and the 
FBI is after the criminals."  CEOs cannot say, "We left the merger plans
and our secret list of herbs and spices on the table in the restaurant 
while we went to the bathroom, and the FBI is after the criminals who read 
the secrets."  I think that the current set of incentives lets the people 
with the final authority and the responsibility for security off the hook.

Thus, I propose that the law should be that, "If an outsider breaks into 
your computer system over the Internet or via a dial-up connection and 
steals something, they get to keep it."

A weakened variant of my rule might be: "If a script kiddie breaks into
your system and steals something using a hole for which a patch has been 
available for 30 days, they get to keep it."

The third and weakest variant is: "If someone logs into your system using
either a manufacturer's default password or the username/password pair
(guest, guest) and steals something, they get to keep it."

My purpose with these policy proposals is twofold.  One, to strengthen the
incentives for individuals and firms to both buy and use good
security.  Two, to create an environment where the user or user
organization understands that it bears the responsibility for security.

Notice that my argument does not apply to DoS attacks, physical intrusion,
etc.  But, if somebody can telnet to your server, log in, and have the
system mail them 500 copies of AC, 2nd Ed., that should be your problem, 
not Microsoft's or Oracle's.



From: Pekka Pihlajasaari <Pekka@data.co.za>
Subject: Full Disclosure

I feel that it is simplistic to consider vulnerabilities to be the result 
of a programming mistake.  This reduces the complex problem of correct 
systems development to an interpretation error in implementation.  I
suspect that more vulnerabilities result from errors in requirements
capture, carried through specification development and into design, than
from the occasional programming bug.  Relegating the problem to programming means that the bug
is identifiable through correct testing to meet requirements, whereas a 
higher-level error will not become visible without actually questioning 
system requirements.  Accepting this more holistic view of the sources of
vulnerabilities will move responsibility away from the developer charged
with implementation and into the hands of the management responsible for the project.



From: Greg Guerin <glguerin@amug.org>
Subject: Full Disclosure

Your article reminded me of two quotations:

"No one pretends that democracy is perfect or all-wise. Indeed, it has been 
said that democracy is the worst form of Government except all those other 
forms that have been tried from time to time."

and:

"I know no safe depository of the ultimate powers of the society but the 
people themselves; and if we think them not enlightened enough to exercise 
their control with a wholesome discretion, the remedy is not to take it 
from them, but to inform their discretion."

I would argue that full disclosure is not perfect, and is even the worst 
form of reporting security flaws, except for all the others.  I would 
further argue that if ordinary people are unable to handle full disclosure 
with the necessary understanding and discretion, the remedy is not to keep 
the information from them, but to enlighten them.

And the authors of those two quotations?  None other than those infamous 
troublemakers and malcontents, Winston Churchill and Thomas Jefferson, 
respectively.



From: "Marcus de Geus" <marcus@degeus.com>
Subject: Full Disclosure

I feel there is one aspect that is receiving too little attention in this 
debate: the effect that full public disclosure of software bugs would have 
on buyers of new computer software.

The current debate focuses on the security risk (whether perceived or 
actual) that results from publishing a security-related bug rather than 
keeping its existence under wraps for as long as possible.  Consequently, 
the central issue of the debate has become the risk that full disclosure 
brings *to* existing systems, rather than the risk constituted *by* these 
systems.

Whereas the proponents of full disclosure consider the benefits of forced 
changes, i.e., software evolution, the champions of bug secrecy like to 
point out the threat that full disclosure brings to the large installed 
base.  The latter is a bit like arguing that the use of tanks in WWI was a 
waste of perfectly good trenches.

I have a sneaking suspicion that one of the parties in this debate has a
hidden agenda, which is to divert attention from the effect that full 
disclosure of security bug information to *the general public* (as opposed 
to systems administrators, knowledgeable users, crackers, and script 
kiddies) would have on the sales figures of certain software.  In other 
words, the object of the current exercise is to prevent "evolution through 
natural selection" by avoiding normal market mechanisms.  (Now where did 
this come up before?)

Consider a scenario in which notices of defective software are routinely 
published in the daily press, the way recall notices from manufacturers of 
motorcars or household appliances are.  Considering the economic 
consequences of certain security defects, not to mention the risk to life 
and limb (e.g., in hospital IT systems), there is a case to be made for 
compulsory publication of such notices.

I suspect that the manufacturer mentioned most often in such notices (say,
twice a week; see <http://www.vnunet.com/Analysis/1126488>) would
find himself having to either face plummeting sales figures or rapidly 
improve the quality of his products.  (In either case, the security 
problems at issue would be resolved, by the way.)

I wonder, could the studious avoidance of this subject be the result of 
fears that evolution may just prefer quality over quantity?



From: Mike Bursell <mike@p2ptrust.org>
Subject: Window of Exposure -- area to volume

 >You can think of this as a graph of danger versus time,
 >and the Window of Exposure as the area under the graph.

It occurred to me that if we add another dimension -- "number of systems 
affected" or "vulnerable install base" -- we can move to volume, rather 
than just area.  This could be useful for corporations and communities to 
allow some degree of risk management via scenario analysis, particularly if 
trend analysis of how quickly companies get to the inflection point (if at 
all) is taken into account.

Although we hear lots about the low maintenance costs of homogeneous 
networks or administrative domains, one way of attempting to reduce the 
risk on your systems is to spread the risk by having heterogeneous systems 
-- but quantifying this can be difficult.  This sort of analysis might
make that quantification possible.



From: Martin Dickopp <firefly-mail@gmx.net>
Subject: Linux and the DMCA

On Thu, Nov 15, 2001 at 01:45:27AM -0600, Bruce Schneier wrote:
 > A new version of Linux is being released without security
 > information, out of fear of the DMCA.  Honestly, I don't see how
 > the DMCA applies here, but this is a good indication of the level
 > of fear in the community.

One of the bugs being fixed in the Linux kernel allowed users to circumvent 
local file permissions.  Kernel programmer Alan Cox probably assumes that, 
since local file permissions can be used to protect copyrighted material, 
disclosing details about the bug would be illegal under the DMCA.

Cox is a U.K. citizen who wants to be able to enter the U.S. without becoming
a second Sklyarov.  Why should he undergo the trouble and cost of
consulting an expert on U.S. law about the issue at hand?  Instead, he just stays on
the safe side, understandably.

There seems to be little public awareness of the implications of the DMCA, 
because most people don't see themselves affected.  Therefore, the right 
reaction to the DMCA is not to ignore it, but to abide by it, even in cases
where it can just reasonably be assumed (without taking legal advice) to
apply.  This is what Cox did, and I fully appreciate his reaction.



From: tgreen@cix.co.uk (Terence Green)
Subject:  Microsoft Security Patches

XP is not the first time Microsoft has bundled a stealth update in a 
security patch.  Such actions seriously undermine confidence in Microsoft's 
understanding of security but rarely receive the attention they deserve.

Microsoft Security Bulletin MS01-046 (Access Violation in Windows 2000 IrDA 
Driver Can Cause System to Restart) dated August 21, 2001 patches a 
vulnerability in Windows 2000 (an unchecked buffer).

<http://www.microsoft.com/technet/security/bulletin/MS01-046.asp>

The new functionality is not mentioned in the bulletin itself but the link 
to the patch leads to a page bearing the following note:

"Note: This update also includes functionality that allows Windows 2000 to 
communicate with infrared-enabled mobile devices in order to establish a 
dial-up networking connection via an infrared port.  For more information 
about this issue, read Microsoft Knowledge Base (KB) Article Q252795."

<http://support.microsoft.com/support/kb/articles/Q252/7/95.asp>

Q252795 explains how, when releasing Windows 2000, Microsoft removed the 
ability to support virtual serial ports so that Windows 2000 would not 
"inherit limitations."

One effect was to orphan IrDA-enabled mobile phones with modems that could 
otherwise have been used with Windows 2000 to make dial-up connections.  It 
also prevented Palm organizers that were able to connect with Windows 98 
via IrDA from doing the same with Windows 2000.  The functionality in 
security patch MS01-046 addresses the mobile phone issue.


** *** ***** ******* *********** *************


CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, 
insights, and commentaries on computer security and cryptography.  Back 
issues are available on <http://www.counterpane.com/crypto-gram.html>.

To subscribe, visit <http://www.counterpane.com/crypto-gram.html> or send a 
blank message to crypto-gram-subscribe@chaparraltree.com.  To unsubscribe, 
visit <http://www.counterpane.com/unsubform.html>.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who will 
find it valuable.  Permission is granted to reprint CRYPTO-GRAM, as long as 
it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier.  Schneier is founder and CTO of 
Counterpane Internet Security Inc., the author of "Secrets and Lies" and 
"Applied Cryptography," and an inventor of the Blowfish, Twofish, and 
Yarrow algorithms.  He is a member of the Advisory Board of the Electronic 
Privacy Information Center (EPIC).  He is a frequent writer and lecturer on 
computer security and cryptography.

Counterpane Internet Security, Inc. is the world leader in Managed Security 
Monitoring.  Counterpane's expert security analysts protect networks for 
Fortune 1000 companies world-wide.

<http://www.counterpane.com/>

Copyright (c) 2001 by Counterpane Internet Security, Inc.

----- End forwarded message -----

-- 
                                                  F
   [[ From: scot@autonomous.org ]]                |
+--[[ NERVE AGENT AUDIO SYSTEMS ]]--+--(CH3)2CH-O-P=O--+
   [[ http://mp3.com/nerveagent ]]                |
                                                  CH3

#  distributed via <nettime>: no commercial use without permission
#  <nettime> is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: majordomo@bbs.thing.net and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net