Future Imperfect
By David D. Friedman
Draft: 4/13/02
Part I: Prolog
Chapter I: Introduction
I recently attended an event where the guest speaker was a cabinet
member. In conversation afterwards, the subject of long term
petroleum supplies came up. He warned that at some point, perhaps a
century or so in the future, someone would put his key in his car's
ignition, turn it, and nothing would happen–because there would
be no more gasoline.
What shocked me was not his ignorance of the relevant economics,
which imply that if we ever do run out of gasoline it will be a long
slow process of steadily rising prices, not a sudden surprise. It was
the astonishing conservatism of his view of the future. It was as if
a similar official, a hundred years earlier, had warned that sometime
around the year 2000 we were going to open the door of the carriage
house only to find that the horses had starved to death for want of
hay. I do not know what the world will be like a century hence. But
it is not likely to be a place where the process of getting from here
to there begins by putting a key in an ignition, turning it, and
starting an internal combustion engine burning gasoline.
This book is about technological change, its consequences and how to
deal with them. In this chapter I briefly survey the technologies. In
the next I discuss how to adjust our lives and institutions to their
consequences.
I am not a prophet; any one of the technologies I discuss may turn
out to be a wet firecracker. It only takes one that does not to
remake the world. Looking at some candidates should make us a little
better prepared if one of those revolutions happens. Perhaps more
important, after we have thought about how to adapt to any of ten
possible revolutions, we will at least have a head start when the
eleventh drops on us out of the blue.
Much of the book grew out of a seminar I teach at the law school of
Santa Clara University. Each Thursday we discuss a technology that I
am willing to argue, at least for a week, will revolutionize the
world. On Sunday students email me legal issues that revolution will
raise, to be put on the class web page for other students to read.
Tuesday we discuss the issues and how to deal with them. Next
Thursday a new technology and a new revolution. Nanotech has just
turned the world into gray goo; it must be March.
Since the book was conceived in a law school, many of my examples
deal with the problem of adapting legal institutions to new
technology. But that is accident, not essence. The technologies that
require changes in our legal rules will also affect marriage,
parenting, political institutions, businesses, life, death and much
else.
Possible Futures
We start with three technologies relevant to privacy–one
that radically increases it, two that radically decrease it.
Privacy x 3, or Now You Have It, Now You Don't
Public Key encryption makes possible untraceable
communications intelligible only to the intended recipient. My
digital signature demonstrates that I am the same online persona you
dealt with yesterday and your colleague dealt with last year, with no
need for either of you to know such irrelevant details as age, sex,
or what continent I am living on. Hence the combination of computer
networking and public key encryption makes possible a level of
privacy humans have never known, an online world where people have
both identity and anonymity–simultaneously. One implication is
free speech protected by the laws of mathematics–arguably more
reliable and certainly with broader jurisdiction than the Supreme
Court. Another is the possibility of criminal enterprises with brand
name reputation–online archives selling other people's
intellectual property for a penny on the dollar, temp agencies
renting out the services of forgers and hit men.
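The signature idea described above can be sketched in a few lines of Python. This is a toy illustration using textbook RSA with tiny primes, not a real cryptosystem: actual keys run to hundreds of digits and use padded hashes, and every number below is chosen only to make the arithmetic visible.

```python
# Toy digital signature: the signer uses a private exponent d, anyone
# can check with the public pair (e, n). No name, age, sex, or
# continent is needed -- only the key.
import hashlib

p, q = 61, 53                  # toy primes; real keys are enormous
n = p * q                      # public modulus, 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent, 2753

def sign(message: bytes) -> int:
    """Only the holder of d can produce this value."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Anyone holding (e, n) can check it."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

sig = sign(b"same online persona as yesterday")
print(verify(b"same online persona as yesterday", sig))  # True
print(verify(b"a forged message", sig))   # rejected, with overwhelming odds
```

The same persona can sign message after message, year after year, and each signature proves continuity of identity while revealing nothing else.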
On the other hand ...
In the not too distant future you may be able to buy an inexpensive
video camera with the size and aerodynamic characteristics of a
mosquito. Even earlier, we will see–are already seeing–the
proliferation of cameras on lamp posts designed to deter crime.
Ultimately this could lead to a society where nothing is private.
Science fiction writer David Brin has argued that the best solution
available will be not privacy but universal transparency–a
world where everyone can watch everyone else. The police are watching
you–but someone is watching them.
It used to be that a city was more private than a village, not
because nobody could see what you were doing but because nobody could
keep track of what everybody was doing. That sort of privacy cannot
survive modern data processing. The computer on which I am writing
these words has sufficient storage capacity to hold at least a modest
amount of information about every human being in the U.S. and enough
processing power to quickly locate any one of those by name or
characteristics. From that simple fact arises the issue of who has
what rights with regard to information about me presently in the
hands, and minds, of other people.
Put all of these technologies together and we may end up with a world
where your realspace identity is entirely public, with everything
about you known and readily accessible, while your cyberspace
activities, and information about them, are entirely private, since
you control the link between your cyberspace persona and your
realspace identity.
Commerce in Cyberspace
The world that encryption and networking creates requires a
way of making payments–ideally without having to reveal the
identity of payer or payee. The solution, already worked out in
theory but not yet fully implemented, is ecash–electronic
money, privately produced, potentially untraceable. One minor
implication is that money laundering laws become unenforceable, since
large sums can be transferred by simply sending the recipient an
email.
A world of strong privacy requires some way of enforcing contracts;
how do you sue someone for breach when you have no idea who he is?
That and related problems lead us to a legal technology in which
legal rules are privately created and enforced by reputational
sanctions. It is an old technology, going back to the privately
enforced Lex Mercatoria from which modern commercial law
evolved. But for most modern readers, including most lawyers and law
professors, it will be new.
Property online is largely intellectual property, which raises the
problem of how to protect it in a world where copyright law is
becoming unenforceable. One possibility is to substitute
technological for legal protection. A song or database comes inside a
piece of software–Intertrust calls it a digibox–that
regulates its use. To play the song or query the database costs ten
cents of ecash, instantly transmitted over the net to the copyright
owner.
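The logic of such a container can be sketched in miniature. Everything here is an invented stand-in, not Intertrust's actual protocol: the point is only that the wrapper, not the law, decides whether the content plays.

```python
# A sketch of pay-per-use content wrapping: the song is released only
# after a small ecash payment clears. Wallet and fee are hypothetical.
PRICE = 10  # cents per play, the figure used in the text

class Wallet:
    def __init__(self, cents: int):
        self.cents = cents
    def pay(self, amount: int) -> bool:
        if self.cents < amount:
            return False
        self.cents -= amount
        return True

def play(song: str, wallet: Wallet) -> str:
    """The wrapper enforces the terms; no court is involved."""
    if wallet.pay(PRICE):
        return f"playing {song}"
    return "payment declined"

w = Wallet(cents=25)
print(play("song.mp3", w))  # playing song.mp3
print(play("song.mp3", w))  # playing song.mp3
print(play("song.mp3", w))  # payment declined -- wallet is empty
```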
Crime, Cops and Computers
Some technologies make the job of law enforcement harder.
Others make it easier–even too easy. A few years ago, when the
FBI was pushing the digital wiretap bill
[1]
through Congress, critics pointed out that the capacity they were
demanding the phone companies provide added up to the ability to tap
more than a million telephones–simultaneously.
We still do not know if they intend to do it, but it is becoming
increasingly clear that if they want to, they can. The major cost of
a wiretap is labor: someone has to listen. As software designed to let
people dictate to their computers gets better, that someone can be a computer
converting conversation to text, searching the text for key words or
phrases, and reporting the occasional hit to a human being. Computers
work cheap.
In addition to providing police a tool for enforcing the law,
computers also raise numerous problems for both defining and
preventing crimes. Consider the question of how the law should
classify a "computer break-in"–which consists, not of anyone
actually breaking into anything, but of one computer sending messages
to another and getting messages in reply. Or consider the potential
for applying the classical salami technique–stealing a very
small amount of money from each of a very large number of people–in
a world where tens of millions of people linked to the internet have
software on their computers designed to pay bills online.
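The arithmetic of the salami technique is simple enough to show directly. The figures below are invented for illustration: a slice too small for any victim to notice, multiplied across an internet-sized pool of payers.

```python
# The salami technique in two functions: what one victim loses versus
# what the thief collects. All numbers are hypothetical.
from decimal import Decimal

SLICE = Decimal("0.002")  # a fifth of a cent per transaction

def victim_cost(payment: Decimal) -> Decimal:
    """What one payer loses -- invisible on any statement."""
    return SLICE

def thief_take(n_victims: int) -> Decimal:
    """What the thief collects across all victims."""
    return SLICE * n_victims

print(victim_cost(Decimal("19.99")))  # 0.002
print(thief_take(10_000_000))         # 20000.000 dollars
```

No single theft is worth anyone's time to detect or prosecute; only the aggregate is a crime worth committing.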
Designer Kids, Long Life and Corpsicles
The technologies in our next cluster are biological. Two–paternity
testing and in vitro fertilization–have already abolished most
of the facts on which the past thousand years of family law are
based. It is no longer only a wise child who knows his father–any
child can do it, given access to tissue samples and a decent lab. And
it is no longer the case that the woman from whose body an infant is
born is necessarily its mother. The law has begun to adjust. One
interesting question that remains is to what degree we will
restructure our mating patterns to take advantage of the new
possibilities.
A little further into the future are technologies to give us control
over our children's genetic heritage. My favorite is the libertarian
eugenics sketched decades ago by science fiction author Robert
Heinlein–technologies that permit each couple to choose, from
among the children they might have, which ones they do have,
selecting the egg that does not carry the mother's tendency to
nearsightedness to combine with the sperm that does not carry the
father's heritage of a bad heart. Run that process through five or
ten generations, with a fair fraction of the population
participating, and you might get a substantial change in the human
gene pool. Alternatively, if we learn enough to do real genetic
engineering, we can forget about the wait and do the whole job in one
generation.
Skip next from the beginning of life to the end. Given the rate of
progress in biological knowledge over the past century, there is no
reason to assume that the problem of aging will remain insoluble.
Since the payoff is not only enormously large but goes most
immediately to the currently old, some of whom are also rich and
powerful, if it can be solved it is likely that it will be.
In one sense the problem of aging has already been solved–which
brings us to our next technology. There are currently several hundred
people whose bodies are not growing older–because they are
frozen, held at the temperature of liquid nitrogen. All of them are
legally dead. But their hope in arranging their current status was
that it would not be permanent–that with sufficient medical
progress it will some day be possible to thaw them out, curing both
the diseases that killed them and the damage done by freezing. If it
begins to look as though they are going to win their bet, we will
have to think seriously about adapting laws and institutions to a
world where there is an intermediate state between alive and dead and
quite a lot of people are in it.
The Real Science Fiction
Finally we come to three technologies whose effects, if they
occur, are sufficiently extreme that all bets are off, with both the
extinction and the radical alteration of our species real
possibilities within the lifespan of most of the people reading this
book.
One such is nanotechnology–the ability to engineer objects at
the atomic scale, to build machines whose parts are single atoms.
That is the way living things are engineered: A DNA strand or an
enzyme is a molecular machine. If we get good enough at working with
very small objects to do it ourselves, possibilities range from
microscopic cell repair machines that go through a human body fixing
everything that is wrong to microscopic self-replicating creatures
dedicated to turning the entire world into copies of themselves–known
in nanocircles as the "gray goo" scenario.
Artificial intelligence might beat nanotech in the annihilation
stakes–or in making heaven on earth. Raymond Kurzweil, a well
informed computer insider, estimates that in about thirty years there
will be programmed computers with human level intelligence. At first
glance that suggests a world of science fiction robots–if we
are lucky, obeying us and doing the dirty work. But if in thirty
years computers are as smart as we are and if current rates of
improvement–for computers but not for humans–continue,
that means that in forty years we will be sharing the planet with
beings at least as much smarter than we are as we are smarter than
chimpanzees.
[2]
Kurzweil's solution is for us to get smarter too–to learn to do
part of our thinking in silicon. That could give us a very strange
world–populated by humans, human/machine combinations, machines
programmed with the contents of a human mind that think they are that
human, machines that have evolved their own intelligence, and much
else.
The final technology is virtual reality. Present versions use the
brute force approach: feed images through goggles and headphones to
eyes and ears. But there is a more elegant version that most of us
experience daily, or rather nightly. If we can crack the dreaming
problem, figure out how our nervous system encodes the data that
reaches our minds as sensory perceptions, goggles and headphones will
no longer be necessary. Plug a cable into a socket at the back of
your neck for full sense perception of a reality observed by
mechanical sensors, generated by a computer, or recorded from another
brain.
The immediate payoff is that the blind will see–through video
cameras–and the deaf hear. In the longer run we may get a world
where most of the important stuff consists of signals moving from one
brain to another over a network, with physical acts by physical
bodies playing only a minor role. To visit a friend in England there
is no need to move either his body or mine–being there is as
easy as dialing the phone. That is one of many reasons why I do not
expect gasoline powered automobiles to play a major role in
transportation a century from now.
A few pages back, we were considering a world where realspace was
entirely public, cyberspace entirely private. As things presently
are, that would be a very public world, since most of us live most of
our lives in realspace. But if deep VR reverses the ratio, giving us
a world where all the interesting stuff happens in cyberspace and
realspace activity consists of little more than keeping our bodies
alive, it will be a very private world.
Alternatives
Any of the futures I have just sketched might happen, but not
all. If nanotech turns the world into gray goo in 2030, it will also
turn into gray goo the computers on which artificial super
intelligences would have been developed in 2040. If nanotech bogs
down and A.I. does not, the programmed computers that rule the world
of 2040 may be more interested in their own views of how the human
species should evolve than in our view of what sort of children we
want to have. And, closer to home, if strong private encryption is
built into our communication systems, with the encryption and
decryption under the control not of the network but of the
individuals communicating with each other–the National Security
Agency's nightmare for the past twenty years or so–it won't
matter how many telephone lines the FBI can tap.
That is one reason this book is not an attempt at prophecy. I think
it likely that some parts of what I describe will happen but I do not
know which. My purpose is not to predict which future we will get but
to use possible futures to think through how technological change
affects us and how we can and should change our lives and
institutions to adapt to it.
That is also one reason why, with a few exceptions, I have limited my
discussion of the future to the next thirty years or so. Thirty years
is roughly the point at which both A.I. and nanotech begin to matter.
It is also long enough to permit technologies that have not yet
attracted my attention to start to play an important role. Beyond
that my crystal ball, badly blurred at best, becomes useless; the
further future dissolves into mist.
Chapter II
Living With Change
New technologies change what we can do. Sometimes the result is to
make what we want to do easier; after writing a book with a word
processor, one wonders how it was ever done without one. Sometimes it
is to make what someone else is doing easier–and preventing him
from doing it harder. Enforcing copyright law became more difficult
when phototypesetting made the fixed cost of a pirate edition much
lower than the fixed cost of the authorized edition it competed with,
and more difficult again when inexpensive copying put the tools of
piracy in the hands of any college professor desirous of providing
his students with reading material. As microphones and video cameras
become smaller and cheaper, preventing other people from spying on me
becomes harder.
The obvious response is to try to keep doing what we have been doing.
If that is easier, good. If it is harder, too bad. The world must go
on, the law must be enforced. "Damn the torpedoes, full speed
ahead."
Obvious–and wrong. The laws we have, the ways we do things, are
not handed down from heaven on tablets of stone. They are human
contrivances, solutions to particular problems, ways of accomplishing
particular ends. If technological change makes a law hard to enforce,
the best solution is sometimes to stop enforcing it. There may be
other ways of accomplishing the same end–including some enabled
by the same technological change. The question is not "how do we
continue to do what we are doing" but "how do we best achieve our
objectives under new circumstances?"
Insofar as this book has a theme, that is it. "Full speed ahead; damn
the torpedoes" is the wrong answer.
A Simple Example: The Death of Copyright
Copyright law gives the author of a copyrightable work the
right to control who copies it. If copying a book requires an
expensive printing plant operating on a large scale, copyright law is
reasonably easy to enforce. If every reader owns equipment that can
make a perfect copy of a book at negligible cost, enforcing the law
becomes very nearly impossible.
So far as printed material is concerned, copyright law has become
less enforceable over the past century, but not yet unenforceable.
The copying machines that most of us have ready access to can
reproduce a book, but the cost is comparable to the cost of buying
the book and the quality somewhat worse. Copyright law in printed
works can still be enforced, even if less easily than in the
past.
That is not true for intellectual property in digital form. Anyone
with a computer equipped with a floppy drive can copy a hundred
dollar program onto a one dollar floppy. Anyone with a CD-R drive can
copy a four hundred dollar program onto a one dollar CD. And anyone
with a reasonably fast internet connection can copy anything
available online, anywhere in the world, to his hard drive.
Under those circumstances, enforcing copyright law against individual
users is very nearly impossible. If my university decides to save on
its software budget by buying one copy of Microsoft Office and making
lots of copies, it is at serious risk; a discontented employee with
Bill Gates' email address could get us in a lot of trouble. But if I
choose to provide copies to my wife and children–which under
Microsoft's license I am not permitted to do–or even to a dozen
of my friends, there is in practice little that Microsoft can do
about it.
[3]
That could be changed. If one wanted to enforce present law badly
enough, one could do it–with suitable revisions on the
enforcement end. Every computer in the country would be subject to
random search. Anyone found with an unlicensed copy of software would
go straight to jail. Silicon Valley would empty and the prisons would
fill with geeks, teenagers, and children.
Nobody regards that as a tolerable solution to the problem, although
there has been some shift in the direction of expanded criminal
liability for copyright infringement.
[4]
In practice, software companies take it for granted that they cannot
use the law to prevent individual copying of their programs and so
fall back on other ways of getting rewarded for their efforts.
Holders of music copyrights face similar problems. As ownership of
tape recorders became common, piracy became easier. Shifting to CDs
temporarily restored the balance, since they provided higher quality
than tape and were expensive to copy–but then cheap CD
recorders and digital audio tape came along. Most recently, as
computer networks have gotten faster, storage cheaper, and digital
compression more efficient, the threat has been from online
distribution of MP3 files encoding copyrighted songs.
Faced with the inability to enforce copyright law against
individuals, what are copyright holders to do? There are at least
three answers:
1. Substitute technological protection for legal protection.
In the early days of home computers, some companies sold their
programs on disks designed to be uncopyable. Consumers found that
inconvenient, either because they wanted to make copies for their
friends or because they wanted to make backup copies for themselves.
So other software companies sold programs designed to copy the copy
protected disks. One company produced a program–SuperUtility
Plus–designed to do a variety of useful things, including
copying other companies' protected disks. It was itself copy
protected. So another company produced a program–SuperDuper–whose
sole function in life was to make copies of SuperUtility Plus.
Technological protection continues in a variety of forms, some of
which will be discussed in more detail in a later chapter. All face a
common problem. It is fairly easy to provide protection sufficient to
keep the average user from using software in ways in which the
producer does not want him to use it. It is very hard to provide
protection adequate against an expert. And one of the things experts
can do is to make their expertise available to the average user in
the form of software designed to defeat protection schemes.
This suggests that the best solution might be technological
protection backed up by legal protection against software designed to
defeat it. In the early years, the providers of copy protection tried
that approach. They sued the makers of software designed to break the
protection, arguing that they were guilty of contributory
infringement (helping other people copy copyrighted material), direct
infringement (copying and modifying the protection software in the
process of learning how to defeat it) and violation of the licensing
terms under which the protection software was sold. They
lost.
[5]
More recently, owners of intellectual property successfully supported
new legislation– [Section ??? of] the Digital
Millennium Copyright Act–which reverses that result, making it
illegal to produce or distribute software whose primary purpose is
defeating technological protection. It remains to be seen whether or
not that restriction will itself prove enforceable.
2. Control only large scale copying:
Anyone with a video recorder can copy videos for his friends
[check this–how effective is current protection?].
Nonetheless, video rental stores exist in large numbers. They provide
the customer with an enormously larger selection than he could get by
copying his friends' cassettes, and they do so at a relatively low
cost. The video stores themselves cannot safely violate copyright
law, buying one cassette for a hundred outlets, because they are
large, visible organizations; it only takes one disgruntled customer
who notices that the video he rented is on a generic cassette to blow
the whistle. So producers of movies continue to get revenue from
video cassettes, despite the ability of customers to copy them.
There is no practical way for music companies to prevent one teenager
from making copies of a CD or a collection of MP3s for his friends–but
consumers of music are willing to pay to get the much wider range of
choice available from a store. The reason Napster threatened the
music industry was that it provided a similar range of choice at a
very much lower cost. Similarly for software. As long as copyright
law can be used to prevent large scale piracy, customers will be
willing to pay for the convenience provided by a legal, hence large
scale and easily findable, source for their software. In both cases,
the ability of owners of intellectual property to make piracy
inconvenient enough to keep themselves in business is threatened by
the internet, which offers the possibility of large scale public
distribution of pirated music and software.
3. Permit copying; get revenues in other ways:
"Most successful lecturers will in whispered tones confide to you
that there is no other journalistic or pedagogical activity more
remunerative–a point made by Mark Twain and Winston
Churchill."
(William F. Buckley, Jr.)
[6]
A century ago, prominent authors such as Mark Twain got a
good deal of their income from public lectures. Judging by the quote
from Buckley–and my own observations–some still do. That
suggests that, in a world without enforceable copyright, an author
could write books, provide them online to anyone who wanted them, and
make his living by selling services to his readers–public
lectures, consulting services, or the like. This is not a purely
conjectural possibility. Currently I provide the full text of three
books and numerous articles on my web page, for free–and
receive a wide range of benefits, monetary and non-monetary, by doing
so.
This is one example of a more general strategy: Give away the
intellectual property and get your income from it indirectly. That is
how both of the leading web browsers are provided. Netscape gives
away Navigator and sells the server software that Navigator interacts
with; Microsoft follows a similar strategy. It is also how radio and
television programs pay their bills; give away the program and get
revenue from the ads.
As these examples show, the death of copyright need not mean the
death of the intellectual property that copyright protects. What it
does mean is that the producers of that intellectual property must
find other ways of getting paid for their work. The first step is
recognizing that, in the long run, simply enforcing existing law is
not an option.
Defamation Online: A Less Simple Example
A newspaper publishes an article asserting that I am a wanted
criminal, having masterminded several notorious terrorist attacks.
Colleagues find themselves otherwise engaged when I propose going out to
dinner. My department chair assigns me to teach a course on Sunday
mornings with an enrollment of one. I start getting anonymous phone
calls. My recourse under current law is to sue the paper for libel,
forcing them to retract their false claims and compensate me for
damage done.
Implicit in this approach to the problem of defamation are two
assumptions. One is that when someone makes a false statement to
enough people to do serious damage, the victim can usually identify
either the person who made the statement or someone else arguably
responsible for his making it–the newspaper if not the author
of the article. The other is that at least one of the people
identified as responsible will have enough assets to be worth
suing.
In the world of twenty years ago, both assumptions were usually true.
The reporter who wrote a defamatory article might be too poor to be
worth suing, but the newspaper that published it was not–and
could reasonably be held responsible for what it printed. It was
possible to libel someone by a mass mailing of anonymous letters, but
a lot of trouble to do it on a large enough scale to matter to most
victims.
Neither is true any longer. It is possible, with minimal ingenuity,
to get access to the internet without identifying yourself. With a
little more technical expertise, it is possible to communicate online
through intermediaries–anonymous remailers–in such a way
that the message cannot be linked to the sender. Once online, there
are ways to communicate with large numbers of people at near zero
cost: mass email, posts on Usenet news, a page on the World Wide Web.
And if you choose to abandon anonymity and spread lies under your own
name, access to the internet is so inexpensive that it is readily
available to people without enough assets to be worth suing.
One possible response is that we must enforce the law–whatever
it takes. If the originator of the defamation is anonymous or poor,
find someone else, somewhere in the chain of causation, who is
neither. In practice, that probably means identifying the internet
service provider through whom the message passed and holding him
liable. A web page is hosted on some machine somewhere; someone owns
it. An email came at some point from a mail server; someone owns
that.
That solution makes about as much sense as holding the U.S. Post
Office liable for anonymous letters. The publisher of a newspaper can
reasonably be expected to know what is appearing in his pages. But an
ISP has no practical way to monitor the enormous flow of information
that passes through its servers–and if it could, we wouldn't
want it to. We can–in the context of copyright infringement we
do–set up procedures under which an ISP can be required to take
down webbed material that is arguably actionable. But that does no
good against a Usenet post, mass email, webbed defamation hosted in
places reluctant to enforce U.S. copyright law, or defamers willing
to go to the trouble of hosting their web pages on multiple servers,
shifting from one to another as necessary. Hence defamation law is of
very limited use for preventing online defamation.
There is–has always been–another solution to the problem.
When people tell lies about me, I answer them. The same technological
developments that make defamation law unenforceable online also make
possible superb tools for answering lies, and thus provide a
substitute, arguably a superior substitute.
My favorite example is Usenet News, a part of the internet older and
less well known than the web. To the user, it looks like a collection
of online bulletin boards, each on a different topic–anarchy,
short-wave radios, architecture, cooking history. When I post a
message to a newsgroup, the message goes to a computer–a news
server–provided by my ISP. The next time that news server talks
to another, they exchange messages–and mine spreads gradually
across the world. In an hour, it may be answered by someone in
Finland or Japan. The server I use hosts nearly thirty thousand
groups. Each is a collection of conversations spread around the world–a
tiny non-geographical community united, and often divided, by common
interests.
[7]
Google, which hosts a well known web search engine, also provides a
search engine for Usenet. Using it I can discover in less than a
minute whether anyone has mentioned my name anywhere in the world any
time in the last three days–or weeks, or years–in any of
more than thirty thousand newsgroups. If I get a hit, one click brings
up the message. If I am the David Friedman mentioned (the process
would be easier if my name were Myron Whirtzlburg), and if the
content requires an answer, a few more clicks let me post a response
in the same thread of the same newsgroup, where almost everyone who
read the original post will see it. It is as if, when anyone
slandered me anywhere in the world, the wind blew his words to me and
my response back to the ears of everyone who had heard them.
The protection Usenet offers against defamation is not perfect; a few
people who read the original post may miss my answer and more may
choose not to believe it. But the protection offered by the courts is
imperfect too. Most damaging false statements are not important
enough to justify the cost and trouble of a lawsuit. Many that are
important enough do not meet the legal requirements for liability.
Given the choice, I
prefer Usenet.
Suppose, however, that instead of defaming me on a newsgroup you do
it on a web page. Finding it is no trouble–Google provides a
search engine for the web too, and there are many others. But
answering it is not so easy. I can put up a web page with my answer
and hope that sufficiently interested readers will somehow find it,
but that is all I can do. The links on your web page are put there by
you, not by me–and you may be reluctant to add one to the page
that proves you are lying.
There is a solution to this problem–a technological solution.
Current web browsers show only forward links–links from the
page being read to other pages. It would be possible to build a web
browser, say Netscape Navigator 8.0, that automatically showed back
links, letting the user see not only what pages the author of this
page chose to link to but also what pages chose to link to
it.
[8] Once such
browsers are in common use, I can answer your webbed lies. I need
only put up a page with a link to yours. Anyone browsing your page
with the back link option turned on will be led to my rebuttal.
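The mechanics are simple: a back-link index is just the forward-link graph run backwards. A minimal sketch, with invented page names standing in for real URLs:

```python
# A back-link index is the forward-link graph inverted.
# Page names below are hypothetical, for illustration only.
forward = {
    "mypage":   ["yourpage"],              # my rebuttal links to your page
    "yourpage": ["some-ally", "another"],  # your links go only where you choose
}

back = {}
for page, links in forward.items():
    for target in links:
        back.setdefault(target, []).append(page)

# A browser with back links turned on, viewing "yourpage", can now
# show the reader that "mypage" links to it.
print(back["yourpage"])
```

A search engine that has already crawled the web has all the information needed to build such an index; the browser need only query it.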
There is a problem with this solution–a legal problem. Your web
page is covered by copyright, which gives you the right to forbid
other people from making either copies or derivative works. A browser
that displays your page as you intended is making a copy, but one to
which you have given implicit authorization by putting your page on
the web. A browser that displays your page with back links added is
creating a derivative work–one that you may not have intended
and, arguably, did not authorize. To make sure your lies cannot be
answered, you notify Netscape that they are not authorized to display
your page with back links added and threaten to sue them if they
do.
The issue of when one web page is an unauthorized derivative work of
another is currently being fought out in the context of "framing"–one
web site presenting material from another within a frame with its own
advertising outside. If the analysis I have just presented is
correct, the outcome of that litigation may be important to an
entirely different set of issues. The same legal rule–a strong
reading of the right to prevent derivative works online–that
provides protection for a site worried about other people free riding
on its content would also provide protection to someone who wanted to
spread lies online.
When technological change makes a law harder to enforce, the right
question to ask is not "what will we have to do to keep enforcing the
law." The right questions are "what purpose does this law serve" and
"how can that purpose now best be achieved." If technological changes
that make it harder to control damaging false statements by suing
also make it easier to control them by answering, then we have to see
how the legal system can accommodate that change–for example,
by not interpreting copyright law in a way that blocks developments
that make it easier to answer online defamation.
Unsteady
Ground
"My mother was a test tube, my father was a
knife."
Friday, Robert A. Heinlein
Technological changes change the cost of doing things.
But there is a more subtle way in which they affect us as well–by
making obsolete the categories we use to talk and think about the
world around us.
Consider the category of "parent." It used to be that, while there
might be some uncertainty about who a child's father–more
rarely mother–was, there was no question what "father" and
"mother" meant. Laws and social norms specifying the rights and
obligations of fathers and mothers were unambiguous in meaning, if
not always in application.
That is no longer the case. With current reproductive technology
there are at least two biological meanings of "mother" and will soon
be a third. A gestational mother is the woman in whose womb a fetus
was incubated. An egg mother is the woman whose fertilized egg became
the fetus. Once human cloning becomes an established technology, a
mitochondrial mother will be the woman whose egg, with its nucleus
replaced by the nucleus of the clone donor but its own extra-nuclear
mitochondrial DNA, developed into the fetus. And once genetic
engineering becomes a mature technology and we can produce offspring
whose DNA is a patchwork from multiple donors, the concept of "a"
biological mother becomes very nearly meaningless.
Oddly enough, the first round of responses to this problem had its
source not in a new technology but in a new social practice. It has
always been possible for a husband whose wife was infertile to have a
child by another woman and raise it as his own. But it is only
recently that such arrangements have become explicit and openly
recognized–in part because the development of artificial
insemination meant that the husband could get a woman he was not
married to pregnant without doing anything that would offend either
the woman he was married to or the legal and social ban on
adultery.
The existence of surrogacy contracts under which a woman agreed to
bear a child for a man and then give it up to that man and his wife
raised obvious legal issues. One, settled in the negative in the Baby
M case, was whether such a contract was legally enforceable–whether
the mother could be required to give up the child for adoption by its
intended parents.
[9]
Another, which became obvious only as reproductive technology added
additional complications, was who the mother was. Was motherhood
defined by biology–who bore the child or whose egg it developed
from–or by intent?
The Child
With Five Parents
A California couple wanted a child. The husband was sterile.
His wife was doubly sterile–she could neither produce a fertile
egg nor bring a fetus to term. They contracted with a sperm donor, an
egg donor, and a gestational mother. The donated egg was impregnated
with the donated sperm and implanted in the rented womb.
Before the child was born, the couple decided to get divorced. That
left the courts with a puzzle–what person or persons had the
legal rights and obligations of parenthood?
Under California law read literally, the answer was clear. The mother
was the woman from whose body the child was born. The father was her
husband. That was a sensible enough legal rule when the laws were
written. But it made no sense at all in a world where neither that
woman nor her husband was either related to the child or had intended
to parent it.
The court that finally decided the issue, like some but not all other
California courts presented with similar conundrums, sensibly ignored
the literal reading of the law, holding that the parents were the
couple who had set the train of events in motion, intending at that
time to be the parents.
[10]
They thus substituted for the biological definition that had become
technologically obsolete a social definition–motherhood by
neither egg nor womb but by intention.
This is a true story. If you don't believe me, go to a law library
and look up John A. B. v. Luanne H. B. (72 Cal. Rptr. 2d 280 (Ct.
App. 1998)).
[11]
The Living
Dead
Consider someone whose body is preserved at the temperature
of liquid nitrogen while awaiting the medical progress needed to
revive and cure him. Legally he is dead; his wife is a widow, his
heirs have inherited. But if he is in fact going to be revived, then
in a very real sense he is not dead–merely sleeping very
soundly. Our legal system, more generally our way of thinking about
people, takes no account of the special status of such a person.
There is a category of alive, a category of dead, and–outside
of horror movies and computer games–nothing between them.
The absence of such a category matters. It may, quite literally, be a
matter of life and death.
You are dying of a degenerative disease that will gradually destroy
your brain. If the disease is cured today, you will be fine. If it is
cured a year later, your body may survive but your mind will not.
After considering the situation, you decide that you are more than
willing to trade a year of dying for a chance of getting back your
life. You call up the Alcor Foundation and ask them to arrange to
have your body frozen–tomorrow if possible.
They reply that, while they agree with your decision, they cannot
help you. As long as you are legally alive, freezing you is legally
murder. You will simply have to wait another year until you are
declared legally dead–and hope that somehow, some day, medical
science will become capable of reconstructing you from what by that
time is left.
This too is, allowing for a little poetic license, a true story. In
Donaldson v. Van de Kamp[12],
Thomas Donaldson went to court in an unsuccessful attempt to get
permission to be frozen before, rather than after, his brain was
destroyed by a cancerous tumor.
The issues raised by these cases–the meaning of parenthood and
of death–will be discussed at greater length in later chapters.
Their function here is to illustrate the way in which technological
change alters the conceptual ground under our feet.
All of us deal with the world in terms of approximations. We describe
someone as tall or short, kind or cruel, knowing that the former is a
matter of degree and the latter both of degree and of multiple
dimensions. We think of the weather report as true, although it is
quite unlikely that it provides a perfectly accurate description of
the weather, or even that such a description is possible–when
the weather man says the temperature is 70 degrees in the shade, just
which square inch of shade is he referring to? And we classify a
novel as "fiction" and this book as "nonfiction," although quite a
lot of the statements in the former are true and some in the latter
are false.
Dealing with the world in this way works because the world is not a
random assemblage of objects–there is pattern to it.
Temperature varies from one patch of shade to another, but not by
very much, so while a statement about "the" temperature in the shade
may not be precisely true, we rarely lose much by treating it as if
it were. Similarly for the other useful simplifications of reality
that make possible both thought and communication.
When the world changes enough, some of the simplifications cease to
be useful. It was always true that there was a continuum between life
and death; the exact point at which someone is declared legally dead
is arbitrary. But, with rare exceptions,
[13]
it was arbitrary to within seconds, perhaps minutes–which
almost never mattered. When it is known that, for a large number of
people, the ambiguity not only exists but will exist for decades, the
simplification is no longer useful. It may, as in the case of Thomas
Donaldson, become lethal.
It's
Not Just Law, It's Life
So far my examples have focused on how legal rules should respond to
technological change. But similar issues arise for each of us in
living his own life in a changing world. Consider, for a story now in
part played out, the relations between men and women.
The Decline
of Marriage
For a very long time, human societies have been based on
variants of the sexual division of labor. All started with a common
constraint–women bear and suckle children, men do not. For
hunter gatherers, that meant that the men were the hunters and the
women, kept relatively close to camp by the need to care for their
children, the gatherers. In more advanced societies, that became,
with many variations, a pattern where women specialized in household
production and men in production outside the household.
A second constraint was the desire of men to spend their resources on
their own children rather than on the children of other men–a
desire ultimately rooted in the fact that Darwinian selection has
designed organisms, including human males, to be good at passing down
their own genes to future generations. Since the only way a man could
be reasonably confident that a child was his was for its mother not
to have had sex with other men during the period when it was
conceived, the usual arrangement of human societies, with a few
exceptions, gave men sexual exclusivity. One man might under some
circumstances sleep with more than one woman, but one woman was
supposed to, and most of the time did, sleep with only one
man.
[14]
Over the past few centuries, two things have sharply altered the
facts that led to those institutions. One is the decline in infant
mortality. In a world where producing two or three adult children
required a woman to spend most of her fertile years bearing and
nursing children, the sexual division of labor was sharp–one
profession, "mother," absorbed something close to half the labor
force. And in that world, with each woman specialized to the job of
being the mother of certain children–each of whom was, among
other things, the child of a particular man–there were very
large advantages to treating marriage as a lifetime
contract.
[15]
In today's world, a woman need bear only two babies in order to end
up with two adult children. And the increased division of labor has
drastically reduced the importance of household production; you may
still wash your own clothes, but most of the work was done by the
people who built the washing machine. You may still cook your own
dinner, but you are very unlikely to cure your own ham or make your
own soap.
As being a wife and mother went from a full-time to a part-time job, human
institutions adjusted. Market employment of women increased. Divorce
became more common. The sexual division of labor, while it still
exists, is much less sharp–many women do jobs that used to be
done almost exclusively by men, some men do jobs that used to be done
almost exclusively by women. All of this represented adaptation to a
set of technological changes–in medicine and in the production
of goods and services. And while some of the adaptations were legal,
most were social or individual.
The Future
of Marriage
One consequence of married women working largely outside of
the home is to make the enforcement of sexual exclusivity, never
easy,
[16] very
nearly impossible. There is, however, a technological alternative.
Modern paternity testing means that a husband can know whether his
wife's children are his even if he is not confident that he is her
only sexual partner.
This raises some interesting possibilities. We could have–are
perhaps moving towards–a variant of conventional marriage
institutions in which paternal obligations are determined by biology,
not marital status. We could have a society with group marriages but
individual parental responsibilities, since a woman would know which
of her multiple husbands had fathered any particular child. We could
have a society with casual sex but well defined parental obligations–although
that raises some practical problems, since it is much easier for a
couple to share parental duties if they are also living together, and
the fact that two people enjoy sleeping together is inadequate
evidence that they will enjoy living together.
All of these mating patterns probably exist already–for a
partial sample, see the Usenet newsgroup alt.polyamory. Whether any
become common will depend in large part on the nature of male sexual
jealousy. Is it primarily a learned pattern, designed to satisfy an
instinctual preference for one's own children? Or is it itself
instinctual–hard wired by evolution as a way of improving the
odds that the children a male contributes resources to carry his
genes? If the former, then once the existence of paternity testing
makes jealousy obsolete we can expect its manifestations to vanish,
permitting a variety of new mating patterns. If the latter, jealousy
is still obsolete but, given the slow pace of evolutionary change,
that fact will be irrelevant to behavior for a very long time, hence
we can expect to continue with some variant of monogamy, or at least
serial polygamy, as the norm.
The basic principle here is the same as in earlier examples of
adjusting to technological change. Our objective is not to save
marriage. It is to accomplish the purposes that marriage was designed
to serve. One way of doing that is to continue the old pattern even
though it has become more difficult–as exemplified by the
movement for giving couples the option of covenant marriage, marriage
on something more like the old terms of "till death do us part."
Another is to take advantage of technological change to accomplish
the old objective–producing and bringing up children–in
new ways.
Doing
Business Online
Technology affects law and love. Also business. Consider the
problem of contract enforcement online.
Litigation has always been a clumsy and costly way of enforcing
contractual obligations. It is even more so when geography vanishes.
It is possible to sue someone in another state, even another country–but
the more distant the jurisdiction, the harder it is. If online
commerce eventually dispenses with not only geography but real world
identity, so that much of it occurs between parties linked only to an
identity defined by a digital signature, enforcing contracts in the
courts becomes harder still. It is difficult to sue someone if you do
not know who he is.
There is an old solution–reputation. Just as in the case of
defamation, the same technology that makes litigation less practical
makes private action more practical.
Ebay provides a low tech example. When you win an auction and take
delivery of the goods, you are given an opportunity to report on the
result–did the seller deliver when and as scheduled, were the
goods as described? The reports on all past auctions by a given
seller are available, both in full and in summary form, to anyone who
might want to bid on that seller's present auctions. In a later
chapter we will discuss more elaborate mechanisms, suitable for
higher stakes transactions, by which modern information technology
can make reputational enforcement a good substitute for legal
enforcement.
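The bookkeeping behind such a reputational record is trivial; a minimal sketch, with invented sellers and feedback:

```python
from collections import defaultdict

# Hypothetical feedback records: (seller, score, comment).
feedback = [
    ("seller_a", +1, "goods as described, shipped on time"),
    ("seller_a", +1, "smooth transaction"),
    ("seller_a", -1, "arrived two weeks late"),
    ("seller_b", +1, "exactly as pictured"),
]

# The summary a bidder consults before trusting a seller with his money.
summary = defaultdict(lambda: {"positive": 0, "negative": 0})
for seller, score, comment in feedback:
    summary[seller]["positive" if score > 0 else "negative"] += 1

print(dict(summary["seller_a"]))  # a mixed record, visible to every future bidder
```

The value of the mechanism lies not in the arithmetic but in the fact that the record follows the seller from one transaction to the next.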
Brakes? What
Brakes?
When reading about the down side of technologies–Murder
Incorporated in a world of strong privacy or some future James Bond
villain using nanotechnology to convert the entire world to gray goo–your
reaction may be "Stop the train, I want to get off." In most cases,
that is not an option; this particular train is not equipped with
brakes.
Most of the technologies we will be discussing can be developed
locally and used globally. Once one country has a functional
nanotechnology, permitting it to build products vastly superior to
those made with old technologies, there will be enormous pressure on
other countries to follow suit. It is hard to sell glass windshields
when the competition is using structural diamond. It is even harder
to persuade cancer patients to be satisfied with radiation therapy
when they know that, elsewhere in the world, microscopic cell repair
machines are available that simply go through your body and fix
whatever is wrong.
For an example already played out, consider surrogacy contracts–agreements
by which a woman bears a child, either from her own or another
woman's egg, for another couple to rear as its own. The Baby M case
established that such contracts are not in general enforceable. State
legislation followed, with the result that in at least one state
merely signing such a contract is a felony [check Lee Silver for
facts].
None of this mattered very much. Someone who could afford the costs
of hiring a host mother, still more someone who could afford the cost
necessary to arrange for one mother to incubate another's egg, could
almost certainly afford the additional cost of doing it in a friendly
state. As long as there was one state that approved of such
arrangements, the disapproval of others had little effect even on
their own citizens. And even if the contracts were legally
unenforceable, it was only a matter of time before people in the
business of arranging them learned to identify and avoid potential
host mothers likely to change their mind after the child was born.
The attempt at legal regulation of this particular use of technology
had very little effect.
[17]
Another and ongoing example is the case of aging research. Many
people believe (I think mistakenly) that the world suffers from
serious problems of overpopulation. Others argue (somewhat more
plausibly) that a world without aging would risk political
gerontocracy and cultural stasis.
[18]
Hence many would–some do–argue that even if the problem
of aging can be solved, it ought not to be.
Such arguments become less convincing the older you get. Old people
control very large resources, both economic and political. While
arguments against aging research may win out somewhere, it is
unlikely that they will win out everywhere–and the cure only
has to be found once.
For a more disturbing example, consider artificial intelligence–a
technology that might well make human beings obsolete. At each stage,
doing it a little better means being better able to design products,
predict stock movements, win wars. That almost guarantees that at
each stage, someone will take the next step.
Even if it is possible to block or restrict a potentially dangerous
technology, as in a few cases it may be, it is not clear that we
should do it. We might discover that we had missed the disease and
banned the cure. If an international covenant backed by overwhelming
military power succeeds in restricting nanotechnological development
to government approved labs, that might save us from catastrophe. But
since government approved labs are the ones most likely to be working
on military applications of new technology, while private labs mostly
try to produce what individual customers want, the effect might also
be to prevent the private development of nanotechnological
countermeasures to government developed mass destruction. Or it might
turn out that our restrictions had slowed the development of
nanotechnology by enough to leave us unable to defend against the
result of a different technology–a genetically engineered
plague, for example.
There are legitimate arguments for trying to slow or prevent some of
these technological developments. Those arguments will be made–but
not here. For my purposes, it is more interesting to assume that such
attempts, if made, will fail, and try to think through the
consequences–how new technologies will change things, how human
beings will and should adapt to those changes.
Part
II: Privacy and Technology
Chapter
III: A World of Strong Privacy
There has been a lot of concern in recent years about the end
of privacy. As we will see in the next two chapters, such fears are
not entirely baseless; the development of improved technologies for
surveillance and data processing does indeed threaten individual
privacy. But a third and less familiar technology is working in
precisely the opposite direction. If the arguments of this chapter
are correct we will soon be experiencing in part of our life–an
increasingly important part–a level of privacy that human
beings have never known before. It is a level of privacy that not
only scares the FBI and the National Security Agency, two
organizations whose routine business involves prying into other
people's secrets, it sometimes even scares me.
We start with an old problem: How to communicate with someone without
letting other people know what you are saying. There are a number of
familiar solutions. If you are worried about eavesdroppers, check
under the eaves before saying things you do not want the neighbors to
hear. To be safer still, hold your private conversation in the middle
of a large, open field, or a boat in the middle of a lake. The fish
are not interested and there is nobody else within hearing.
That approach no longer works. Even the middle of a lake is within
range of a shotgun mike. The eaves do not have to contain
eavesdroppers–just a mike and a transmitter. If you check for
bugs, someone can still bounce a laser beam off your window pane and
use it to pick up the vibration from your voice. I am not sure that
satellite observation is good enough yet to read lips from orbit–but
if not, it soon will be.
A different set of old technologies were used for written messages. A
letter sealed with the sender's signet ring could not protect the
message, but at least it let the recipient know if it had been opened–unless
the spy was very good with a hot knife. A letter sent via a trusted
messenger was safer still–provided he deserved the trust.
A more ingenious approach was to protect not the physical message but
the information it contained, by scrambling the message and providing
the intended recipient with the formula for unscrambling. A simple
version was a substitution cipher, in which each letter in the
original message was replaced by a different letter. If we replace
each letter with the next one in the alphabet, we get "mjlf uijt"
from the words "like this."
"mjlf uijt" does not look much like "like this," but it is not very
hard, if you have a long message and patience, to deduce the
substitution and decode the message. More sophisticated scrambling
schemes rearrange the letters according to an elaborate formula, or
convert letters into numbers and do complicated arithmetic with them
to convert the message (plaintext) into its coded version
(ciphertext). Such methods were used, with varying degrees of
success, by both sides in World War II.
There were two problems with this way of keeping secrets. The first
was that it was slow and difficult–it took a good deal of work
to convert a message into its coded form or to reverse the process.
It was worth the cost if the message was the order telling your fleet
when and where to attack, but not for casual conversations among
ordinary people.
That problem has been solved. The computers most of us have on our
desktops can scramble messages, using methods that are probably
unbreakable even by the NSA, faster than we can type them. They can
even scramble–and unscramble–the human voice as fast as
we can speak. Encryption is now available not merely to the Joint
Chiefs of Staff but to you and me for our ordinary conversation.
The second problem is more difficult. To unscramble my scrambled
message, you need the key–the formula describing how I
scrambled it. But if I do not have a safe way of sending you
messages, I may not have a safe way of sending you the key either. If
I sent it by a trusted messenger but made a small mistake as to who
was entitled to trust him, someone else now has a copy and can use it to
decrypt my future messages to you. This may not be too much of a
problem for governments, willing and able to send information back
and forth in briefcases handcuffed to the wrists of military
attachés, but for the ordinary purposes of ordinary people
that is not a practical option.
About twenty-five years ago, this problem was solved. The solution was
public key encryption, a new way of scrambling and unscrambling
messages that does not require a secure communication channel for
either the message or the key.
[19]
Public key encryption works by generating a pair of keys–call
them A and B–each of which is simply a long number, and each of
which provides the information needed to unscramble what the other
has scrambled. If you encrypt a message with A, someone who possesses
only A cannot decrypt it–that requires B. If you encrypt a
message with B, on the other hand, you have to use A to decrypt it.
If you send a friend key A (your public key) while keeping key B
(your private key) secret, your friend can use A to encrypt messages
to you and you can use B to decrypt them. If a spy gets a copy of key
A, he can send you secret messages too. But he still cannot decrypt
the messages from your friend. That requires key B, which never
leaves your possession.
How can one have the information necessary to encrypt a message yet
be unable to decrypt it? How can one be able to produce two keys with
the necessary relationship but, if you have one key, be unable to
calculate the other? The answer to both questions depends on the fact
that there are some mathematical processes that are much easier to do
in one direction than another.
Most of us can multiply 293 by 751 reasonably quickly, using nothing
more sophisticated than pencil and paper, and get 220043. Starting
with 220043 and finding the only pair of three digit numbers that can
be multiplied together to give it takes a lot longer. The most widely
used version of public key encryption depends on that asymmetry–between
multiplying and factoring–using very much larger numbers.
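One can check the asymmetry directly. The brute-force search below is a sketch of the cheapest possible factoring attack at this tiny scale:

```python
def factor_into_three_digit_pair(n):
    # Brute force: try every three-digit number as a divisor.
    for p in range(100, 1000):
        if n % p == 0 and 100 <= n // p <= 999:
            return p, n // p
    return None

print(293 * 751)                             # one multiplication: 220043
print(factor_into_three_digit_pair(220043))  # hundreds of trial divisions: (293, 751)
```

With three-digit numbers the difference is a mild annoyance; with numbers hundreds of digits long, multiplying remains nearly instant while factoring becomes infeasible.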
Readers who are still puzzled may want to look at appendix I of this
chapter, where I describe a very simple form of public key encryption
suited to a world where people know how to multiply but have not yet
learned how to divide, or check one of the webbed descriptions of the
mathematics of the RSA algorithm, the most common form of public key
encryption.
[20]
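For readers who would rather see numbers than algebra, here is the key-pair property at work, using toy RSA-style values with absurdly small primes (real keys use numbers hundreds of digits long):

```python
# Toy RSA-style key pair. Tiny textbook numbers, for illustration only.
p, q = 61, 53
n = p * q   # 3233, the public modulus
A = 17      # key A (public exponent)
B = 2753    # key B (private exponent): A*B leaves remainder 1 mod lcm(p-1, q-1)

m = 65                    # a message, represented as a number smaller than n
c = pow(m, A, n)          # scramble with key A
assert c != m             # the ciphertext looks nothing like the message
assert pow(c, B, n) == m  # only key B unscrambles it
print(c)
```

Knowing A and n does not reveal B; computing B requires the primes p and q, which means factoring n.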
When I say that encryption is in practice unbreakable, what I mean is
that it cannot be broken at a reasonable cost in time and effort.
There is a sense in which almost all encryption schemes, including
public key encryption, are breakable
[21]
given an unlimited amount of time. If, for example, you have key A
and a message a thousand characters long encrypted with it, you can
decrypt the message by having your computer create every possible
thousand character message, encrypt each with A, and find the one
that matches. Alternatively, if you know that key B is a number a
hundred digits long, you could try all possible hundred digit
numbers, one after another, until you found one that correctly
decrypted a message that you had encrypted with key A.
Both of these are what cryptographers describe as "brute force"
attacks. To implement the first of them, you should first provide
yourself with a good supply of candles–the number of possible
thousand character sequences is so astronomically large that, using
the fastest available computers, the sun will have burned out long
before you finish. The second is workable if key B is a sufficiently
short number–which is why people who are serious about
protecting their privacy use long keys, and why people who are
serious about violating privacy–the National Security Agency,
for example–try to make laws restricting the length of the keys
that encryption software uses.
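The arithmetic behind the candles remark is easy to check; the attacker's speed below is a deliberately generous guess:

```python
# How hopeless is the first brute-force attack?
candidates = 26 ** 1000  # thousand-character messages, lowercase letters alone
rate = 10 ** 18          # a very generous guess at tries per second
sun = 5 * 10 ** 9 * 365 * 24 * 3600  # ~5 billion years until the sun burns out
tries = rate * sun

print(len(str(candidates)))  # about 1415 digits of possible messages
print(len(str(tries)))       # about 36 digits of tries before sunset
```

Even granting the attacker that absurd speed for the sun's whole remaining lifetime, the search covers a vanishingly small fraction of the possibilities.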
Encryption
Conceals ...
Imagine that everyone has an internet connection and suitable
encryption software, and that everyone's public key is available to
everyone else–published in the phone book, say. What
follows?
What I say
One obvious result is that we can have private
conversations. If I want to send you a message that nobody else can
read, I first encrypt it with your public key. When you respond, you
encrypt your message with my public key. The FBI, or my nosy
neighbor, is welcome to tap the line–everything he gets will be
gibberish to anyone who does not have the corresponding private
key.
To Whom
Even if the FBI does not know what I am saying, it can
learn a good deal by watching who I am saying it to–known in
the trade as "traffic analysis." That problem too can be solved using
public key encryption and an anonymous remailer, a site on the
internet that forwards email. When I want to communicate with you, I
send the message to the remailer, along with your email address. The
remailer sends it to you.
If that was all that happened, someone tapping the net could follow
the message from me to the remailer and from the remailer to you. To
prevent that, the message to the remailer, including your email
address, is encrypted with the remailer's public key. When he
receives it he uses his private key to strip off that layer of
encryption, revealing your address, and forwards the decrypted
message. Our hypothetical spy sees a thousand messages go into the
remailer and a thousand go out, but he can neither read the email
addresses on the incoming messages–they are hidden under a
layer of encryption–nor match up incoming and outgoing
messages.
What if the remailer is a plant–a stooge for whoever is spying
on me? There is a simple solution. The email address he forwards the
message to is not actually yours–it is the email address of a
second remailer. The message he forwards is your message plus your
email address, the whole encrypted with the second remailer's public
key. If I am sufficiently paranoid, I can bounce the message through
ten different remailers before it finally gets to you. Unless all ten
are working for the same spy, there is no way anyone can trace the
message from me to you.
Readers who want a more detailed description of how remailers work
and are comfortable with mathematical symbolism will find it in
appendix II.
We now have a way of corresponding that is doubly private–nobody
can know what we are saying and nobody can find out whom we are
saying it to. But there is still a problem.
Who I Am
When interacting with other people, it is helpful to be
able to prove your identity–which can be a problem online. If I
am leading a conspiracy to overthrow an oppressive government, I want
my fellow conspirators to be able to tell which messages are coming
from me and which from the secret police pretending to be me. If I am
selling my consulting services online, I need to be able to prove my
identity in order to profit from the reputation earned by past
consulting projects and make sure that nobody else free rides on that
reputation by masquerading as me.
That problem too can be solved by public key encryption. In order to
digitally sign a message, I encrypt it using my private key. I then
send it to you with a note telling you who it is from. You decrypt it
with my public key. The fact that what comes out is a message and not
gibberish tells you that it was encrypted with the matching private
key. Since I am the only one who has that private key, the message
must be from me.
My digital signature not only demonstrates that I sent the signed
message, it does so in a form that I cannot later disavow. If I try
to deny having sent it, you point out that you have a copy of the
message encrypted with my private key–something that nobody but
I could have produced. Thus a digital signature makes it possible for
people to sign contracts that they can be held to–and does so
in a way much harder to forge than an ordinary signature.
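The signing trick described above can be sketched with the same sort of toy RSA numbers used for illustration throughout this chapter (and equally insecure): encrypting with the private key produces something anyone can check with the public key, but that only the key's owner could have produced.

```python
# Toy digital signature: "sign" encrypts with the PRIVATE key,
# "verify" decrypts with the public key and compares.
# Tiny textbook-RSA numbers; for illustration only.

n, e, d = 3233, 17, 2753   # toy public modulus, public and private exponents

def sign(m, d, n):
    return pow(m, d, n)          # encrypt the message with the private key

def verify(m, sig, e, n):
    return pow(sig, e, n) == m   # public key recovers m only for a genuine sig

signature = sign(99, d, n)
assert verify(99, signature, e, n)       # the genuine message checks out
assert not verify(98, signature, e, n)   # any altered message fails
```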
And Who I Pay
If we are going to do business online, we need a way of
paying for things. Checks and credit cards leave a paper trail. What
we want is an online equivalent of currency–a way of making
payments that cannot later be traced, either by the parties
themselves or anyone else.
The solution, discussed in some detail in a later chapter, is
anonymous ecash. Its essential feature is that it permits people to
make payments to each other by sending a message, without either
party having to know the identity of the other and without any third
party having to know the identity of either of them. One of the many
things it can be used for is to pay for the services of an anonymous
remailer, or a string of anonymous remailers, thus solving the
problem of how to keep remailers in business without sacrificing
their customers' anonymity.
Combine and
Stir
Combine public key encryption, anonymous remailers, digital
signatures and ecash, and we have a world where individuals can talk
and trade with reasonable confidence that no third party is observing
them.
A less obvious implication is the ability to combine anonymity and
reputation. You can do business online without revealing your
realworld identity, hence with complete anonymity. At the same time,
you can prove you are the same person who did business yesterday, or
last year, by digitally signing your messages. Your online persona is
defined by its public key. Anyone who wants to communicate with you
privately uses that key to encrypt his messages; anyone who wants to
be sure you are the person who sent a message uses it to check your
digital signature.
With the exception of fully anonymous ecash, all of these
technologies already exist, implemented in software that is currently
available for free.
[22]
At present, however, they are mostly limited to the narrow bandwidth
of email–sending private text messages back and forth. As
computers and computer networks get faster, that will change.
Twice in the past month I traveled several hundred miles–once
by car, once by air–in order to give a series of talks. With
only mild improvements in current technology I could have given them
from my office. Both I and my audience would have been wearing
virtual reality goggles–glasses with the lenses replaced by
tiny computer screens. My computer would be drawing the view of the
lecture room as seen from the podium–including the faces of my
audience–at sixty frames a second. Each person in the audience
would have a similar view, from his seat, drawn by his computer.
Earphones would take care of sound. The result would be the illusion,
for all of us, that we were present in the same room seeing and
hearing each other.
Virtual reality not only keeps down travel costs, it has other
advantages as well. Some lecture audiences expect a suit and tie–and
not only do I not like wearing ties, all of the ties I own possess a
magnetic attraction for foodstuffs in contrasting colors. To give a
lecture in virtual reality, I don't need a tie–or even a shirt.
My computer can add both to the image it sends out over the net. It
can also remove a few wrinkles, darken my hair, and cut a decade or
so off my apparent age.
As computers get faster, they can not only create and transmit
virtual reality worlds, they can also encrypt them. That means that
any human interaction involving only sight and sound can be moved to
cyberspace and protected by strong privacy.
Handing out the Keys: A Brief
Digression
In order to send an encrypted message to a stranger or
check the digital signature on a message from a stranger, I need his
public key. Some pages back, I assumed that problem away by assuming
that everyone's public key was published in the phone book. While
that is a possible solution, it is not a very satisfactory one.
A key published in the phone book is only as reliable as whoever is
publishing it. If our hypothetical bad guy can arrange for his public
key to be listed under my name, he can read intercepted messages
intended for me and sign bogus messages from me with a digital
signature that checks against my supposed key.
[23]
A phone book is a centralized system, hence vulnerable to failures at
the center, whether due to dishonesty or incompetence. There is,
however, a simple decentralized solution; as you might guess, it too
depends on public key encryption.
Consider some well known organization, say American Express, which
many people know and trust. American Express arranges to make its
public key very public–posted in the window of every American
Express office, printed on every American Express credit card,
included in the margin of every American Express ad. It then goes
into the business of acting as a certifying authority.
To take advantage of its services, I use my software to create a
public key/private key pair–two long numbers, each of which can
be used to decrypt messages encrypted with the other. I then go to an
American Express office, bringing with me my passport, driver's
license and public key. After establishing my identity to their
satisfaction, I hand them a copy of my public key and they create a
message saying, in language a computer can understand, "The public
key of David D. Friedman, born on 2/12/45 and employed by Santa Clara
University, is 10011011000110111001010110001101000... ." They
digitally sign the message, using American Express's private key,
copy the signed message to a floppy disk, and give it to me.
To prove my identity to a stranger, I send him a copy of the digital
certificate from American Express. He now knows my public key–allowing
him to send encrypted messages that only David Friedman can read and
check digital signatures to see if they are really from David
Friedman. Someone with a copy of my digital certificate can use it to
prove to people what my public key is, but he cannot use it to
masquerade as me because he does not possess the matching private
key.
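A certificate of this sort can be sketched in code: the authority signs a statement binding a name to a public key, and anyone holding the authority's public key can check the signature. The toy RSA numbers, the hashing shortcut, and the statement format are all stand-ins for what a real certifying authority would use.

```python
# Sketch of a digital certificate: a certifying authority signs a
# statement binding a name to a public key. Toy, insecure RSA numbers.
import hashlib

CA_N, CA_E, CA_D = 3233, 17, 2753   # the authority's toy key pair

def digest(statement: str) -> int:
    # Reduce the statement to a number the toy RSA key can sign.
    return int.from_bytes(hashlib.sha256(statement.encode()).digest(), "big") % CA_N

def issue_certificate(name, subject_public_key):
    statement = f"The public key of {name} is {subject_public_key}"
    signature = pow(digest(statement), CA_D, CA_N)   # sign with the CA private key
    return statement, signature

def check_certificate(statement, signature):
    # Anyone with the CA public key can verify; a tampered statement
    # would (with overwhelming probability) fail this check.
    return pow(signature, CA_E, CA_N) == digest(statement)

cert = issue_certificate("David D. Friedman", 10011011)
assert check_certificate(*cert)
```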
So far this system has the same vulnerability as the phone book; if
American Express or one of its employees is working for the bad guy,
they can create a bogus certificate identifying someone else's public
key as mine. But nothing in a system of digital certificates requires
trust in any one organization. I can email you a whole pack of
digital certificates–one from American Express, one from the
U.S. Post Office, one from the Catholic Church, one from my
university, one from Microsoft, one from Apple, one from AOL–and
you can have your computer check all of them and make sure they all
agree. It is unlikely that a single bad guy has infiltrated all of
them.
A World of
Strong Privacy
The world created by these technologies has a variety of
features. One is free speech. If I communicate online under my own
name, using encryption, I can be betrayed only by the person I am
communicating with. If I do it using an online persona, with
reputation but without any link to my realspace identity, not even
the people I communicate with can betray me. Thus strong privacy
creates a world which is, in important ways, safer than the one we
now live in–a world where you can say things other people
disapprove of without the risk of punishment, legal or otherwise.
Which brings me to another digression–one directed especially
at my friends on the right wing of the political spectrum.
The Virtual
Second Amendment
The second amendment to the U.S. constitution guarantees
Americans the right to bear arms. A plausible interpretation
[24]
views it as a solution to a problem of considerable concern to
18th century thinkers–the problem of standing
armies.
[25]
Everyone knew that professional armies, on average, beat
amateur armies. Everyone also knew–with Cromwell's dictatorship
still fairly recent history–that a professional army posed a
serious risk of military takeover.
The Second Amendment embodied an ingenious solution to that problem.
Combine a small professional army under the control of the federal
government with an enormous citizen militia–every able bodied
adult man. Let the Federal government provide sufficient
standardization so that militia units from different states could
work together but let the states appoint officers–thus making
sure that the states and their citizens maintained control over the
militia. In case of foreign invasion, the militia would provide a
large, if imperfectly trained and disciplined, force to supplement
the small regular army. In case of an attempted coup by the Federal
government, the federal army would find itself outgunned a hundred to
one.
The beauty of this solution is that it depends, not on making a
military takeover illegal, but on making it impossible. In order for
that takeover to occur, it would first be necessary to disarm the
militia. But until the takeover had occurred, the Second Amendment
made disarming the militia impossible, since any such attempt would
be seen as a violation of the Constitution and resisted with
force.
It was an elegant solution two hundred years ago, but I am less
optimistic than some of my friends about its relevance today. The
U.S. has a much larger professional military, relative to its
population, than it did then, the states are much less independent
than they were, and the gap between civilian and military weaponry
has increased enormously.
Other things have changed as well over two hundred years. In a world
of broad based democracy and network television, conflicts between
the U.S. government and its citizens are likely to involve
information warfare, not guns. A government that wants to do bad
things to its citizens will do them by controlling the flow of
information in order to make them look like good things.
In that world, widely available strong encryption functions as a
virtual second amendment. As long as it exists, the government cannot
control the flow of information. And once it does exist, eliminating
it, like disarming an armed citizenry, is extraordinarily difficult–especially
for a government that cannot control the flow of information to its
citizens.
If You Work
for the IRS, Stop Here
Freedom of speech is something most people, at least in this
country, are in favor of. But strong privacy will also reduce the
power of government in less obviously desirable ways. Activities that
occur entirely in cyberspace will be invisible to outsiders–including
ones working for the federal government. It is hard to tax or
regulate things you cannot see.
If I earn money selling services in cyberspace and spend it buying
goods in realspace, the government can tax my spending. If I earn
money selling goods in realspace and spend it buying services in
cyberspace, they can tax my income. But if I earn money in cyberspace
and spend it in cyberspace, they cannot observe either income or
expenditure and so will have nothing to tax.
Similarly for regulation. I am, currently, a law professor but not a
member of the California bar, hence it is illegal for me to sell
certain sorts of legal services in California. Suppose I wanted to do
so anyway. If I do it as David D. Friedman I am likely to get in
trouble. But if I do it as Legal Eagle Online, taking care to keep
the real world identity of Legal Eagle a secret, there is not much
the California Bar can do about it.
In order to sell my legal services I have to persuade someone to buy
them. I cannot do that by pointing potential customers at my books
and articles, because they were all published under my real name.
What I can do is to start by giving advice for free and then, when
the recipients find that the advice is good–perhaps by checking
it against the advice of their current lawyers–raise my price.
Thus over time I establish an online reputation for an online
identity guaranteed by my digital signature.
Legal advice is an example; the argument is a general one. Once
strong privacy is well established, legal regulation of information
services can no longer be enforced. Governments may still attempt to
maintain the quality of professional services by certifying
professionals–providing information as to who they believe is
competent. But it will no longer be possible to force customers to
act on that information–to legally forbid them from using
uncertified providers, as they currently are legally forbidden to use
unlicensed doctors or lawyers who have not passed the bar.
[26]
The Down Side of Strong Privacy
Reducing the government's ability to collect taxes and
regulate professions is in my view a good thing, although some will
disagree. But the same logic also applies to government activities I
approve of, such as preventing theft and murder. Online privacy will
make it harder to keep people from sharing stolen credit card numbers
or information on how to kill people, or organizing plots to steal
things or blow things up.
This is not a large change; the internet and strong encryption merely
make it somewhat easier for criminals to share information and
coordinate their activities. A more serious problem is that, by
making it possible to combine anonymity and reputation, strong
privacy makes possible criminal firms with brand name reputation.
Suppose you want to have someone killed. The big problem is not the
cost; so far as I can gather from public accounts, a hit man costs
less than a car, and most of us can afford a car. The big problem–assuming
you have already resolved any moral qualms–is finding a
reliable seller of the service you want to buy.
That problem, in a world of widely distributed strong encryption, we
can solve. Consider my four step business plan for Murder
Incorporated:
1. Arrange for mystery billboards on major highways. Each contains a
single long number and the message "write this down." Display ads
with the same message appear in major newspapers.
2. Put a full page ad in the New York Times, apparently written in
gibberish.
3. Arrange a multiple assassination with high profile targets, such
as film stars or major sports figures–perhaps a bomb at the
Academy Awards.
4. Send a message to all major media outlets, pointing out that the
number on all of those billboards is a public key; if they use
it to decrypt the New York Times ad they will get a description of
the assassination–published the day before it happened.
You have now made sure that everyone in the world has, or can get,
your public key–and knows that it belongs to an organization
willing and able to kill people. Once you have taken steps to tell
people how to post messages where you can read them, everyone in the
world will know how to send you messages that nobody else can read
and how to identify messages that can only have come from you. You
are now in business as a middleman selling the services of hit men.
Actual assassinations still have to take place in realspace, so being
a hit man still has risks. But the problem of locating a hit man–when
you are not yourself a regular participant in illegal markets–has
been solved.
Murder Incorporated is a particularly striking example of the problem
of criminal firms with brand name reputations, operating openly in
cyberspace while keeping their realspace identity and location
secret, but there are many others. Consider "Trade Secrets Inc.–We
Buy and Sell." Or an online pirate archive, selling other people's
intellectual property in digital form, computer programs, music, and
much else, for a penny on the dollar, payable in anonymous digital
cash.
Faced with such unattractive possibilities, it is tempting to
conclude that the only solution is to ban encryption. A more
interesting approach is to find ways of achieving our objectives–preventing
murder, providing incentives to produce computer programs–that
are made easier by the same technological changes that make the old
ways harder.
Anonymity is the ultimate defense. Not even Murder Incorporated can
assassinate you if they do not know who you are. If you plan to do
things that might make people want to kill you–publish a book
making fun of the prophet Mohammed, say, or revealing the true crimes
of Bill (Gates or Clinton)–it would be prudent not to do it
under a name linked to your realspace identity. That is not a
complete solution–the employer of the hit man might, after all,
be your wife, and it is hard to conduct a marriage entirely in
cyberspace–but it at least protects some potential victims.
Similarly for the more common, if less dramatic, problems of
protecting intellectual property online. Copyright law will become
largely unenforceable, but there are other ways of protecting
property. One–using encryption to provide the digital
equivalent of a barbed wire fence protecting your property–will
be discussed at some length in a later chapter.
Why It Will
Not Be Stopped
For the past two decades powerful elements in the U.S.
government, most notably the NSA and FBI, have been arguing for
restrictions on encryption designed to maintain their ability to tap
phones, read seized records, and in a variety of other ways violate
privacy for what they regard as good purposes. After my description
of the down side of strong privacy, readers may think there is a good
deal to be said for the idea.
There are, however, practical problems. The most serious is that the
cat is already out of the bag–has been, indeed, for more than
twenty-five years. The mathematical principles on which public key
encryption is based are public knowledge. That means that any
competent computer programmer with an interest in the subject can
write encryption software. Quite a lot of such software has already
been written and is widely distributed. And given the nature of
software, once you have a program you can make an unlimited number of
copies. It follows that keeping encryption software out of the hands
of spies, terrorists, and competent criminals is not a practical
option. They probably have it already; if not, their friends do.
[Add paragraph or two on
steganography]
It is still possible to put limits on the encryption software
publicly marketed and publicly used–to insist, for example,
that if AOL or Microsoft builds encryption into their programs it
must contain a back door permitting properly authorized persons–a
law enforcement agent with a court order, say–to read the
message without the key. The problem with such an approach is that
there is no way of giving law enforcement what it wants without
imposing very high costs on the rest of us.
To see why, consider the description of adequate regulation given by
Louis Freeh, who was at the time the head of the FBI. He said that
what he needed was the ability to decrypt any encrypted message in
half an hour.
The equivalent in realspace would be legal rules that let properly
authorized law enforcement agents open any lock in the country in
half an hour. That includes not only the lock on your front door but
the locks protecting bank vaults, trade secrets, lawyers' records,
lists of contributors to unpopular causes, and much else. While
access would be nominally limited to those properly authorized, it is
hard to imagine any system flexible enough to meet Freeh's schedule
that was not vulnerable to misuse. If being a police officer gives
you access to locks with millions of dollars behind them, in cash,
diamonds, or information, some cops will become criminals and some
criminals will become cops. Proper authorization presumably means a
court order–but not all judges are honest, and half an hour is
not long enough for even an honest judge to verify what the officer
applying for the court order tells him.
[27]
Encryption provides the locks for cyberspace. If nobody has strong
encryption, everything in cyberspace is vulnerable to a sufficiently
sophisticated private criminal. If people have strong encryption but
it comes with a mandatory back door accessible in half an hour to any
police officer with a court order, then everything in cyberspace is
vulnerable to a private criminal with the right contacts. Those locks
have millions, perhaps billions, of dollars worth of stuff behind
them–money in banks, trade secrets in computers.
One could imagine a system for accessing encrypted documents so
rigorous that it required written permission from the President,
Chief Justice and Attorney General and only got used once every two
or three years. Such a system would not seriously handicap online
dealings. But it would also be of no real use to law enforcement,
since there would be no way of knowing which one communication out of
the billions crisscrossing the internet each day they needed to
crack.
In order for encryption regulation to be useful, it has to either
prevent the routine use of encryption or make it reasonably easy for
law enforcement agents to access encrypted messages. Doing either
will seriously handicap the ordinary use of the net. Not only will it
handicap routine transactions, it will make computer crime easier by
restricting the use of the technology best suited to defend against
it. And what we get in exchange is protection not against the use of
encryption by sophisticated criminals and terrorists–there is
no way of providing that–but only against the use of encryption
by ordinary people and unsophisticated criminals.
Readers who have followed the logic of the argument may point out
that even if we cannot keep sophisticated criminals from using strong
encryption, we may be able to prevent ordinary people from using it
to deal with sophisticated criminals–and doing so would make my
business plan for Murder Incorporated unworkable. While it would be a
pity to seriously handicap the development of online commerce, some
may think that price worth paying to avoid the undesirable
consequences of strong privacy.
To explain why that will not happen requires a brief economic
digression.
Property
Rights and Myopia
You are thinking of going into the business of growing trees–hardwoods
that grow slowly but produce valuable lumber. It will take forty
years from planting to harvest. Should you do it? The obvious
response is: not unless you are confident of living at least another
forty years.
Like many obvious responses, it is wrong. Twenty years from now you
will be able to sell the land, covered with twenty year old trees,
for a price that reflects what those trees will be worth in another
twenty years. Following through the logic, it is straightforward to
show that if what you expect the trees to sell for will, after
allowing for expenses, more than repay your investment, including
forty years of compound interest, you should do it.
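The arithmetic behind that logic is simple enough to put in a few lines. The numbers below are invented for illustration: planting pays if the expected sale price beats the planting cost compounded at the ordinary rate of return over forty years.

```python
# Back-of-the-envelope version of the tree argument. All figures are
# hypothetical; only the compounding logic matters.

def future_value(cost, rate, years):
    return cost * (1 + rate) ** years   # cost plus compound interest

planting_cost = 10_000      # hypothetical up-front cost
interest_rate = 0.05        # hypothetical ordinary return on investment
expected_price = 80_000     # hypothetical sale price forty years hence

breakeven = future_value(planting_cost, interest_rate, 40)
print(f"break-even price: {breakeven:,.0f}")
print("plant" if expected_price > breakeven else "don't plant")
```

At five percent, forty years of compounding multiplies the cost roughly sevenfold, so an expected price of 80,000 clears the hurdle.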
This assumes a world of secure property rights. Suppose we assume
instead that your trees are quite likely, at some point during the
next forty years, to be stolen–legally via government
confiscation or illegally by someone driving into the forest at
night, cutting them down, and carrying them off. In that case you
will only be willing to go into the hardwood business if the return
from selling the trees is enough larger than the ordinary return on
investments to compensate you for the risk.
Generalizing the argument, we can see that long run planning depends
on secure property rights.
[28]
If you are sure that what you own today you will still own tomorrow–unless
you choose to sell it–you can afford to give up benefits today
in exchange for greater benefits tomorrow, or next year, or next
decade. The greater the risk that what you now own will be taken away
from you at some point in the future, the greater the incentive to
limit yourself to short term projects.
Politicians in a democratic society have insecure property rights
over their political assets; Clinton could rent out the White House
but he could not sell it. One consequence is that in such a system
government policy is dominated by short run considerations–most
commonly the effect of current policy on the outcome of the next
election. Very few politicians will accept political costs today in
exchange for benefits ten or twenty or thirty years in the future–because
they know that, when the benefits arrive, someone else will be in
power to enjoy them.
Preventing the development of strong privacy means badly handicapping
the current growth of online commerce. It means making it easier for
criminals to hack into computers, intercept messages, defraud banks,
steal credit cards. It is thus likely to be politically costly, not
ten or twenty years from now but in the immediate future.
What do you get in exchange? The benefit of encryption regulation–the
only substantial benefit, since it cannot prevent the use of
encryption by competent criminals–is preventing the growth of
strong privacy. From the standpoint of governments, and of people in
a position to control governments, that may be a large benefit, since
strong privacy threatens to seriously reduce government power,
including the power to collect taxes. But it is a long run threat,
one that will not become serious for a decade or two. Defeating it
requires the present generation of elected politicians to do things
that are politically costly for them–in order to protect the
power of whoever will hold their offices ten or twenty years from
now.
The politics of encryption regulation so far fits the predictions of
this analysis. Support for regulation has come almost entirely from
long lived bureaucracies such as the FBI and NSA. So far, at least,
they have been unable to get elected politicians to do what they want
when doing so involves any serious political cost.
If this argument is right, it is unlikely that serious encryption
regulation, sufficient to make things much easier for law enforcement
and much harder for the rest of us, will come into existence, at
least in the U.S. Hence it is quite likely that we will end up with
something along the lines of the world of strong privacy described in
this chapter.
In my view that is a good thing. The attraction of a cyberspace
protected by encryption is that it is a world where all transactions
are voluntary: You cannot get a bullet through a T1 line. It is a
world where the technology of defense has finally beaten the
technology of offense. In the world we now live in, our rights can be
violated by force or fraud; in a cyberspace protected by strong
privacy, only by fraud. Fraud is dangerous, but less dangerous than
force. When someone offers you a deal too good to be true, you can
refuse it. Force makes it possible to offer you deals you cannot
refuse.
Truth to
Tell
In several places in this chapter I have simplified the
mechanics of encryption, describing how something could be done but
not how it is done. Thus, for example, public key encryption is
usually done not by encrypting the message with the recipient's
public key but by encrypting the message with an old fashioned single
key encryption scheme, encrypting the single key with the recipient's
public key, and sending both encrypted message and encrypted key. The
recipient uses his private key to decrypt the encrypted key and uses
that to decrypt the message. Although this is a little more
complicated than the method I described, in which the message itself
is encrypted with the public key, it is also significantly
faster.
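The hybrid scheme can be sketched as follows. The "single key cipher" here is a toy XOR keystream and the public-key step uses tiny textbook-RSA numbers; both are stand-ins, chosen for brevity, for the real ciphers such systems use.

```python
# Sketch of hybrid encryption: the message is encrypted with a fast
# single-key cipher, and only the short symmetric key is encrypted
# with the recipient's public key. Nothing here is secure.
import hashlib

N, E, D = 3233, 17, 2753        # recipient's toy RSA key pair

def keystream(key: int, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:    # stretch the key into a byte stream
        out += hashlib.sha256(f"{key}:{counter}".encode()).digest()
        counter += 1
    return out[:length]

def xor_cipher(data: bytes, key: int) -> bytes:
    # XORing twice with the same keystream restores the original.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def hybrid_encrypt(message: bytes, e, n):
    session_key = 1234                      # would be freshly random in practice
    return xor_cipher(message, session_key), pow(session_key, e, n)

def hybrid_decrypt(ciphertext, wrapped_key, d, n):
    session_key = pow(wrapped_key, d, n)    # unwrap the symmetric key
    return xor_cipher(ciphertext, session_key)

ct, wrapped = hybrid_encrypt(b"meet at noon", E, N)
assert hybrid_decrypt(ct, wrapped, D, N) == b"meet at noon"
```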
Similarly, a digital signature is actually calculated by creating a
message digest of the original message using a one way hash function
and encrypting that with your private key, then sending both message
and digest. The recipient decrypts the digest, creates a second
digest from the message using the same hash function, and compares
them to make sure they are identical, as they will be if the message
has not been changed and the public and private keys match.
Such complications make describing the mechanics of encryption more
difficult and are almost entirely irrelevant to the issues discussed
here, so I ignored them.
[Note: Both of
these could be virtual footnotes instead]
Appendix
I:
Public Key Encryption: A Very Elementary
Example
Imagine a world in which people know how to multiply
numbers but not how to divide them. Further imagine that there exists
some mathematical procedure capable of generating pairs of numbers
that are inverses of each other: X and 1/X. Finally, assume that the
messages we wish to encrypt are simply numbers.
I generate a pair X, 1/X. To encrypt the number M using the key X, I
multiply X times M. We might write
E(M,X)=MX,
meaning “Message M encrypted using the key X is M times X.”
Suppose someone has the encrypted message MX and the key X. Since he
does not know how to divide, he cannot decrypt the message and find
out what the number M is. If, however, he has the other key 1/X, he
can multiply it times the encrypted message to get back the original
M:
(1/X)MX=(X/X)M=M
Alternatively, one could encrypt a message by multiplying it by the
other key, 1/X, giving us
E(M,1/X)=M/X.
Someone who knows 1/X but does not know X has no way of decrypting
the message and finding out what M is. But someone with X can
multiply it times the encrypted messages and get back M:
X(M/X)=M
So in this world, multiplication provides a primitive form of public
key encryption: a message encrypted by multiplying it with one key
can only be decrypted with the other.
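The imaginary world of the appendix can be acted out in a few lines, using exact fractions so the arithmetic never loses precision.

```python
# The appendix's toy scheme in code: encryption is multiplication,
# and each key undoes only what the other key did.
from fractions import Fraction

X = Fraction(7, 3)          # one key
X_inv = 1 / X               # its inverse, the other key

def E(m, key):              # E(M, X) = M * X, as in the text
    return m * key

M = Fraction(42)
assert E(E(M, X), X_inv) == M      # encrypt with X, decrypt with 1/X
assert E(E(M, X_inv), X) == M      # or the other way around
```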
In the real world, of course, we know how to divide. Real public key
encryption depends on mathematical operations which, like
multiplication and division in my example, are very much easier to do
in one direction than the other. The RSA algorithm, for example, at
present the most widely used form of public key encryption, depends
on the fact that it is easy to generate a large number by multiplying
together several large primes but much harder to start with the
number and factor it to find the primes that can be multiplied
together to give that number. The keys in such a system are not
literally inverses of each other, like X and 1/X, but they are
functional inverses, since either one can undo (decrypt) what the
other does (encrypts).
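How such a key pair is generated can be sketched with tiny primes. Multiplying the primes together is the easy direction; recovering them from the product alone, which an attacker would need to do, is the hard one. The primes here are far too small to be secure; they are chosen so the numbers are readable.

```python
# Minimal sketch of RSA key generation from two primes.
# Tiny, insecure primes for readability only.

p, q = 61, 53
n = p * q                       # public modulus: easy to compute...
phi = (p - 1) * (q - 1)         # ...but this needs the primes, i.e. factoring n
e = 17                          # public exponent, coprime to phi
d = pow(e, -1, phi)             # private exponent: modular inverse of e

m = 123
assert pow(pow(m, e, n), d, n) == m   # either key undoes the other
assert pow(pow(m, d, n), e, n) == m
```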
Appendix II:
Chaining Anonymous Remailers
M is my actual message; [M,K] means "message M
encrypted using key K." Kr is the public key of the person
I am sending the message to, Er is his email address. I am
using a total of three remailers; their public keys are
K1, K2, K3 and their email addresses
are E1, E2, E3. What I send to the
first remailer is:
[([([([M,Kr]+Er),K3]+E3),K2]+E2),K1]
The first remailer uses his private key to strip off the top layer of
encryption, leaving him with:
[([([M,Kr]+Er),K3]+E3),K2]+E2
He can now read E2, the email address of the second
remailer, so he sends the rest of the message to that address. The
second remailer receives
[([([M,Kr]+Er),K3]+E3),K2]
and uses his private key to strip off a layer of encryption, leaving
him with:
[([M,Kr]+Er),K3]+E3
He then sends to the third remailer
[([M,Kr]+Er),K3]
The third remailer strips the third layer of encryption off, giving
him
[M,Kr]+Er
and sends the message on to the intended recipient at Er–who
then uses his private key to strip off the last level of encryption,
giving him M, the original message.
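The wrapping and unwrapping above can be sketched as follows. Real remailers use public-key encryption; here "encryption" is simulated by tagging a payload with a key name, and the keys and addresses (K1/E1 and so on) are the placeholders from this appendix.

```python
def wrap(message, recipient, hops):
    # Build the nested envelope: seal (payload + next address) under each
    # remailer's key, innermost layer first.
    def enc(payload, key):  # stand-in for real public-key encryption
        return {"enc_with": key, "payload": payload}

    packet = enc(message, recipient["key"])   # [M, Kr]
    address = recipient["email"]              # + Er
    for key, email in reversed(hops):         # wrap under K3, then K2, then K1
        packet = enc({"inner": packet, "next": address}, key)
        address = email
    return packet                             # what goes to the first remailer

def unwrap(packet, key):
    # A remailer strips one layer with its private key, learning only the
    # next hop's address, never the message or the final recipient.
    assert packet["enc_with"] == key
    return packet["payload"]

hops = [("K1", "E1"), ("K2", "E2"), ("K3", "E3")]
pkt = wrap("M", {"key": "Kr", "email": "Er"}, hops)
for key, _ in hops:                 # each remailer in turn
    layer = unwrap(pkt, key)
    pkt = layer["inner"]            # forwarded on to the address in layer["next"]
assert unwrap(pkt, "Kr") == "M"     # only the recipient recovers M
```

Note that no single remailer ever sees both the sender and the recipient, which is what gives the chain its anonymity.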
Chapter
IV: Information Processing: Threat or
Menace?
Or
If
Information Is Property, Who Owns It?
Some years ago I decided to set up my own web site. One question was
how much of my life to include. Did I want someone looking at my
academic work–perhaps a potential employer–to discover
that I had put a good deal of time and energy into researching
medieval recipes, a subject unrelated to either law or economics,
thus (arguably) proving that I was a dilettante rather than a serious
scholar? Did I want that same potential employer to discover that I
held unfashionable political opinions, ranging from support for
legalization of all recreational drugs to support for open
immigration? And did I want someone who might be outraged at my
political views to be able to find out what I and my family members
looked like and where we lived?
I concluded that keeping my life in separate compartments was not a
practical option. I could have set up separate sites for each part,
with no links between them–but anyone with a little enterprise
could have found them all with a search engine. And even without a
web site, anyone who wanted to know about me could find vast amounts
of information by a quick search of Usenet, where I have been an
active poster for more than ten years. Keeping my virtual mouth shut
was not a price I was willing to pay, and nothing much short of that
would do the job.
This is not a new problem. Before the internet existed, I still had
to decide to what degree I wanted to live in multiple worlds–whether,
for example, I should discuss my hobbies or my political views with
professional colleagues. What has changed is the scale of the
problem. In a large world where personal information was spread
mostly by gossip and processed almost entirely by individual human
brains, facts about me were to a considerable extent under my control–not
because they were secret but because nobody had the time and energy
to discover everything knowable about everyone else. Unless I was a
major celebrity, I was the only one specializing in me.
That was not true everywhere. In the good old days–say most of
the past three thousand years–one reason to run away to the big
city was to get a little privacy. In the villages in which most of
the world lived, anyone's business was everyone's business. In Sumer
or Rome or London the walls were no more opaque and you were no less
visible than at home, but there was so much going on, so many people,
that nobody could keep track of it all.
That form of privacy–privacy through obscurity–cannot
survive modern data processing. Nobody can keep track of it all but
many of us have machines that can. The data of an individual life is
not notably more complicated than it was two thousand years ago. It
is true that the number of lives has increased thirty or forty fold
in the last two thousand years,
[29]
but our ability to handle data has increased a great deal more than
that. Not only can we keep track of the personal data for a single
city, we could, to at least a limited degree, keep track of the data
for the whole world, assuming we had it and wanted to.
The implications of these technologies have become increasingly
visible over the past ten or fifteen years. Many are highly
desirable. The ability to gather and process vast amounts of
information permits human activities that would once have been
impossible; to a considerable extent it abolishes the constraints of
geography on human interaction. Consider two examples.
Thirty some years ago, I spent several summers as a counselor at a
camp for gifted children. Many of the children, and some of my fellow
counselors, became my friends–only to vanish at the end of the
summer. From time to time I wondered what had become of them.
I can now stop wondering, at least about some. A year or so ago,
someone who had been at the camp organized an email list for
ex-campers and counselors; membership is currently approaching a
hundred [update before final draft]. That list exists because
of technologies that make possible not only easy communication with
people spread all over the country but also finding them in the first
place–searching a very large haystack for a few hundred
needles. Glancing down a page of Yahoo-Groups, I find 226 such lists,
each for a different camp; the largest has more than three hundred
members.
For a second example, consider a Usenet Newsgroup that I stumbled
across many years ago, dedicated to a technologically ingenious but
now long obsolete video game machine of which I once owned two–one
for my son and one for me. Reading the posts, I discovered that
someone in the group had located Smith Technologies, the firm that
held the copyright on the Vectrex and its games, and written to ask
permission to make copies of game cartridges. The response, pretty
clearly from the person who designed the machine, was an enthusiastic
yes. He was obviously delighted to discover that there were people
still playing with his toy, his dream, his baby. Not only were they
welcome to copy cartridges, if anyone wanted to write new games he
would be happy to provide the necessary software. It was a striking,
to me heartwarming, example of the ability of modern communications
technology to bring together people with shared enthusiasms.
"Vectrex had cheats back when they were still known as
bugs"
(from an FAQ by Gregg Woodcock)
The
Market for Information
So far I have been talking about small scale, non-commercial
uses of the technology–people learning other people's secrets
or getting together with old friends or strangers with shared
interests. While such uses are an increasingly important feature of
the world we live in, they are not nearly as prominent or politically
contentious as large scale commercial uses of personal information.
To see why such uses exist and why they raise important issues, it is
worth starting out by seeing why some people would want to collect
and use individual information about large numbers of strangers.
Consider two examples.
You are planning to open a new grocery store in an existing chain–a
multi-million dollar gamble. Knowledge about the people who live in
the neighborhood–how likely they are to shop at your store and
how much they will buy–is crucial. How do you get it?
The first step is to find out what sort of people shop in your
present stores and what they buy. To do that you offer customers a
shopping card. The card is used to get discounts, so shoppers pass
the card through a reader almost every time they go through the
checkout, providing you lots of detailed information about their
shopping patterns. One way you use that information is to improve the
layout of existing stores; if people who buy spaghetti almost always
buy spaghetti sauce at the same time, putting them in the same aisle
will make your store more convenient, hence more attractive, hence
more profitable.
Another way is to help you decide where to locate your new store. If
you discover that old people on average do not buy very much of what
you are selling, perhaps a retirement community is the wrong place.
If couples with young children do all their shopping on the weekend
when one parent can stay home with the kids while the other shops,
singles shop after work on weekdays (weekends are for parties), and
retired people during the working day (shorter lines), then a
location with a suitable mix of all three types will give you a more
even flow of customers, higher utilization of the store, and greater
profits. Combining information about your customers with information
about alternative locations, provided free by the U.S. census or at a
higher price by private firms, you can substantially improve the odds
on your gamble.
For a higher tech application of information technology, consider
advertising. When I read a magazine, I see the same ads as everyone
else–mostly for things I have no interest in. But a web page
can send a different response to every query, customizing the ads I
see to fit my interests. No TV ads, since I do not own a television,
lots of ads for high tech gadgets.
In order to show me the right ads, the people managing the page need
to know what I am interested in. Striking evidence that such
information is already out there and already being used appears in my
mailbox on a regular basis–a flood of catalogs.
How did the companies sending out those catalogs identify me as a
potential customer? If they could see me, it would be easy. Not only
am I wearing a technophile ID bracelet (Casio calls it a databank
watch), I am wearing the model that, in addition to providing a
calculator, database, and appointment calendar, also checks in three
times a day with the U.S. atomic clock to make sure it has exactly
the right time. Sharper Image, Techno-Scout,
Innovations et al. cannot see what is on my wrist–although
if the next chapter's transparent society comes to pass that may
change. They can, however, talk to each other. When I bought my Casio
Wave Captor Databank 150 (the name would have been longer but they
ran out of room on the watch), that purchase provided the proprietors
of the catalog I bought it from with a snippet of information about
me. They no doubt resold that information to anyone willing to pay
for it. Sellers of gadgets respond to the purchase of a Casio Wave
Captor the way sharks respond to blood in the water.
As our technology gets better, it becomes possible to create and use
such information at lower cost and in much more detail. A web page
can keep track not only of what you buy but of what you look at and
for how long. Combining information from many sources, it becomes
both possible and potentially profitable to create databases with
detailed information on the behavior of a very large number of
individuals, certainly including me, probably including you.
The advantages of that technology to individual customers are fairly
obvious. If I am going to look at ads, I would prefer that they be
ads for things I might want to buy. If I am going to have my dinner
interrupted by a telephone call from a stranger, I would prefer it be
someone offering to prune my aging apricot tree–last year's
crop was a great disappointment–rather than someone offering to
refinance my nonexistent mortgage.
As these examples suggest, there are advantages to individuals to
having their personal information publicly available and easy to
find. What are the disadvantages? Why are many people upset about the
loss of privacy and the misuse of "their" private information? Why
did Lotus, after announcing its plan to offer masses of such data on
a CD, have to cancel it in response to massive public criticism? Why
is the question of what information web sites are permitted to gather
about their customers, what they may do with it, and what they must
tell their customers about what they are doing with it, a live
political and legal issue?
One gut level answer is that many people feel strongly that
information about them is theirs. They should be able to decide who
gets it; if it is going to be sold, they should get the money.
The economist's response is that they already get the money. The fact
that selling me a gadget provides the seller with a snippet of
information that he can then resell makes the transaction a little
more profitable for the seller, attracts additional sellers, and
ultimately drives down the price I must pay for the gadget. The
effect is tiny–but so is the price I could get for the
information if I somehow arranged to sell it myself. It is only the
aggregation of large amounts of such information that is valuable
enough to be worth the trouble of buying and selling it.
A different response, motivated by moral intuition rather than
economics, is that the argument confuses information about me–located
in someone else's mind or database–with information that
belongs to me. How can I have a property right over the contents of
your mind? If I am stingy or dishonest, do I have an inherent right
to forbid those I treat badly from passing on the information? If
not, why should I have a right to forbid them from passing on other
information about me?
There is, however, a vaguer but more important reason why people are
upset at the idea of a world where anyone willing to pay can learn
almost everything about them. Many people value their privacy not
because they want to be able to sell information about themselves but
because they do not want other people to have it. While it is hard to
come up with a clear explanation of why we feel that way–a
subject discussed at much greater length in the final chapter of this
section–it is clear that we do. At some level, control over
information about ourselves is seen as a form of self protection. The
less other people can find out about me, the less likely it is that
they will use information about me either to injure me or to identify
me as someone they wish to injure–which brings us back to some
of the issues I considered when designing my web page.
Towards
Information as Property
Concerns with privacy apply to at least two sorts of
information. One is information generated by voluntary transactions
with some other party–what I have bought and sold, what
catalogs and magazines I subscribe to, what web pages I browse. Such
information starts out possessed by both parties to the transaction–I
know what I bought from you, you know what you sold to me–but
by nobody else. The other kind is information generated by actions I
take that are publicly visible–court records, newspaper
stories, gossip.
Ownership of the first sort of information can, at least in
principle, be determined by contract. A magazine can, and some do,
promise its subscribers that their names will not be sold. Software
firms routinely offer people registering their programs the option of
having their names made or not made available to other firms selling
similar products. Web pages can, and many do, provide explicit
privacy policies limiting what they will do with the information
generated in the process of browsing their sites.
To understand the economics of the process, think of information as a
produced good; like other such goods, who owns how much of it is
determined by agreement between the parties who produce it. When I
subscribe to a magazine, I and the publisher are jointly producing a
piece of information about my tastes–the information that I
like that kind of magazine. That information is of value to the
magazine, which may want to resell it. It is of value to me, either
because I might want to resell it or because I might want to keep it
off the market in order to protect my privacy. The publisher can, by
offering the magazine at a lower price without a privacy guarantee
than with, offer to pay me to give it control over the information.
If the information is worth more to me than he is offering, I refuse;
if it is worth less, I accept. Control over the information ends up
with whoever most values it. If no mutually acceptable terms can be
found, I do not subscribe and that bit of information does not get
produced.
Seen in this way, default rules about privacy should not matter. A
magazine subscription has one price with a privacy guarantee, another
and slightly lower price without it. If the law assumes that
magazines have the right to resell names unless they agree not to,
then the ordinary subscription price is the price without privacy,
the higher price with a guarantee the price with. If it assumes
subscribers have the right not to have their names sold unless they
agree to waive it, then the ordinary subscription price is the price
with privacy, the lower price with a waiver the price without. Either
way, control of the information goes to whichever party values it
more and the price of that control is included in the cost of the
subscription.
That would be the right answer in a world where arranging contracts
was costless–a world of zero transaction costs. In the world we
now live in, it is not. Most of us, unless we care a great deal about
our privacy, do not bother to read privacy policies. Even if I prefer
that catalogs and mailing lists not resell information about me, it
is too much trouble to check the small print on everything I might
subscribe to. It would be still more trouble if every firm I dealt
with offered two prices, one with and one without a guarantee of
privacy, and more still if the firm offered a menu of levels of
protection, each with its associated price.
The result is that most magazines and websites, at least in my
experience, offer only a single set of terms; if they allow the
subscriber some choice, it is not linked to price. The amounts
involved are usually too small to be worth bargaining over. Hence
default rules do matter and we get political and legal conflicts over
the question of who, absent any special contractual terms, has what
control over the personal information generated by transactions.
That may change. What may change it is technology–the
technology of intelligent agents. It is possible in principle, and is
becoming possible in practice, to program your web browser with
information about your privacy preferences. Using that information,
the browser can decide what different levels of privacy protection
are or are not worth to you and select pages and terms accordingly.
Browsers work cheap.
For this to happen we need a language of privacy–a way in which
a web page can specify what it does or does not do with information
generated by interactions with it in a form your browser can
understand. Once such a language exists and is in widespread use, the
transaction costs of bargaining over privacy drop sharply. You tell
your browser what you want and what it is worth to you, your browser
interacts with a program on the web server hosting the page and
configured by the page's owner. Between them they agree on mutually
satisfactory terms–or they fail to do so, and you never see the
page.
This is not a purely hypothetical idea. Its current incarnation is
The Platform for Privacy Preferences, commonly known as P3P.
[30]
Microsoft has included it in the latest version of Internet Explorer,
the most widely used web browser. Web pages provide information about
their privacy policies, users provide information about what they are
willing to accept, and the browser notifies the user if a site's
policies are inconsistent with his requirements. Presumably a web
site that misrepresented its policies could be held liable for doing
so, although, so far as I know, no such case has yet reached the
courts.
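The comparison the browser performs can be sketched roughly as follows. The data categories and purposes below are invented for illustration; they are not the actual P3P vocabulary, which is considerably richer.

```python
# A toy policy matcher in the spirit of P3P. A site declares what it does
# with each category of data; the user declares what he will tolerate.
def acceptable(site_policy, user_prefs):
    # Accept the site only if every declared use of every data category
    # is one the user has allowed for that category.
    return all(
        use in user_prefs.get(category, set())
        for category, uses in site_policy.items()
        for use in uses
    )

prefs = {"email": {"delivery"},
         "purchases": {"delivery", "site-improvement"}}

assert acceptable({"purchases": {"site-improvement"}}, prefs)
assert not acceptable({"email": {"resale"}}, prefs)  # browser warns the user
```

The point of a shared vocabulary is precisely that this check can run automatically, reducing the transaction cost of bargaining over privacy to nearly nothing.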
How Not to
Protect Privacy
Secret: Something known to only one
person.
Suppose we solve the transaction cost problems,
permitting a true market in personal information. There remains a
second problem–enforcing the rights you have contracted for.
You can check the contents of your safe deposit box to be sure they
are still there, but it does no good to check the contents of a
firm's database to make sure your information is still there. They
can sell your information and still have it.
The problem of enforcing rights with regard to information is not
limited to a future world of automated contracting–it exists
today. As I like to put it when discussing current privacy law, there
are only two ways of controlling information about you and one of
them doesn't work.
The way that doesn't work is to let other people have information
about you and then make rules about how they use it. That is the
approach embodied in all modern privacy law. If you disagree with my
evaluation, I suggest a simple experiment. Start with five thousand
dollars, the name of a random neighbor, and the Yellow Pages for
"Investigators." The objective is to end up with a credit report on
your neighbor–something that, under the Federal Fair Credit
Reporting Act, you are not allowed to have. If you are a competent
con man or internet guru, you can probably dispense with the money
and the phone book.
That approach to protecting privacy works poorly when enforcing terms
imposed by federal law. It probably works somewhat better for
enforcing terms agreed to in the marketplace, since in that case it
is supported by reputational as well as legal sanctions–firms
do not want the reputation of cheating their customers. But I would
still not expect it to work terribly well. Once information is out
there, it is very hard to keep track of who has it and what he has
done with it. It is particularly hard when there are many uses of the
information that you do not want to prevent–a central problem
with the Fair Credit Reporting Act. Setting up rules that permit only
people with a legitimate reason to look at your credit report is
hard; enforcing them is harder.
The other way of protecting information, the way that does work, is
not to let the information out in the first place. That is how the
strong privacy of the previous chapter was protected. You do not have
to trust your ISP or the operator of an anonymous remailer not to
tell your secrets; you haven't given them any secrets to tell.
There are problems with applying that approach to transactional
information. When you subscribe to a magazine, the publisher knows
who you are, or at least where you live–it needs that
information to get the magazine to you. When you buy something from
me, I know that I have sold it to you. The information starts in the
possession of both of us–short of controlled amnesia, how can
it end in the possession of only one?
In our present world, that is a nearly insuperable problem. But in a
world of strong privacy, you do not have to know who you are selling
to. If, at some point in the future, privacy is sufficiently
important to people, online transactions can be structured to make
one or both parties anonymous to the other, with delivery either online
via a remailer (for information transactions) or the less convenient
realspace equivalent of some sort of physical forwarding system. In
such a world, we are back with one of the oldest legal rules of all–possession.
If I have not sold the information to you, you do not have it, so I
need not worry about what you are going to do with it.
Returning to something more like our present world, one can imagine
institutions that would permit a considerably larger degree of
individual control over the uses of personal information than now
exists, modeled on arrangements now used to maintain firms' control
over their valuable mailing lists. Individuals subscribing to a
magazine would send the seller not their name and address but the
name of the information intermediary they employed and the number by
which that intermediary identified them. The magazine's publisher
would ship the intermediary four thousand copies and the numbers
identifying four thousand (anonymous) subscribers, the intermediary
would put on the address labels and mail them out. The information
would never leave the hands of the intermediary, a firm in the
business of protecting privacy. To check its honesty, I establish an
identity with my own address and the name "David Freidmann,"
subscribe to a magazine using that identity, and see if David
Freidmann gets any junk mail.
Such institutions would be possible and, if widely used, not terribly
expensive; similar arrangements could be made for purchasing goods
from catalogs.
[31]
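The intermediary's bookkeeping amounts to a private table mapping ID numbers to addresses; publishers see only the numbers. The sketch below uses invented names and a hypothetical address, purely to illustrate the flow of information.

```python
# A sketch of the information intermediary described above: the publisher
# never learns subscriber addresses, only opaque ID numbers.
class Intermediary:
    def __init__(self):
        self._addresses = {}   # subscriber ID -> real mailing address
        self._next_id = 1

    def enroll(self, address):
        # A subscriber registers; only the ID ever leaves the intermediary.
        sid = self._next_id
        self._next_id += 1
        self._addresses[sid] = address
        return sid

    def address_labels(self, subscriber_ids):
        # The publisher ships copies plus a list of IDs; the intermediary
        # produces the labels and mails the copies out itself.
        return [self._addresses[sid] for sid in subscriber_ids]

broker = Intermediary()
my_id = broker.enroll("221B Baker Street")
# The magazine sees only my_id, yet the copy still reaches me.
assert broker.address_labels([my_id]) == ["221B Baker Street"]
```

The honesty check described in the text works because any junk mail arriving under the decoy identity could only have come from a leak out of this table.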
My guess is that it will not happen. The reason it will not happen is
that most people either do not want to keep the relevant information
secret (I don't, for example; I like gadget catalogs) or do not want
to enough to go to any significant trouble. But it is still worth
thinking about how they could get privacy if they wanted to, and
those thoughts may become of more practical relevance if
technological progress sharply reduces the cost.
Two Roads to
Property in Personal Information
These discussions suggest two different ways in which the
technologies that in part create the problem could be used to solve
it. Both are ways of making it possible for an individual to treat
information about himself as his property. One is to use computer
technologies, including encryption, to give me or my trusted agents
direct control over the information, permitting others to use it with
my permission--for instance, to send me information about goods they
think I might want to buy--without ever getting possession of it.
The other is to treat information as we now treat real estate--to
permit individuals to put restrictions on the use of property they
own which thereafter "run with the land"--are binding on subsequent
purchasers. If, for example, I sell you an easement permitting you to
cross my land in order to reach yours, and later sell the land, the
easement is good against the buyer. Even if he did not know it
existed, he now has no right to refuse to let you through. The
equivalent is not true for most other forms of property. If I sell
you a car with the restriction that you agree not to permit it to be
driven on Sunday, I may be able to enforce the restriction against
you, I may be able to sue you for damages if, contrary to our
contract, you sell it to someone else without requiring him to abide
by the agreement, but I have no way of enforcing the restriction on
him.
An obvious explanation of the difference is that land ownership
involves an elaborate system for recording title, including
modifications such as easements, making it possible for the
prospective purchaser to determine in advance what obligations run
with the land he is considering. We have no such system for recording
ownership, still less for recording complicated forms of ownership,
for most other sorts of property.
At first glance, personal information seems even less suitable for
the more elaborate form of property rights than pens, chairs, or
computers. In most likely uses, the purchaser is buying information
about a very large number of people. If my particular bit of
information is only worth three cents to him, a legal regime that
requires him to spend a dollar checking the restrictions on it before
he uses it means that the information will never be used, even if I
have no objection.
A possible answer is that the data processing technologies that make
it possible to aggregate and use information on that scale could also
be used to make it possible, even easy, to maintain complicated
property rights in it. One could imagine a legal regime where every
piece of personal information had to be accompanied by a unique
identification number; using that number, a computer could access
information about the restrictions on use of that information in
machine readable form at negligible cost.
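Such a registry could be sketched as follows. Everything here, the class, the use categories, the choice of UUIDs as identifiers, is hypothetical; the point is only that a lookup by ID can make checking restrictions nearly costless.

```python
import uuid

# A toy registry for the legal regime imagined above: each piece of
# personal information carries a unique ID, and anyone holding the data
# can cheaply check its use restrictions before acting on it.
class RestrictionRegistry:
    def __init__(self):
        self._restrictions = {}

    def register(self, allowed_uses):
        # Record a datum's restrictions and return the ID that must travel
        # with the data thereafter, "running with the land."
        info_id = str(uuid.uuid4())
        self._restrictions[info_id] = set(allowed_uses)
        return info_id

    def permitted(self, info_id, use):
        # The negligible-cost check a purchaser runs before using the data.
        return use in self._restrictions.get(info_id, set())

registry = RestrictionRegistry()
tag = registry.register({"catalog-mailing"})
assert registry.permitted(tag, "catalog-mailing")
assert not registry.permitted(tag, "resale")
```

Because the check is automated, the three-cent piece of information no longer costs a dollar to clear, which removes the main objection to easement-like restrictions on personal data.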
Chapter
V: Surveillance Tech: The Universal
Panopticon
" The trend began in Britain a decade ago, in the city of King's
Lynn, where sixty remote controlled video cameras were installed to
scan known "trouble spots," reporting directly to police
headquarters. The resulting reduction in street crime exceeded all
predictions; in or near zones covered by surveillance, it dropped to
one seventieth of the former amount. The savings in patrol costs
alone paid for the equipment in a few months. Dozens of cities and
towns soon followed the example of King's Lynn. Glasgow, Scotland
reported a 68% drop in citywide crime, while police in Newcastle
fingered over 1500 perpetrators with taped evidence. (All but seven
pleaded guilty, and those seven were later convicted.) In May 1997, a
thousand Newcastle soccer fans rampaged through downtown streets.
Detectives studying the video reels picked out 152 faces and
published eighty photos in local newspapers. In days, all were
identified."
David Brin, The Transparent Society,
Chapter 1 p. 5.
In the early 19th century Jeremy Bentham, one of the
oddest and most original of English thinkers, designed a prison where
every prisoner could be watched at all times. He called it the
Panopticon. Elements of his design were later implemented in real
prisons in the hope of better controlling and reforming prisoners. If
Brin is correct, it is now being implemented on a somewhat larger
scale.
The case of video surveillance in Britain suggests one reason–it
provides an effective and inexpensive way of fighting crime. In the
U.S., such cameras have long been used in department stores to
discourage shoplifting. More recently they have begun to be used to
apprehend drivers who run red lights. While there have been
challenges on privacy grounds, it seems likely that the practice will
spread.
[32]
Crime prevention is not the only benefit of surveillance. Consider
the problem of controlling auto emissions. The current approach
imposes a fixed maximum on all cars, requires all to be inspected,
including new cars which are almost certain to pass, and provides no
incentive for lowering emissions below the required level. It makes
almost no attempt to selectively deter emissions at places and times
when they are particularly damaging.
[33]
One could build a greatly superior system using modern technology.
Set up unmanned detectors that measure emissions by shining a beam of
light through the exhaust plume of a passing automobile; identify the
automobile by a snapshot of the license plate. Bill the owner by
amount of emissions and, in a more sophisticated system, when and
where they were emitted.
[34]
None of these useful applications of technology poses, at first
glance, a serious threat to privacy. Few would consider it
objectionable to have a police officer wandering around a park or
standing on a street corner, keeping an eye out for purse snatchers
and the like. Video cameras on poles are simply a more convenient way
of doing the same thing–comfortably and out of the wet. Cameras
at red lights, or photometric monitoring of a car's exhaust plume,
are merely cheaper and more effective substitutes for traffic cops
and emission inspections. What's the problem?
The problem comes when we combine this technology with others. A cop
on the street corner may see you, he may even remember you, but he
has no way of combining everything he sees with everything that every
other cop sees and so reconstructing your daily life. A video camera
produces a permanent record. It is now possible to program a computer
to identify a person from a picture of his face. That means that the
video tapes produced by surveillance cameras will be convertible into
a record of where particular people were when. Add to that the power
of modern data processing to keep track of all the information and we
have the possibility of a world where a large fraction of your doings
are an open book to anyone with access to the appropriate
records.
So far I have been discussing the legal use of surveillance
technology, mostly by governments–something already happening
on a substantial scale and likely to increase in the near future. A
related issue is the use of surveillance technology, legally or
illegally, by private parties. Lots of people own video cameras–and
those cameras are getting steadily smaller. One can imagine, a decade
or two down the road, an inexpensive video camera with the size and
aerodynamic characteristics of a mosquito. The owner of a few dozen
of them could collect a lot of information about his neighbors–or
anyone else.
Of course technological development, in this area as in others, is
likely to improve defense as well as offense. Possible defenses
against such spying range from jamming transmissions to automated
dragonflies programmed to hunt down and destroy video mosquitoes.
Such technologies might make it possible, even in a world where all
public activities were readily observable, to maintain a zone of
privacy within one's own house.
Then again, they might not. We have already had court cases over
whether it is or is not a search to deduce marijuana growing inside a
house by using an infrared detector to measure its temperature from
the outside.
[35] We
already have technologies that make it possible to listen to a
conversation by bouncing a laser beam off a window and reconstructing
from the measured vibrations of the glass the sounds that cause them.
Even if it is not possible to spy on private life directly, further
developments along these lines may make it possible to achieve the
same objective indirectly.
Assume, for the moment, that the offense wins–that preventing
other people from spying on you becomes impractical. What options
remain?
Brin argues that privacy will no longer be one of them. More
interestingly, he argues that that may be a good thing. He proposes
as an alternative to privacy universal lack of privacy–the
transparent society. The police can watch you–but someone is
watching them. The entire system of video cameras, including cameras
in every police station, is publicly accessible. Click on the proper
web page–read, presumably, from a handheld wireless device–and
you can see anything that is happening in any public place. Parents
can keep an eye on their children, children on their parents, spouses
on each other, employers on employees and vice versa, reporters on
cops and politicians.
The Up Side
of Transparency
Many years ago my wife and I were witnesses to a shooting;
one result was the opportunity for a certain amount of casual
conversation with police officers. One of them advised me that, if I
ever happened to shoot a burglar, there were two things I should make
sure of–that he ended up dead and that the body ended up inside
my house.
The advice was well meant and perhaps sensible–under U.S. law a
homeowner is in a much stronger legal position killing an intruder
inside his house than outside, and a dead man cannot give his side of
the story. But it was also, at least implicitly, advice to commit
several felonies. That incident, and a less friendly one in another
jurisdiction where I was briefly under arrest for disturbing the
peace (my actual offense was aiding and abetting someone else in
asking a policeman for his badge number), convinced me that at least
some law enforcers, even ones who are honestly trying to prevent
crime, have an elastic view of the application of the law to
themselves and their friends. The problem is old enough to be the
subject of a Latin tag–Quis custodiet ipsos custodes? Who
shall guard the guardians?
The transparent society offers a possible solution. Consider the
Rodney King case. A group of policemen captured a suspect and beat
him up–a perfectly ordinary sequence of events in many parts of
the world, including some parts of the U.S. Unfortunately for the
police, a witness got the whole thing on video tape–with the
result that several [check the number] of the officers ended
up in prison. In Brin's world, every law enforcement agent knows that
he may be on candid camera–and conducts himself
accordingly.
It is an intriguing vision and it might actually happen. But there
are problems.
Selective
Transparency
The first is getting there. If transparency comes, as it is
coming in England, in the form of cameras on poles installed and
operated by the government, Brin's version does not seem likely. All
of the information will be flowing through machinery controlled by
some level of government. Whoever is in charge can plausibly argue
that although much of that information can and should be made
publicly accessible, there ought to be limits. And even if they do
not argue for limits, they can still impose them. If police are
setting up cameras in police stations, they can arrange for a few
areas to be accidentally left uncovered. If the FBI is in charge of a
national network it can, and on all past evidence will, make sure
that some of the information generated is accessible only to those
who can be trusted not to misuse it–most of whom are working
for the FBI.
The situation gets more interesting in a world where technological
progress enables private surveillance on a wide scale, so that every
location where interesting things might happen, including every
police station, has flies on the wall–watching what happens and
reporting back to their owners. A private individual, even a
corporation, is unlikely to attempt the sort of universal
surveillance that Brin imagines for his public system, so each
individual will be getting information about only a small part of the
world. But if that information is valuable to others, it can be
shared. Governments might try to restrict such sharing. But in a
world of strong privacy that will be hard to do, since in such a
world information transactions will be invisible to outside parties.
Combining ideas from several chapters of this section, one can
imagine a future where Brin's transparent society was produced not by
government but by private surveillance.
A key part of this story is the existence of a well organized market
for information. A universal spy network is likely to be an expensive
proposition, especially if you include the cost of information
processing–facial recognition of every image produced and
analysis of the resulting data. No single individual, probably no
single corporation, will find it in its interest to bear that cost to
produce information for its own use, although a government might. The
information will be produced privately only if there is some way in
which it can be resold, giving the producer not only the value of his
use of the information but the value of everyone's use of the
information.
The Down
Side of Transparency
Following Brin, I have presented the transparent society as a
step into the future, enabled by video cameras and computers. One
might also view it as a step into the past. The privacy that most of
us take for granted is to a considerable degree a novelty, a product
of rising incomes in recent centuries. In a world where many people
shared a single residence, where a bed at the inn was likely to be
shared by two or three strangers, transparency did not require video
cameras.
For a more extreme example, consider a primitive society–say
Samoa. Multiple families share a single house–without walls.
While there is no internet to spread information, the community is
small enough to make gossip an adequate substitute. [check]
Infants are trained early on not to make noise. Adults rarely express
hostility.
[36] Most
of the time, someone may be watching–so you alter your behavior
accordingly. If you do not want your neighbors to know what you are
thinking or feeling, you avoid clearly expressing yourself in words
or facial expression. You have adapted your life to a transparent
society.
Ultimately this comes down to two strategies, both familiar to most
of us in other contexts. One is not to let anyone know your secrets–to
live as an island. The other is to communicate in code–words or
expressions that your intimates will correctly interpret and others
will not. For a milder version of the same approach, consider parents
who talk to each other in a foreign language when they do not want
their children to understand what they are saying–or a 19th
century translation of a Chinese novel I once came across, with the
pornographic parts translated into Latin instead of English.
[37]
In Brin's future transparent society, many of us will become less
willing to express our opinions of boss, employees, ex-wife or
present husband in any public place. People will become less
expressive and more self-contained, conversation bland or cryptic. If
some spaces are still private, more of social life will shift to
them. If every place is public, we have stepped back at least several
centuries, arguably several millennia.
Say It Ain't
So
So far I have ignored one interesting problem with Brin's
world–verification. Consider the following courtroom drama:
My wife is suing me for divorce on grounds of adultery. In support of
her claim, she presents video tapes, taken by a hidden camera, that
show me making love to six different women, none of them her.
My attorney asks for a postponement to investigate the new evidence.
When the court reconvenes, he submits his own videotape. The jury
observes my wife making love, consecutively, to Humphrey Bogart,
Napoleon, her attorney and the judge. When quiet is restored in the
courtroom, my attorney presents the judge with the address of the
video effects firm that produced the tape.
With modern technology I do not, or at least soon will not, need your
cooperation to make a film of you doing things; a reasonable
selection of photographs will suffice. As Hollywood demonstrated with
"Roger Rabbit," it is possible to combine real and cartoon characters
in what looks like a single filmstrip. In the near future the
equivalent, using convincing animations of real people, will be
something that a competent amateur can produce on his desktop. We may
finally get to see Kennedy making love to Marilyn Monroe–whether
or not it ever happened.
In that world, the distinction between what I know and what I can
prove becomes critical. Our world may be filled with video
mosquitoes, each reporting to its owner and each owner pouring the
information into a common pool, but some of them may be lying. Hence
when I pull information out of the pool I have no way of knowing
whether to believe it.
There are possible technological fixes–ways of using encryption
technology to build a camera that digitally signs its output,
demonstrating that that sequence was taken by that camera at a
particular time. But it is hard to design a system that cannot be
subverted by the camera's owner. Even if we can prove that a
particular camera recorded a tape of me making love to six women, how
do we know whether it did so while pointed at me or at a video screen
displaying the work of an animation studio? Hence the potential for
forgery significantly weakens the ability of surveillance technology
to produce verifiable information.
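The signing scheme gestured at above can be sketched as follows. For simplicity this uses a keyed hash (HMAC) with a factory-installed secret as a stand-in for the asymmetric signature a real design would want, so that verifiers need not hold the camera's secret; all names and keys here are invented for illustration:

```python
import hashlib
import hmac
import json
import time

# Hypothetical per-camera secret embedded at manufacture. A real design
# would use an asymmetric key pair instead of a shared secret.
CAMERA_SECRET = b"factory-installed-secret"

def sign_frame(frame_bytes, timestamp):
    """Camera-side: bind a frame digest to this camera and this time."""
    record = {"ts": timestamp,
              "digest": hashlib.sha256(frame_bytes).hexdigest()}
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(CAMERA_SECRET, payload, hashlib.sha256).hexdigest()
    return record, tag

def verify_frame(frame_bytes, record, tag):
    """Verifier-side: check that frame and metadata are unaltered."""
    if hashlib.sha256(frame_bytes).hexdigest() != record["digest"]:
        return False
    payload = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(CAMERA_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

frame = b"...pixel data..."
record, tag = sign_frame(frame, time.time())
assert verify_frame(frame, record, tag)          # authentic frame passes
assert not verify_frame(b"edited", record, tag)  # tampering is detected
```

Note what the check does and does not prove: that this camera produced these bytes at that time, not that the lens was aimed at a live scene rather than at a video screen.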
For many purposes, unverifiable information will do–if my wife
wants to know about my infidelity but does not need to prove it. As
long as the government running a surveillance system can trust its
own people it can use that system to detect crimes or politically
unpopular expressions of opinion. And video evidence will still be
usable in trials, provided that it is accompanied by a sufficient
evidence trail to prove where and when it was taken–and that it
has not been improved since.
Should We
Abolish the Criminal Law?
Modern societies have two different systems of legal
rules–criminal law and tort law–that do essentially the same
thing. Someone does something that (arguably) injures others, he is
charged, tried, and convicted, and something bad happens to him as a
result. In the criminal system prosecution is controlled and funded
by the state, in the tort system by the victim. In the criminal
system a compromise is called a plea bargain, in the tort system an
out of court settlement. Criminal law provides a somewhat different
range of punishments–it is not possible to execute someone for
a tort, for example, although it was possible for something very much
like a tort prosecution to lead to execution under English law a few
centuries back–and operates under somewhat different legal
rules.
[38] But in
their general outlines, the two systems are merely slightly different
ways of doing the same thing.
This raises an obvious question–is there any good reason to
have both? Would we, for example, be better off abolishing criminal
law entirely and instead having the victims of crimes sue the
criminals?
One argument against a pure tort system is that some offenses are
hard to detect. A victim may conclude that catching and prosecuting
the offender costs more than it is worth–especially if he does
not have enough assets to pay substantial damages. Hence some
categories of offense may routinely go unpunished.
In Brin's world that problem vanishes. Every mugging is on tape. If
the mugger chooses to wear a mask while committing his crime we can
trace him backwards or forwards through the record until he takes it
off. While a sufficiently ingenious criminal might find a way around
that problem, most of the offenses that our criminal law now deals
with would be cases where most of the facts are known and only their
legal implications remain to be determined. Hence the normal crime
becomes very much like the normal tort–an auto accident, say,
where (except in the case of hit and run, which is a crime) the
identity of the party and many of the relevant facts are public
information. In that world it might make sense to abolish criminal
law and shift everything to the decentralized, privately controlled
alternative. If someone steals your car you check the video record to
identify him, then sue for the car plus a reasonable payment for your
time and trouble recovering it.
Like many radical ideas, this one looks less radical if one is
familiar with the relevant history. Legal systems in which something
similar to tort law dealt with what we think of as crimes–in
which if you killed someone his kinsmen sued you–are reasonably
common in the historical record. Even as late as the 18th
century, while the English legal system distinguished between torts
and crimes, both were in practice privately prosecuted, usually by
the victim.
[39] One
possible explanation for the shift to a modern, publicly prosecuted
system of criminal law is that it was a response to the increasing
anonymity that accompanied the shift to a more urban society in the
late 18th and early 19th centuries. Technologies
that reverse that shift may justify a reversal of the accompanying
legal changes.
Where Worlds
Collide
In the previous chapter I described a cyberspace with more
privacy than we have today. In this chapter I have described a
realspace with less. What happens if we get both?
It does no good to use strong encryption for my email if a video
mosquito is sitting on the wall watching me type and recording every
keystroke. Hence in a transparent society, strong privacy requires
some way of guarding the interface between my realspace body and
cyberspace. This is no problem in the version where the walls of my
house are still opaque. It is a serious problem in the version in
which every place is, in fact if not in law, public. A low tech
solution is to type under a hood. A high tech solution is some link
between mind and machine that does not go through the fingers–or
anything else visible to an outside observer.
The conflict between realspace transparency and cyberspace privacy
goes in the other direction as well. If we are sufficiently worried
about other people hearing what we say, one solution is to encrypt
face to face conversation. With suitable wireless gadgets, I talk
into a throat mike or type on a virtual keyboard (keeping my hands in
my pockets). My pocket computer encrypts my message with your public
key and transmits it to your pocket computer, which decrypts the
message and displays it through your VR glasses. To make sure nothing
is reading the glasses over your shoulder, the goggles get the image
to you not by displaying it on a screen but by using a tiny laser to
write it on your retina. With any luck, the inside of your eyeball is
still private space. Think of it as a high tech equivalent of talking
to your wife in French when you don't want the children to
understand.
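The message flow just described–encrypt with the listener's public key, decrypt with a private key only the listener holds–can be sketched with textbook RSA. The primes below are tiny and there is no padding, so this illustrates the idea rather than providing usable cryptography:

```python
# Toy RSA with tiny primes -- an illustration only, wildly insecure.
p, q = 61, 53
n = p * q                # public modulus
phi = (p - 1) * (q - 1)
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent (modular inverse, Python 3.8+)

def encrypt(message, public_key):
    """Anyone can encrypt using the public key (e, n)."""
    e, n = public_key
    return [pow(ord(ch), e, n) for ch in message]

def decrypt(ciphertext, private_key):
    """Only the holder of the private key (d, n) can decrypt."""
    d, n = private_key
    return "".join(chr(pow(c, d, n)) for c in ciphertext)

# My pocket computer encrypts with your public key...
ciphertext = encrypt("meet at noon", (e, n))
# ...and your pocket computer decrypts with the private key only you hold.
assert decrypt(ciphertext, (d, n)) == "meet at noon"
```

The eavesdropper standing next to us sees only the ciphertext in transit; without the private exponent, the conversation is as opaque to him as French is to the children.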
We could end up in a world where physical actions are entirely
public, information transactions entirely private. It has some
attractive features. Private citizens will still be able to take
advantage of strong privacy to locate a hit man, but hiring him may
cost more than they are willing to pay, since in a sufficiently
transparent world all murders are detected. Each hit man executes one
commission then goes directly to jail.
What about the interaction between these technologies and data
processing? On the one hand, it is modern data processing that makes
the transparent society such a threat–without that, it would
not much matter if you videotaped everything that happened in the
world, since nobody could ever find the particular six inches of
video tape he wanted in the million miles produced each day. On the
other hand, the technologies that support strong privacy provide a
possibility of reestablishing privacy, even in a world with modern
data processing, by keeping information about your transactions from
ever getting to anyone but you. That is a subject we will return to
when we discuss digital cash–an idea dreamed up in large part
as a way of restoring transactional privacy.
Chapter
VI: Why Do We Want Privacy Anyway?
In Chapter IV, I touched briefly on the question of why people care
about their privacy; it is now time to consider that question at
greater length. The first step is to define my terms a little more
precisely.
In this chapter I use “informational privacy” as
shorthand for an individual’s ability to control other people’s
access to information about him. If I have a legal right not to have
you tap my phone but cannot enforce that right–the situation at
present for those using cordless phones without encryption–then
I have little privacy in that respect. On the other hand, I have
strong privacy with regard to my own thoughts, even though it is
perfectly legal for other people to use the available technologies–listening
to my voice and watching my facial expressions–to try to figure
out what I am thinking.
[40]
Privacy in this sense depends on a variety of things, including both
law and technology. If someone invented an easy and accurate way of
reading minds, privacy would be radically reduced even if there were
no change in my legal rights.
There are two reasons to define privacy in this way. The first is
that I am interested in its consequences, in the ways in which my
ability to control information about me benefits or harms myself and
others—whatever the source of that ability may be. The second
is that I am interested in the ways in which technology is likely to
change the ability of an individual to control information about
himself—hence in changes in privacy due to sources other than
changes in law.
What
Is Informational Privacy and Why Does It Matter?
Many people go to some trouble to reduce the amount others
can find out about them, demonstrating that their own privacy has
positive value to them. Many people, sometimes the same people, make
an effort to get information about other people, demonstrating that
to them the privacy of those other people has negative value. This
leaves open the question of whether privacy on net is desirable or
undesirable—whether I gain more from being able to keep you
from finding things out about me than I lose from being unable to
find out things about you. Most people seem to think that the answer
is “yes.” It is common to see some new product,
technology, or legal rule attacked as reducing privacy, rare to see
anything attacked as increasing privacy. Why?
The reasons I value my privacy are straightforward. Information about
me in the hands of other people sometimes permits them to gain at my
expense. They may do so by stealing my property–if, for
example, they know when I will not be home. They may do so by getting
more favorable terms in a voluntary transaction–if, for
example, they know just how much I am willing to pay for the house
they are selling.
[41]
They may do so by preventing me from stealing their property–by,
for example, not hiring me as company treasurer after discovering
that I am a convicted embezzler.
Information about me in other people’s hands may also sometimes
make me better off–for example, the information that I am
honest and competent. But privacy does not prevent that. If I have
control over information about myself I can release it when doing so
benefits me and keep it private when releasing it would make me worse
off.
[42]
My examples included one–where my privacy protects me from
burglary–in which privacy produced a net benefit; the gain to a
burglar is normally less than the loss to his victim. They included one–where
my privacy permitted me to steal from others–in which privacy
produced a net loss. And they included one case–bargaining–where
the net effect appeared to be a wash—what I lost someone else
gained.
[43] So
while it is clear why I am in favor of my having privacy, it is still
unclear why I should expect privacy to, on average, produce a net
benefit—for my gains from my having privacy to outweigh my
losses from your having it.
That voluntary case is worth a little more attention. Consider a real
world example:
[44]
Before my wife and I moved from Chicago to California, we spent some
time looking for a house. We found, in the entire south bay,
precisely one house that we really liked—a lovely ninety year
old home, set in its own tiny island of green surrounded by walls and
hedges, in a neighborhood of fifties ranch houses. As an added bonus,
the current owners, having bought the house in dilapidated condition,
had put time and thought into undoing the effects of decades of
neglect. Apparently our tastes were almost as uncommon as the house—judged
by the fact that the owners were offering it at a price comparable to
new houses of similar size and having a sufficiently hard time
finding a buyer to be willing to consider offers at least somewhat
below their asking price.
We did not, probably could not, conceal the fact that we liked the
house. But we did make some attempt to conceal how much we liked the
house—and how much, if necessary, we were willing and able to
pay for it. If we had had no privacy, if the sellers had been able to
listen in to all of our thoughts and conversations, we would have
ended up paying noticeably more for it than we did. Conversely, if
they had had no privacy, we might have been able to discover that
they were willing to accept a lower price than the one we eventually
paid.
So far it looks as though the effect of more or less privacy is, as I
said earlier, a wash—one side of the bargain gains what the
other side loses. So long as we end up buying the house, that may be
true.
At some stage in the bargaining, we make a final offer and they do or
do not accept. Our offer is based in part on what the house is worth
to us and in part on what we think it is worth to them—our
estimate of the lowest offer they are reasonably likely to accept.
Whether they accept it depends in part on the worth of the house to
them, in part on whether they really believe it is our final offer or
think that by refusing it they can get a better one.
If one side or the other guesses wrong, if they refuse to accept our
offer because they think we will raise it or we refuse to raise it
because we think they will accept it, the bargain falls through and
we end up with our second or third favorite house instead. Such
bargaining breakdown represents a real loss—both sides are
worse off than if they had sold us the house at some price above
their value for it and below ours. Privacy, by making it harder for
each side to correctly interpret the other’s position, makes
such breakdown more likely.
Generalizing the argument, it looks as though privacy produces, on
average, a net loss in situations where parties are seeking
information about each other in order to improve the terms of a
voluntary transaction, since it increases the risk of bargaining
breakdown.
[45] In
situations involving involuntary transactions, privacy produces a net
gain if it is being used to protect other rights (assuming that those
rights have been defined in a way that makes their protection
desirable) and a net loss if it is being used to violate other rights
(with the same assumption). There is no obvious reason why the former
situation should be more common than the latter. So it remains
puzzling why people in general support privacy rights–why they
think it is, on the whole, a good thing for people to be able to
control information about themselves.
Privacy
Rights and Rent Seeking
I have a taste for watching pornographic videos. My boss is a
puritan who does not wish to employ people who enjoy pornography. If
I know my boss is able to monitor my rentals from the local video
store I respond by renting videos from a more distant and less
convenient outlet. My boss is no better off as a result of the
reduction in my privacy; I am still viewing pornography and he is
still ignorant of the fact. I am worse off by the additional driving
time required to visit the more distant store.
Privacy—embodied in a law forbidding the video store from
telling my boss what I am renting
[46]–not
only saves me time, it also discourages my boss from spending time
and effort worming information out of the clerk at the local video
store. It thus reduces both my costs and his—mine because I can
do what I want to do more easily, his because he can’t do it at
all.
As the odd asymmetry of the example suggests—lowering the cost
of keeping my secrets reduces my expenditure, while increasing the cost
of learning my secrets reduces his—the argument depends
on specific assumptions about the relevant demand and supply functions.
It goes through rigorously if the demand for privacy is inelastic and
the demand for information about others is elastic. Reverse those
assumptions and it becomes an argument against privacy, not for.
Put in that form the argument sounds abstract, but the concrete
version should be obvious to anyone who has ever closed a door behind
him, loosened his tie, taken off his shoes, and put his feet up on
his desk. Privacy has permitted him to maintain his reputation as
someone who behaves properly without having to bear the cost of
actually behaving properly—which is why there is no window
between his office and the adjacent hallway.
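The elasticity argument above is easy to check numerically. Assuming constant-elasticity demand curves (the scale and elasticity values below are arbitrary illustrations), total expenditure is price times quantity demanded at that price:

```python
def expenditure(price, elasticity, scale=100.0):
    """Total spending under constant-elasticity demand Q = scale * p^(-elasticity)."""
    quantity = scale * price ** (-elasticity)
    return price * quantity

# Inelastic demand for privacy (elasticity 0.5): when privacy gets
# cheaper, my total spending on keeping my secrets falls.
assert expenditure(0.5, 0.5) < expenditure(1.0, 0.5)

# Elastic demand for information (elasticity 2.0): when snooping gets
# costlier, my boss's total spending on learning my secrets falls.
assert expenditure(2.0, 2.0) < expenditure(1.0, 2.0)
```

Reverse the two elasticity assumptions and both inequalities flip, which is exactly why the argument can cut either for or against privacy.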
The explanation of privacy as a way of reducing costs associated with
getting and protecting information also depends on the assumption
that the information about me starts in my control, so that
facilitating privacy means making it easier for me to protect what I
already possess. But much information about me comes into existence
in other people’s possession. Consider, for example, court
records of my conviction on a criminal charge, or a magazine’s
mailing list with my name on it. Protecting my privacy with regard to
such information requires some way of removing that information from
the control of those people who initially possess it and transferring
control to me. That is, in most cases, a costly process. There are
lots of reasons, unconnected with privacy issues, why we want people
in general to have access to court records, and there is no obvious
non-legal mechanism by which I can control such access,
[47]
so if we do nothing to give people rights over such information about
them, the information will remain public and nothing will have to be
spent to restrict access to it.
Privacy
as Property
An alternative argument in favor of making privacy easier to
obtain starts with a point that I made earlier: if I have control
over information about me but transferring that information to
someone else produces net benefits, I can give or sell that
information to him. By protecting my control over information about
me we establish a market in information. Each piece of information
moves to the person who values it most, maximizing net benefit.
So far this is an argument not for privacy but for private property
in information.
[48]
To get to an argument for privacy requires two further steps. The
first is to observe that most information about me starts out in my
possession, although not necessarily my exclusive possession. Giving
anyone else exclusive rights to it requires somehow depriving me of
it–which, given the absence of technologies to produce
selective amnesia, is difficult. It would be possible to deprive me
of control over information by making it illegal for me to make use
of it or transmit it to others, but enforcing such a restriction
would be costly, perhaps prohibitively costly.
The second step is to note that legal rules that assign control over
information to the person to whom it is most valuable save us the
transaction costs of moving it to that person. Information about me
is sometimes most valuable to me (where it protects me from a
burglar), sometimes to someone else. There are, however, a lot of
different someone else’s. So giving each person control over
information about himself, especially information that starts in his
possession, is a legal rule that should minimize the transaction cost
of getting information to its highest valued user.
Stated in the abstract, this sounds like a reasonable argument–and
it would be, if we were talking about other forms of property. But
there are problems with applying a property solution to personal
information. Transacting over information is difficult because it is
hard to tell the customer what you are selling without, in the
process, giving it to him. And information can be duplicated at a cost
close to zero, so that while the efficient allocation of a car is to
the single person who has the highest value for it, the efficient
allocation of a piece of information is to everyone to whom it has
positive value.
[49]
That implies that legal rules that treat information as a commons,
free for everyone to make copies, lead to the efficient
allocation.
That conclusion must be qualified in two ways. First, legal
protection of information may be a cheaper substitute for private
protection; if the information is going to be protected because it is
in someone’s interest to do so, we might as well have it
protected as inexpensively as possible. Second, you cannot copy
information unless it exists. Thus we get the familiar argument from
the economics of intellectual property, which holds that patent and
copyright result in a suboptimal use of existing intellectual
property, since the marginal cost of an additional user is zero
while price is positive, but that in exchange we get a more
nearly optimal production of intellectual property.
Establishing rights to information in order to give someone an
incentive to create that information is a legitimate argument for
property rules in contexts such as copyright or patent. It is less
convincing in the context of privacy, since information about me is
either produced by me as a byproduct of other activities, in which
case I don’t need any additional incentive to produce it, or
else produced by other people about me, in which case giving property
rights in the information to me will not give them an incentive to
produce it. It does provide an argument for privacy in some contexts,
most obviously the context of trade secrets, where privacy is used to
protect produced information and so give people an incentive to
produce it.
Privacy
and Government
“It would have been impossible to proportion with
tolerable exactness the tax upon a shop to the extent of the trade
carried on in it, without such an inquisition as would have been
altogether insupportable in a free country.”
(Adam Smith’s explanation of why a sales tax is impractical;
Wealth of Nations Bk V, Ch II, Pt II, Art. II)
“The state of a man’s fortune varies from day to day,
and without an inquisition more intolerable than any tax, and renewed
at least once every year, can only be guessed at.” (Smith’s
explanation of why an income tax is impractical, Bk V Article IV)
So far I have ignored an issue central to the concerns of many
people, including myself: privacy from government. The case of
privacy from government differs from the case of privacy from private
parties in two important respects. The first is that although private
parties occasionally engage in involuntary transactions such as
burglary, most of their interactions with each other are voluntary
ones. Governments engage in involuntary transactions on an enormously
larger scale. The second difference is that governments almost always
have an overwhelming superiority of physical force over the
individual citizen. While I can protect myself from my fellow
citizens with locks and burglar alarms, I can protect myself from
government actors only by keeping information about me out of their
hands.
[50]
The implications depend on one’s view of government. If
government is the modern equivalent of the philosopher king,
individual privacy simply makes it harder for government actors to do
good. If a government is merely a particularly large and well
organized criminal gang, individual privacy against government
becomes an unambiguously good thing. Most Americans appear, judging
by expressed views on privacy, to be close enough to the latter
position to consider privacy against government as on the whole
desirable, with an exception for cases where they believe that
privacy might be used primarily to protect private criminals.
Seen from this standpoint, one problem with Brin's transparent
society is the enormous downside risk. Played out under less
optimistic assumptions than his, the technology could enable a
tyranny that Hitler or Stalin might envy. Even if we assume with Brin
that the citizens are as well informed about the police as the police
about the citizens, it is the police who have the guns. They know if
we are doing or saying anything they disapprove of and can respond
accordingly, arresting, imprisoning, perhaps torturing or executing
their opponents. We have the privilege of watching. Why should they
object? Public executions are an old tradition, designed in part to
discourage other people from doing things the authorities disapprove
of.
It does not follow that Brin's prescription is wrong. His argument,
after all, is that privacy will simply not be an option, either
because the visible benefits of surveillance are so large or because
the technology will make it impossible to prevent. If he is right,
his transparent society may at least be better than the alternative–universal
surveillance to which only those in power have access, a universal
Panopticon with government as the prison guards.
Part
III: Doing Business Online
The growing importance of Cyberspace is one revolution we can
be confident of, since it has already happened. An earlier chapter
discussed the implications for privacy. This section deals with how
to do business in a networked world in which physical location and
physical identity are becoming increasingly irrelevant. The issues
are connected, since some of the tools for doing business in
cyberspace also provide ways of maintaining control over personal
information while doing so.
We start with the problem of money–how to pay for things? One
attractive answer is anonymous ecash–money that can be
passed from one computer to another by sending messages, with no need
to transmit anything physical. Such a system has some attractive
applications–it makes possible a simple solution to the
irritation of spam email and may make it possible to reclaim the
privacy that modern data processing is currently destroying. It also
makes some current law enforcement strategies, most notably the
attempt to enforce laws by monitoring and controlling the flow of
money, unworkable. Perhaps most interestingly, it raises the
possibility of widely used private currencies competing with each
other and with government money in realspace as well as in
cyberspace.
The next chapter considers a different problem–enforcing
contracts online. Online interactions are, in some sense, entirely
voluntary; you (or your computer) can be tricked into doing something
you do not want to do, but you cannot be forced to, since you are the
one with physical control over your
computer. In the worst case you can always pull the plug. In an
entirely voluntary world, most legal issues can be reduced to
contract law. While enforcement of contract law through the court
system will become increasingly difficult online, it seems likely
that private alternatives ultimately based on reputational sanctions
will become increasingly viable.
We consider next property–intellectual property. A world of
easy and inexpensive copying and communication is a world where
enforcing copyright is extraordinarily difficult. Are there other
ways, perhaps better suited to that world, to give creators control over what
they create? That brings us to the recent and increasingly
controversial issue of technological protection of intellectual
property–the online equivalent of the barbed wire fences whose
invention revolutionized western agriculture. It also brings us back
to the possibility of treating personal information as private
property–protected not by law but by technology.
The final chapter of this section deals with ways in which the new
technologies, by greatly reducing the cost of communication and
information, change how we organize our productive activities. One
pattern is a shift away from formal organizations such as
corporations and universities towards more decentralized models, such
as networks of amateur scholars and open source programmers.
Chapter
VII: Ecash
Most of us pay for things in three different ways–credit
cards, checks and cash. The first two let you make large payments
without having to carry large amounts of money with you. What are the
advantages of the third?
A seller does not have to know anything about you in order to accept
cash. That makes money a better medium for transactions with
strangers, especially strangers from far away. It also makes it a
better medium for small transactions, since using money avoids the
fixed costs of checking up on someone to make sure that there is
really money in his checking account or that his credit is good. That
also means that money leaves no paper trail, which is useful not only
for criminals but for anyone who wants to protect his privacy—an
increasingly important issue in a world where data processing
threatens to make every detail of our lives public property. There is
no need for anyone but you to know that the transaction occurred; the
seller knows what he sold but not to whom.
The advantage of money is greater in cyberspace, since transactions
with strangers, including strangers far away, are more likely on the
internet than in my realspace neighborhood. The disadvantage is less,
since my ecash is stored inside my computer, which is usually inside
my house, hence less vulnerable to theft than my wallet.
Despite its potential usefulness, there is as yet no
equivalent of cash available online, although there have been
unsuccessful attempts to create one and successful attempts to create
something close.
[51]
The reason is not technological; those problems have been solved. The
reason is in part the hostility of governments to competition in the
money business and their unwillingness, so far, to create their own
ecash, and in part the difficulty of getting standards, in this case
private monetary standards, established. I expect both problems to be
solved sometime in the next decade or so.
Before discussing how a system of virtual currency, private or
governmental, might work, it is worth first giving at least one
example of why it would be useful–for something more important
than allowing men to look at pornography online without their wives
or employers finding out.
Slicing
Spam
My email contains much of interest. It also contains READY
FOR A SMOOTH WAY OUT OF DEBT?, A Personal Invitation from
make_real_money@BIGFOOT.COM,
You've Been Selected..... from
friend@localhost.net,
and a variety of similar messages, of which my favorite offers “the
answer to all your questions.” The internet has brought many
things of value, but for most of us unsolicited commercial email,
better known as spam, is not one of them.
There is a simple solution to this problem—so simple that I am
surprised nobody has yet implemented it. The solution is to put a
price on your mailbox. Give your email program a list of the people
you wish to receive mail from. Mail from anyone not on the list is
returned, with a note explaining that you charge five cents to read
mail from strangers– and the URL of the stamp machine. Five
cents is a trivial cost to anyone with something to say that you are
likely to want to read, but five cents times ten million recipients
is quite a substantial cost to someone sending out bulk email on the
chance that one recipient in ten thousand may respond.
The stamp machine is located on a web page. The stamps are digital
cash. Pay ten dollars from your credit card and you get in exchange
two hundred five cent stamps–each a morsel of encrypted
information that you can transfer to someone else and that he, or
someone he transfers it to, can eventually bring back to the stamp
machine and turn back into cash.
A virtual stamp, unlike a real stamp, can be reused; it pays
not for the cost of transmitting my mail but for my time and trouble
reading it, so the payment goes to me, not the post office. I can use
it the next time I want to send a message to a stranger. If lots of
strangers choose to send me messages, I can accumulate a surplus of
stamps to be changed back into cash.
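The pay-to-read scheme above can be sketched in a few lines of code. This is a minimal illustration, not a real implementation: the class and method names are invented for the example, and a real system would verify each stamp cryptographically with the stamp machine rather than trusting a plain token.

```python
# Sketch of the pay-to-read mail filter described above.
# All names (Stamp, MailFilter, the five-cent price) are illustrative.

class Stamp:
    def __init__(self, value_cents):
        self.value_cents = value_cents

class MailFilter:
    def __init__(self, whitelist, price_cents=5):
        self.whitelist = set(whitelist)
        self.price_cents = price_cents
        self.stamps_collected = []   # reusable: I can spend these on my own mail

    def handle(self, sender, stamp=None):
        """Return 'read' if the mail reaches me, else a bounce message."""
        if sender in self.whitelist:
            return "read"
        if stamp is not None and stamp.value_cents >= self.price_cents:
            self.stamps_collected.append(stamp)   # the payment goes to me
            return "read"
        return (f"bounced: I charge {self.price_cents} cents "
                "to read mail from strangers")
```

Mail from a listed friend goes straight through; mail from a stranger is read only if it carries a sufficient stamp, which the recipient keeps for reuse.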
The previous section dealt with informational privacy. This time the
issue is attentional privacy–the form of privacy violated when
someone calls me up at dinner to offer to refinance my nonexistent
mortgage.
[52]
Virtual stamps are a way of charging people for violating my
attentional privacy.
How much I charge is up to me. If I hate reading messages from
strangers, I can make the price a dollar, or ten dollars, or a
hundred dollars–and get very few of them. If I enjoy junk
email, I can set a low price. Once such a system is established, the
same people who presently create and rent out the mailing lists used
to send spam will add another service–a database keeping track
of what each potential target charges to receive it.
What is in it for the stamp machine–why would someone maintain
such a system? The old answer goes by the name of "seignorage"–the
profit from coining money. After selling a hundred million five cent
stamps, you have five million dollars of money. If your stamps are
popular, many of them may stay in circulation for a long time–leaving
the money that bought them in your bank account accumulating
interest.
In addition to the free use of other people’s money, there is a
second advantage. If you own the stamp machine, you also own the wall
behind it–the web page people visit to buy stamps.
Advertisements on that wall will be seen by a lot of people.
One reason this solution to spam requires ecash is that it involves a
large number of very small payments. It would be a lot clumsier if we
used credit cards–every time you received a message with a five
cent stamp, you would have to check with the sender's bank before
reading it to make sure the payment was good.
A second reason is privacy. Many of us would prefer not to leave a
complete record of our correspondence with a third party–which
we would be doing if we used credit cards or something similar. What
we want is not merely ecash but anonymous ecash–some way of
making payments that provides no information to third parties about
who has paid what to whom.
Constructing
Ecash
Suppose a bank wants to create a system of ecash. The first
and easiest problem is how to provide people with virtual banknotes
that cannot be counterfeited.
The solution is a digital signature. The bank creates a banknote that
says "First Bank of Cyberspace: Pay the bearer one dollar in U.S.
currency." It digitally signs the note, using its private key. It
makes the matching public key widely available. When you come in to
the bank with a dollar, it gives you a banknote in the form of a file
on a floppy disk. You transfer the file to your hard disk, which now
has a one dollar bill with which to buy something from someone else
online. When he receives the file he checks the digital signature
against the bank's public key.
The Double
Spending Problem
There is a problem—a big problem. What you have gotten
for your dollar is not one dollar bill but an unlimited number of
them. Sending a copy of the file in payment for one transaction does
not erase it from your computer, so you can send it again to someone
else to buy something else. And again. That is going to be a problem
for the bank, when twenty people come in to claim your original
dollar bill.
One solution is for the bank to give each dollar its own serial
number and keep track of which ones have been spent. When a merchant
receives your dollar bill he sends it to the bank, which deposits it
in his account and adds its serial number to a list of banknotes that
are no longer valid. When you try to spend a second copy of the note,
the merchant who receives it tries to deposit it, is informed that it
is no longer valid, and doesn't send you your goods.
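The serial-number bookkeeping just described is simple enough to sketch directly. The class and names below are invented for illustration; the point is only that once a note's serial number is struck from the valid list, a second copy is worthless.

```python
# Sketch of the serial-number solution to double spending:
# the bank remembers which notes have already been deposited.
# Names are illustrative.

class Bank:
    def __init__(self):
        self.next_serial = 0
        self.valid = set()       # serial numbers still spendable
        self.accounts = {}       # merchant -> dollars deposited

    def issue(self):
        """Sell a one-dollar note; its serial number is the 'banknote'."""
        self.next_serial += 1
        self.valid.add(self.next_serial)
        return self.next_serial

    def deposit(self, serial, merchant):
        """A merchant sends in a note; a second copy is refused."""
        if serial not in self.valid:
            return False
        self.valid.remove(serial)    # this note can never be deposited again
        self.accounts[merchant] = self.accounts.get(merchant, 0) + 1
        return True
```

Note that the bank in this sketch sees both who bought the note and who deposited it, which is exactly the privacy problem discussed next.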
This solves the problem of double spending, but it also eliminates
most of the advantages of ecash over credit cards. The bank knows
that it issued banknote 94602... to Alice, it knows that it came back
from Bill, so it knows that Alice bought something from Bill, just as
it would if she had used a credit card.
There is a solution to this problem. It uses what David Chaum, the
Dutch cryptographer who is responsible for many of the ideas
underlying ecash, calls
blind signatures. It is a way in which
Alice, having rolled up a random serial number for a dollar bill, can
get the bank to sign that serial number (in exchange for paying the
bank a dollar) without having to tell the bank what the number it
is signing is. Even though the bank does not know the serial number
it signed, both it and the merchant who receives the note can check
that the signature is valid. Once the dollar bill is spent, the
merchant has the serial number, which he reports to the bank, which
can add it to the list of serial numbers that are now invalid. The
bank knows it provided a dollar to Alice, it knows it received back a
dollar from Bill, but it does not know that they are the same dollar.
So it does not know that Alice bought something from Bill. The seller
has to check with the bank and know that the bank is trustworthy, but
it does not have to know anything about the purchaser.
Curious readers will want to know how it is possible for a bank to
sign a serial number without knowing what it is. I cannot tell them
without first explaining the mathematics of public key encryption,
which requires more math than I am willing to assume my average
reader has. Those who are curious can find the answers in the virtual
footnotes, which point to webbed explanations of both public key
encryption and blind signatures.
[53]
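For readers comfortable with a little modular arithmetic, the standard construction behind Chaum's blind signatures can be shown with a toy RSA key. The numbers below are textbook-sized for readability; a real bank would use a key thousands of bits long and padded messages. Alice blinds her serial number with a random factor, the bank signs the blinded value without ever seeing the serial number, and Alice divides the factor back out to recover a valid signature.

```python
# Toy RSA blind signature (the construction underlying Chaum's scheme).
# Key is textbook-sized: p=61, q=53, n=3233, e=17, d=413.

n, e, d = 3233, 17, 413       # public modulus/exponent, private exponent

def blind(serial, r):
    """Alice hides her serial number; the bank sees only this value."""
    return (serial * pow(r, e, n)) % n

def sign_blinded(blinded):
    """The bank signs without learning the serial number."""
    return pow(blinded, d, n)

def unblind(blinded_sig, r):
    """Alice divides out her blinding factor, recovering a real signature."""
    return (blinded_sig * pow(r, -1, n)) % n

def verify(serial, sig):
    """Anyone with the public key can check the signature."""
    return pow(sig, e, n) == serial % n

serial, r = 123, 7            # Alice's secret serial number and blinding factor
sig = unblind(sign_blinded(blind(serial, r)), r)
assert verify(serial, sig)    # valid signature on a number the bank never saw
```

The bank signed a dollar bill whose serial number it cannot recognize when the note comes back, which is precisely the property the text describes. (The modular-inverse form `pow(r, -1, n)` requires Python 3.8 or later.)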
So far I have been assuming that people who receive digital cash can
communicate with the bank that issues it while the transaction is
taking place–that they and the bank are connected to the
internet or something similar. That is not a serious constraint if
the transaction is occurring online. But digital cash could also be
useful for realspace transactions–and the cabby or hotdog
vendor may not have an internet connection.
The solution is another clever trick (Chaum specializes in clever
tricks). It is a form of ecash that contains information about the
person it was issued to but only reveals that information if he tries
to spend the same dollar bill twice. For an explanation of how it
works, you must again go to the virtual footnotes.
[54]
Skeptical readers should at this point be growing increasingly
unhappy at being told that everything about ecash is done by
mathematics that I am unwilling to explain–which they may
reasonably enough translate as "smoke and mirrors." For their benefit
I have invented my own form of ecash–one that has all of the
features of the real thing and can be understood with no mathematics
beyond the ability to recognize numbers. It would be a good deal less
convenient in practice than Chaum's version but it is a lot easier to
explain, and so provides at least a possibility proof for the real
thing.
Low Tech
Ecash
I randomly create a very long number. I put the number and a
dollar bill in an envelope and mail it to the First Bank of
Cybercash. The FBC agrees–in a public statement–to do two
things with money it receives in this way:
I. If anyone walks into the FBC and presents the number, he gets the
dollar bill.
II. If the FBC receives a letter that includes the number associated
with a dollar bill it has on deposit, instructing the FBC to change
it to a new number, it will make the change and post the fact of the
transaction on a publicly observable bulletin board. The dollar bill
will now be associated with the new number.
Let's see how this works:
Alice has sent the FBC a dollar, accompanied by the number 59372. She
now wants to buy a dollar's worth of digital images from Bill, so she
emails the number to him in payment. Bill emails the FBC, sending
them three numbers–59372, 21754, 46629.
The FBC checks to see if it has a dollar on deposit with number
59372; it does. It changes the number associated with that dollar
bill to 21754, Bill's second number. Simultaneously, it posts on a
publicly observable bulletin board the statement "the transaction
identified by 46629 has gone through." Bill reads that message, which
tells him that Alice really had a dollar bill on deposit and it is
now his, so he emails her a dollar's worth of digital images.
Alice no longer has a dollar, since if she tries to spend it again
the bank will report that it is not there to be spent–FBC no
longer has a dollar associated with the number she knows. Bill now
has a dollar, since the dollar that Alice originally sent in is now
associated with a new number and only he (and the bank) knows what it
is. He is in precisely the same situation that Alice was before the
transaction, so he can now spend the dollar to buy something from
someone else. Like an ordinary paper dollar, the dollar of ecash in
my system passes from hand to hand. Eventually someone who has it
decides he wants a dollar of ordinary cash instead; he takes his
number, the number that Alice's original dollar is now associated
with, to the FBC to exchange for a dollar bill.
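The FBC's two public promises, and the Alice-and-Bill walkthrough above, can be replayed directly in code. This is a sketch under the numbers used in the text; a real bank would of course use hundred-digit numbers, as discussed below.

```python
# Sketch of the First Bank of Cybercash protocol described above.
# The five-digit numbers match the walkthrough in the text.

class FBC:
    def __init__(self):
        self.deposits = set()      # numbers with a dollar bill on deposit
        self.bulletin_board = []   # publicly observable

    def deposit_dollar(self, number):
        """A dollar bill arrives by mail with its associated number."""
        self.deposits.add(number)

    def transfer(self, old, new, receipt_id):
        """Rule II: renumber a deposited dollar and post the receipt."""
        if old not in self.deposits:
            return False           # no such dollar: a double spend is refused
        self.deposits.remove(old)
        self.deposits.add(new)
        self.bulletin_board.append(
            f"the transaction identified by {receipt_id} has gone through")
        return True

    def redeem(self, number):
        """Rule I: present the number, get the dollar bill."""
        if number in self.deposits:
            self.deposits.remove(number)
            return "one dollar bill"
        return None
```

Replaying the walkthrough: Alice deposits a dollar under 59372; Bill's transfer to 21754 (with receipt 46629) succeeds and is posted; Alice's attempt to spend 59372 again fails; Bill can redeem 21754 for the paper dollar.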
My ecash may be low tech, but it meets all of the requirements.
Payment is made by sending a message. Payer and payee need know
nothing about each other's identity beyond the address to send the
message to. The bank need know nothing about either party. When the
dollar bill originally came in, the letter had no name on it, only an
identifying number. Each time it changed hands, the bank received an
email but had no information about who sent it. When the chain of
transactions ends and someone comes into the bank to collect the
dollar bill he need not identify himself; even if the bank can
somehow identify him he has no way of tracing the dollar bill back up
the chain. The virtual dollar in my system is just as anonymous as
the paper dollars in my wallet.
With lots of dollar bills in the bank there is a risk that two might
by chance have the same number, or that someone might make up numbers
and pay with them in the hope that the numbers he invents will, by
chance, match numbers associated with dollar bills in the bank. But
both problems become insignificant if instead of using five digit
numbers we use hundred digit numbers. The chance that two random
hundred digit numbers will turn out to be the same is a good deal
less than the chance that payer, payee, and bank will all be struck
by lightning at the same time.
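Generating such numbers is trivial for a computer. As a sketch, Python's standard library provides cryptographic-quality randomness; the function name here is invented for the example.

```python
# Drawing a hundred-digit random serial number from a
# cryptographically strong source in the standard library.
import secrets

def new_serial(digits=100):
    return secrets.randbelow(10 ** digits)

a, b = new_serial(), new_serial()
# The chance that two independent draws collide is 1 in 10**100;
# even with a billion notes on deposit, a guessed number matches
# one of them with probability only about 1 in 10**91.
assert a != b
assert a < 10 ** 100
```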
Robot
Mechanics
It may have occurred to you that if you have to roll up a
hundred digit random number every time you want to buy a dollar of
ecash and two more every time you receive one, not to mention sending
off one anonymous email to the bank for every dollar you receive,
ecash may be more trouble than it is worth. Don't worry. That's your
computer's job, not yours. With a competently designed ecash system,
the program takes care of all mathematical details; all you have to
worry about is having enough money to pay your (virtual) bills. You
tell your computer what to pay to whom; it tells you what other
people have paid to you and how much money you have. Random numbers,
checks of digital signatures, blind signing, and all the rest are
done in the background. If you find that hard to believe, consider
how little most of us know about how the tools we routinely use, such
as cars, computers, or radios, actually work.
Ecash
and Privacy
When Chaum came up with the idea of ecash and created the
mathematics necessary to make it work, email was not yet sufficiently
popular to make spam an issue. What motivated him was the problem we
discussed back in chapter ???– loss of privacy due to the
ability of computers to combine publicly available information into a
detailed portrait of each individual.
Consider an application of ecash that Chaum has actually worked on–automated
toll collection. It would be very convenient if, instead of stopping
at a toll booth when getting on or off the interstate, we could
simply drive past, making the payment automatically in the form of a
wireless communication between the (unmanned) toll booth and the car.
The technology to do this exists and has long been used to provide
automated toll collection for buses on some roads.
One problem is privacy. If the payment is made with a credit card, or
if the toll agency adds up each month's tolls and sends you a bill,
someone has a complete record of every trip you have taken on the
toll road, every time you have crossed a toll bridge. If we deal with
auto pollution by randomly measuring pollutants in the exhaust plumes
of passing automobiles and billing their owners, someone ends up with
detailed, if somewhat fragmentary, records of where you were
when.
Ecash solves that problem. As you whiz past the toll booth, your car
pays it fifty cents in anonymous ecash. By the time you are thirty
feet down the road, the (online) toll booth has checked that the
money is good; if it isn't, an alarm goes off, a camera triggers, and
if you don't stop a traffic cop eventually appears on your tail. But
if your money is good you go quietly about your business–and
there is no record of your passing the toll booth. The information
never came into existence, save in your head. Similarly for an
automated system of pollution charges.
It works for shopping as well. Ecash–this time encoded in a
smart card in your wallet or a palmtop computer in your pocket–provides
much of the convenience of a credit card with the anonymity of cash.
If you want the seller to know who you are, you are free to tell him.
But if you prefer to keep your transactions private, you can.
Private
Money: A New Old Story
My examples so far implicitly assumed two things about ecash—that
it will be produced and redeemed by private banks but denominated in
government money. Both are likely, at least in the short run. Neither
is necessary.
Private money denominated in dollars is already common. My money
market fund is denominated in dollars, although Merrill Lynch does
not actually have a stack of dollar bills in a vault somewhere that
corresponds to the amount of money "in" my account. My university
I.D. card doubles as a money card, with numbers of dollars stored on
its magnetic strip—and decreased appropriately every time I buy
lunch on campus.
A bank could issue ecash on the same basis. Each dollar of ecash
represents a claim to be paid a dollar bill. But the actual assets
backing that claim consist not of a stack of dollar bills but of
stocks, bonds, and the like–which have the advantage of paying
the bank interest for as long as the dollar of ecash is out there
circulating.
While I do not have to know anything about you in order to accept
your ecash, I do have to know something about the bank that issued it–enough
to be sure that the money will eventually be redeemed. That means
that any ecash expected to circulate widely will be issued by
organizations with reputations. In a world of almost instantaneous
information transmission, those organizations will have a strong
incentive to maintain their reputations, since a loss of confidence
will result in money holders bringing in virtual banknotes to be
redeemed, eliminating the source of income that the assets backing
those banknotes provided.
Some economists, in rejecting the idea of private money, have argued
that such an institution is inherently inflationary. Since issuing
money costs a bank nothing and gives it the interest on the assets it
buys with the money, it is always in the bank's interest to issue
more.
The rebuttal to this particular error was published in 1776. When
Adam Smith wrote
The Wealth of Nations, the money of Scotland
consisted largely of banknotes issued by private banks, redeemable in
silver.
[55] As
Smith pointed out, while a bank could print as many notes as it
wished, it could not persuade other people to hold an unlimited
number of its notes. A customer who holds a thousand dollars in
virtual cash–or Scottish banknotes–when he only needs a
hundred is giving up the interest he could have been earning if he
had held the other nine hundred dollars in bonds or some other
interest earning asset instead. That is a good reason to limit his
cash holdings to the amount he actually needs for day to day
transactions.
What happens if a bank tries to issue more of its money than people
wish to hold? The excess comes back to be redeemed. The bank is
wasting its resources printing up money, trying to put it into
circulation, only to have each extra banknote promptly returned for
cash–in Smith's case, silver. The obligation of the bank to
redeem its money for something else of value guarantees its value,
and at that value there is a fixed amount of the money that people
will choose to hold.
"Let us suppose that all the paper of a particular bank, which the
circulation of the country can easily absorb and employ, amounts
exactly to forty thousand pounds; and that for answering occasional
demands, this bank is obliged to keep at all times in its coffers ten
thousand pounds in gold and silver. Should this bank attempt to
circulate forty-four thousand pounds, the four thousand pounds which
are over and above what the circulation can easily absorb and employ,
will return upon it almost as fast as they are issued." (Bk II, Ch. II)
This assumes that the value of whatever the money is defined in–dollars
for us, silver for Smith–is given. In Smith's case that was a
reasonable assumption; in our case it may not be. The more people use
things other than government currency to pay their bills–whether
the university issued money card I presently have, checks drawing off
a money market fund, or ecash–the lower the demand for
government currency. The lower the demand for government currency the
lower, all else being equal, its purchasing power will be. Thus the
growth of private money denominated in dollars could cause inflation
if the government issuing the dollars is unwilling to follow a
monetary policy aimed at preventing it–which might require
reducing the amount of currency in circulation as alternatives become
more popular, in order to maintain its value.
For that reason and others, it is worth rethinking the assumption
that future ecash will and should be denominated in dollars. Dollars
have one great advantage–they provide a common unit already in
widespread use. They also have one great disadvantage–they are
produced by a government, and it may not always be in the interest of
that government to maintain their value in a stable, or even
predictable, way. On past evidence, governments sometimes increase or
decrease the value of their currency inadvertently or for any of a
variety of political purposes. In the extreme case of a
hyperinflation, a government tries to fund its activities with the
printing press, rapidly increasing the amount of money and decreasing
its value. In less extreme cases, a government might inflate in order
to benefit debtors by inflating away the real value of their debts–governments
themselves are often debtors, hence potential beneficiaries of such a
policy–or it might inflate or deflate in the process of trying
to manipulate its economy for political ends.
[56]
Dollars have a second disadvantage, although perhaps a less serious
one. Because they are issued by a particular government, citizens of
other governments may prefer not to use them. This has not prevented
dollars from becoming a
de facto world currency, but it is one
reason why a national currency might not be the best standard to base
ecash on. The simplest alternative would be a commodity standard,
making the unit of ecash a gram of silver, or gold, or some other
widely traded commodity.
Under a commodity standard the monetary unit is no longer under the
control of a government, but it is subject to the forces that affect
the value of any particular commodity. That may lead to undesirable
uncertainty in its value. That problem is solved by replacing a
simple commodity standard with a commodity bundle.
Bring in a million Friedman Dollars and I agree to give you in
exchange ten ounces of gold, forty ounces of silver, ownership of a
thousand bushels each of grade A wheat and grade B soybeans, two tons
of steel [check for equivalent of grade], ... . If the
purchasing power of a million of my dollars is less than the value of
the bundle, it is profitable for people to assemble a million
Friedman dollars, exchange them for the bundle, and sell the contents
of the bundle–forcing me to make good on my promise and, in the
process, reducing the amount of my money in circulation. If the
purchasing power of my money is more than the worth of the
commodities it trades for, it is in my interest to issue some more.
Since the bundle contains lots of different commodities, random
changes in commodity prices can be expected to roughly average out,
giving us a stable standard of value.
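The arbitrage that pins a bundle-backed currency to its redemption value can be sketched numerically. The bundle contents follow the text; the spot prices and function names are made up for illustration.

```python
# Sketch of the arbitrage pinning a bundle-backed currency to the
# value of its redemption bundle. Prices are illustrative.

BUNDLE = {                     # redemption bundle per million Friedman Dollars
    "gold_oz": 10, "silver_oz": 40,
    "wheat_bu": 1000, "soy_bu": 1000, "steel_tons": 2,
}

def bundle_value(spot_prices):
    """Market value of the redemption bundle at current spot prices."""
    return sum(qty * spot_prices[good] for good, qty in BUNDLE.items())

def issuer_action(purchasing_power_per_million, spot_prices):
    """Which way the arbitrage pushes the money supply."""
    value = bundle_value(spot_prices)
    if purchasing_power_per_million < value:
        return "holders redeem: money supply shrinks"
    if purchasing_power_per_million > value:
        return "issuer prints more: money supply grows"
    return "in equilibrium"
```

If a million Friedman Dollars buys less than the bundle is worth, holders profitably redeem and the money supply contracts; if it buys more, the issuer profitably expands it; either way the purchasing power is pushed back toward the bundle's value.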
A commodity bundle is a good theoretical solution to the problem of
monetary standards, but implementing it has a serious practical
difficulty–all the firms issuing ecash have to agree on the
same bundle. If they fail to establish a common standard, we end up
with a cyberspace in which different people use different currencies
and the exchange rates between them vary randomly. That is not an
unworkable system–Europeans have lived with it for a very long
time–but it is a nuisance.
Life is easier if the money I use is the same as the money used by
most of the people I do business with. On that fact our present world
system–multiple government monies, each with a near monopoly
within the territory of the issuing government–is built. It
works because most transactions are with people near you, and–unless
you happen to live next to the border–people near you live in
the same country you do. It works less well in Europe than in North
America because the countries are smaller.
That system does not work in cyberspace, because in cyberspace
national borders are transparent. For information transactions,
geography is irrelevant–I can download software or digital
images from London as easily as from New York. For online purchases
of physical objects geography is not entirely irrelevant, since the
goods have to be delivered, but less relevant than in my realspace
shopping. Hence a system of multiple national currencies means
everyone in cyberspace having to juggle multiple currencies in the
process of figuring out who has the best price and paying it. The
obvious solution is to establish a single standard of value for
cyberspace–either by adopting one national currency, probably
the dollar, or by establishing a private standard, such as the sort
of commodity bundle described above.
There may be another solution. The reason that everyone wants to use
the same currency as his neighbors is that currency conversion is a
nuisance. But currency conversion is arithmetic, and computers do
arithmetic fast and cheap, which suggests that, with some minor
improvements in the interfaces on which we do online business, we
could make the choice of currency irrelevant, permitting multiple
standards to coexist.
I live in the U.S.; you live in Italy. You have goods to sell,
displayed on a web page, with prices in lira. I view that page
through my brand new browser–Netscape Navigator v 9.0. One
feature of the new browser is that it is currency transparent. You
post your prices in lira but I see them in dollars. The browser does
the conversion on the fly, using exchange rates read, minute by
minute, from my bank's web page. If I want to buy your goods, I pay
in dollar denominated ecash; my browser sends it to my bank which
sends lira denominated ecash to you. I neither know nor care what
country you are in or what money you use–it's all dollars to
me.
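The conversion the browser does on the fly really is just arithmetic. A minimal sketch, with an invented rate table standing in for the minute-by-minute feed from the bank's web page:

```python
# Sketch of the browser's currency transparency. The exchange rates are
# invented placeholders; a real browser would refresh them constantly
# from the user's bank.

RATES_TO_USD = {"USD": 1.0, "ITL": 0.00045, "GBP": 1.45}  # illustrative only

def display_price(amount, currency, preferred="USD"):
    """Convert a posted price into the viewer's preferred currency."""
    usd = amount * RATES_TO_USD[currency]
    return usd / RATES_TO_USD[preferred]

# The Italian seller posts 2,000,000 lira; the American buyer sees dollars.
print(round(display_price(2_000_000, "ITL"), 2))  # 900.0
```

The same function, run with a different `preferred` argument, shows the same page to a British buyer in pounds; neither buyer ever handles the other's currency.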
Currency transparency will be easiest online, where everything
filters through browsers anyway. One can imagine, with a little more
effort, realspace equivalents. An unobtrusive tag on my lapel gives
my preferred currency; an automated price label on the store shelf
reads my tag and displays the price accordingly. Alternatively, the
price is displayed by a dumb price tag, read by a smart video camera
set into the frame of my glasses, converted to my preferred currency
by my pocket computer, and written in the air by the heads-up display
generated by the eyeglass lenses–a technology we will return to
in a later chapter on virtual reality. As I write, the countries of
Europe are in the final stages of replacing their multiple national
currencies with the Euro. If the picture I have just painted turns
out to be correct, they may have finally achieved a common currency
just as it was becoming unnecessary.
We now have three possibilities for ecash. It might be private money
produced by multiple issuers but denominated in dollars or (less
probably) some other widely used national money. It might be private
money denominated in some common private standard of value–gold,
silver, or a commodity bundle. It might be private money denominated
in a variety of different standards, perhaps including both national
monies and commodities, with conversion handled transparently, so
that each individual sees a world where everyone is using his money.
And any of these forms of ecash might be produced by a government
rather than a private firm.
Will It Happen?
During World War II, George Orwell wrote a regular column for
an American magazine [check details]. After the war he wrote
a retrospective article discussing what he had gotten right and what
wrong. His conclusion was that he was generally right about the way
the world was moving, generally wrong about how fast it would get
there. He correctly saw the logical pattern but failed to allow for
the enormous inertia of human society.
Similarly here. David Chaum's articles, laying out the groundwork for
fully anonymous electronic money, were published in technical
journals in the 1980's and summarized in a 1992 article in
Scientific American. Ever since then various people, myself
among them, have been predicting the rise of ecash along the lines he
sketched. While pieces of his vision have become real in other
contexts, there is as yet nothing close to a fully anonymous ecash
available for general use. Chaum himself, in partnership with the
Mark Twain Bank of Saint Louis, attempted to get a semi-anonymous
ecash into circulation–one which permitted a party to a
transaction to be identified by joint action of the other party and
the bank. The effort failed and was abandoned.
[57]
One reason it has not happened is that online commerce has only very
recently become large enough to justify it. A second reason, I
suspect but cannot prove, is that national governments are unhappy
with the idea of a widely used money that they cannot control, and so
reluctant to permit (heavily regulated) private banks to create such
a money. A third and closely related reason is that a truly anonymous
ecash would eliminate a profitable form of law enforcement. There is
no practical way to enforce money laundering laws once it is possible
to move arbitrarily large amounts of money anywhere in the world,
untraceably, with the click of a mouse. A final reason is that ecash
is only useful to me if many other people are using it, which raises
a problem in getting it started.
These factors have slowed the introduction of ecash. I do not think
they will stop it. It only takes one country willing to permit it,
and one issuing institution in that country willing to issue it, to
bring ecash into existence. Once it exists, it will be politically
difficult for other countries to forbid their citizens from using it
and practically difficult, if it is forbidden, to enforce the ban.
There are a lot of countries in the world, even if we limit ourselves
to ones with sufficiently stable institutions so that people
elsewhere will trust their money. Hence my best guess is that some
version of one of the monies I have described in this chapter will
come into existence sometime in the next decade or so.
Chapter VIII: Contracts in Cyberspace
You hire someone to fix your roof, and (imprudently) pay him in
advance. Two weeks later, you call to ask when he is going to get the
job done. After three months of alternating promises and silence, you
sue him, probably in small claims court.
Suing someone is a nuisance, which is why you waited three months. In
cyberspace it will be even more of a nuisance. The law that applies
to a dispute depends, in a complicated way, on where the parties live
and where the events they are litigating over happened. A contract
made online has no geographical location, and the other party might
live anywhere in the world. Suing someone in another state is bad
enough; suing someone in another country is best left to
professionals–who do not come cheap. If, as I suggested in an
earlier chapter, the use of online encryption leads to a world of
strong privacy, where many people do business without revealing their
realspace identity, legal enforcement of contracts becomes not merely
difficult but impossible. There is no way to sue someone if you do
not know who he is.
Even in our ordinary, realspace lives, however, there is another
alternative, and one that is probably more important than litigation.
The reason that department stores make good on their "money back, no
questions asked" promises, and the reason the people who mow my lawn
keep doing it once a week even when I am out of town and so unable to
pay them, is not the court system. Customers are very unlikely to sue
a department store, however unreasonable its grounds for refusing to
take something back, and the people who mow my lawn are very unlikely
to sue me, even if I refuse to pay them for their last three weeks of
work.
What enforces the contract in both cases is reputation. The
department store wants to keep me as a customer, and won't if I
conclude that they are not to be trusted. Not only will they lose me,
they may well lose some of my friends, to whom I can be expected to
complain. The people who mow my lawn do a good job at a reasonable
price, such people are not easy to find, and I would be foolish to
offend them by refusing to pay for their work.
When we shift our transactions from the neighborhood to the internet,
legal enforcement becomes harder. Reputational enforcement, however,
becomes easier. The net provides a superb set of tools for collecting
and disseminating information–including information about who
can or cannot be trusted.
On an informal level, this happens routinely through both Usenet and
the Web. Some time back, I heard that my favorite palmtop–a
full featured computer, complete with keyboard, word processor,
spreadsheet, and much else, that fits in my pocket and runs more or
less forever on its rechargeable battery–was available at an
absurdly low price from a discount reseller, apparently because the
attempt to sell it in the U.S. market (the machine is made by a
British company and is much more common in Europe than here)
[58]
had failed, and the company that made that attempt was dumping its
stock of rebranded Psion Revos (aka Diamond Makos). I went on the
web, searched for the reseller, and in the process
[59]
discovered that it had been repeatedly accused of failing to live up
to its service guarantees and was currently in trouble with
authorities in several states. I didn't care about service
guarantees, so I bought the machine anyway as a present for a friend.
The same process works in a somewhat more organized fashion through
specialist web pages–MacIntouch for Macintosh users, the
Digital Camera Resource Page for consumers of digital cameras, and
many more.
For a different version of reputational enforcement online, consider
Ebay. Ebay does not sell goods, it sells the service of helping other
people sell goods, via an online auction system. That raises an
obvious problem. Sellers may be located anywhere and, at least for
the goods I have bid on, are quite likely to be located outside the
U.S. Most transactions are reasonably small, although not all, so
suing for failure to deliver, especially suing someone outside the
U.S. for failure to deliver, is rarely a practical option. With
millions of buyers and sellers, each individual buyer is not likely
to buy many things from any particular seller, so the seller need be
only mildly concerned about his reputation with that particular
buyer. So why don't all sellers simply take the money and run?
One reason is that Ebay provides extensive support for reputational
enforcement. Any time you win an Ebay auction, you have the option,
after taking delivery, of reporting your evaluation of the
transaction–whether the goods were as described and delivered
in good condition, and anything else you care to add. Any time you
bid on an Ebay auction, you have access to all past comments on the
seller, both in summary form and, if you are sufficiently interested,
in full. Successful Ebay sellers generally have a record of many
comments, very few of them negative.
There are, of course, ways that a sufficiently enterprising villain
could try to game the system. One would be by setting up a series of
bogus auctions, selling something under one name, buying it under
another, and giving himself a good review. Eventually he builds up a
string of glowing reviews, and uses them to sell a dozen non-existent
goods for high prices, payable in advance.
It's possible, but it isn't cheap. Ebay, after all, will be
collecting its cut of each of those bogus auctions. The nominal
buyers will require many different identities in order to keep the
trick from being obvious, which involves additional costs. Meanwhile
all the legitimate sellers have to do in order to build up their
reputation is honest business as usual. I am confident, on the basis
of no inside information at all, that at least one villain has done
it–but there don't seem to be enough to seriously discourage
people from using Ebay.
Alternatively, a dishonest seller could try to eliminate competitors
by buying goods from them under a false name then posting (false)
negative information about the transaction. That might be worth doing
in a market with only a few sellers–and for all I know it has
happened. But in the typical Ebay market, with many sellers as well
as many buyers, defaming one seller merely transfers the business to
another.
The Logic of Reputational Enforcement
The possibilities discussed above suggest that, while a
relatively informal sort of reputational enforcement may be adequate
for many purposes, it is worth thinking about systems that are harder
to cheat on. Before doing so, it is worth saying a little more about
just how reputational enforcement works.
When someone does something bad, is arrested or sued, and ends up
paying damages or in jail, a major objective of the process is to
punish him and so discourage people from doing bad things–although,
in the case of a lawsuit, a second objective is to collect a damage
payment with which to pay the plaintiff's attorney, with luck leaving
a little over for the plaintiff.
In the case of reputational enforcement, on the other hand,
punishment is not the objective, merely an indirect consequence. The
news that Charley bought an expensive suit jacket at the local
department store, his wife made him take it back, and they refused to
return his money, doesn't make me mad at the store–rather the
contrary. Ever since Charley told me what he really thought of my
latest book, I have regarded his misfortunes as good news not bad. As
word spread, more and more people stop shopping there–despite
the fact that Charley, who has the unfortunate habit of telling
people what he really thinks, has few friends. The reason we stop
shopping at that store is not to avenge him but to protect ourselves–we
too might someday buy something our wives disapproved of.
Reputational enforcement works by spreading true information about
bad behavior, information that makes it in the interest of some who
receive it to modify their actions in a way which imposes costs on
the person who has behaved badly.
As this example suggests, one variable determining how well
reputational enforcement works is how easily interested third parties
can get information about who cheated whom. To see this, suppose we
change the story a little, by making Charley not merely tactless but
routinely dishonest. Now when he complains that the store refused to
take the jacket back even though it was in good condition, we
conclude that his idea of good condition probably included multiple
ink stains and a missing sleeve, due to his wife's emphatic reaction
to how he had been wasting their money–we know her too–and
we continue patronizing the store.
One reason information costs to interested third parties are
important is that if they do not know who is at fault, they do not
know who to avoid future dealings with. A second and more subtle
reason is that if third parties cannot easily find out who is at
fault in a dispute they may never have the opportunity to try,
because the dispute may never become public. If I accuse you of
swindling me, you will of course deny it, and claim that I am using
the accusation to get out of my obligations under our mutual
agreement. Reasonable third parties, unable to check either side's
claims, conclude that at least one of us is a crook, they don't know
which, and it is therefore prudent to avoid both. Anticipating that
result, I decide not to make my accusation public in the first
place.
It follows that the way to make reputational enforcement work is by
setting up a contractual framework that makes it easy for interested
third parties to determine who is at fault. Such a framework exists,
and is extensively used, to settle intra-industry disputes in many
different industries. It is called arbitration.
You and I make an agreement and specify a private arbitrator to
settle any future disagreements over its terms. When a disagreement
occurs, one of us demands arbitration. The arbitrator gives a
verdict. If one of the parties refuses to abide by the verdict, the
arbitrator can make that fact public. An interested third party,
typically another firm in the same industry, does not have to know
the facts of the dispute to know who is at fault. All it has to know
is that both of us agreed to the arbitrator, and that the arbitrator
we agreed to says that one of us reneged on that agreement.
[60]
This works well within an industry because the people involved know
each other and are familiar with the industry's institutions for
settling disputes. It works less well in the context of disputes
between a firm and one of its many customers–because other
customers, unless they too are part of the industry, are unlikely to
know enough about the institutions to be confident who was cheating
whom. What about in cyberspace?
Very Close to Zero: Third Party Costs in Cyberspace
You and I agree to a contract online. The contract contains
the name of the arbitrator who will resolve any disputes and his
public key–the information necessary to check his digital
signature. We both digitally sign the contract and each keeps a
copy.
A dispute arises; you accuse me of failing to live up to my
agreement, and demand arbitration. The arbitrator rules for you and
instructs me to pay you five thousand dollars in damages. I refuse.
The arbitrator writes his account of how the case came out–he
awarded damages, I refused to pay them. He digitally signs it and
sends you a copy.
You now have a package–the original contract and the
arbitrator's verdict. My digital signature on the original contract
proves that I agreed to that arbitrator; his digital signature on the
verdict proves that I reneged on that agreement. That is all the
information that an interested third party needs in order to conclude
that I am not to be trusted. You put the package on a web page, with
my name all over it for the benefit of any search engines looking for
information about me, and email the URL to anyone you think might
want to do business with me in the future. Anyone interested can
check the facts–more precisely, his computer can check the
facts for him, by checking the digital signatures–in something
under a second. He knows that I reneged on my agreement. The most
likely explanation is that I am dishonest. An alternative explanation
is that I was fool enough to agree to a crooked arbitrator–but
he probably doesn't want to do business with fools either. The
technology of digital signatures thus makes it possible to reduce
information costs to third parties to something very close to zero,
and so makes possible effective reputational enforcement online,
via a competitive system of private courts–arbitrators.
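The package-checking protocol described above can be made concrete in a few lines. The signature scheme below is textbook RSA with deliberately tiny toy keys, used only to show the mechanics; a real system would rely on a vetted cryptographic library, and the contract text, damage figure, and arbitrator name are all invented.

```python
import hashlib

# Toy "textbook RSA" signatures, purely to illustrate the protocol.
# The key sizes here are laughably insecure; do not imitate.

def toy_keys(p, q, e, d):
    # e*d must equal 1 mod (p-1)(q-1); these values are precomputed.
    return (p * q, e), (p * q, d)      # (public key, private key)

def digest(text, n):
    return int.from_bytes(hashlib.sha256(text.encode()).digest(), "big") % n

def sign(text, priv):
    n, d = priv
    return pow(digest(text, n), d, n)

def verify(text, sig, pub):
    n, e = pub
    return pow(sig, e, n) == digest(text, n)

me_pub, me_priv = toy_keys(61, 53, 17, 2753)
arb_pub, arb_priv = toy_keys(101, 103, 7, 8743)

contract = "I agree to pay on delivery; disputes go to Arbitrator X."
verdict = "Arbitrator X: defendant owes $5000 and has refused to pay."

package = {
    "contract": contract, "contract_sig": sign(contract, me_priv),
    "verdict": verdict, "verdict_sig": sign(verdict, arb_priv),
}

# An interested third party checks both signatures in well under a second:
ok = (verify(package["contract"], package["contract_sig"], me_pub)
      and verify(package["verdict"], package["verdict_sig"], arb_pub))
print("renege proven:", ok)  # renege proven: True
```

The third party needs nothing beyond the package itself and the two public keys, which is exactly why information costs fall so close to zero.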
Private enforcement of contracts along these lines solves the
problems raised by the fact that cyberspace spans many geographical
jurisdictions. The relevant law is defined not by the jurisdiction
but by the private arbitrator chosen by the parties. Over time, we
would expect one or more bodies of legal rules with regard to contracts
to develop, as common law historically did develop, with many
different arbitrators or arbitration firms adopting the same or
similar legal rules.
[61]
Contracting parties could then choose arbitrators on the basis of
reputation.
For small scale transactions, you simply provide your browser with a
list of acceptable arbitration firms; when you contract with another
party, the software picks an arbitrator from the intersection of the
two lists. If there exists no arbitrator acceptable to both parties,
the software notifies both of you of the problem and you take it from
there.
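The browser-side matching step is a simple set intersection. A minimal sketch, with invented arbitrator names:

```python
# Sketch of the browser's arbitrator match. All firm names are invented.

def pick_arbitrator(mine, theirs):
    """Return the first mutually acceptable arbitrator, or None
    to hand the problem back to the humans."""
    common = [a for a in mine if a in theirs]   # preserves my preference order
    return common[0] if common else None

mine = ["CyberCourt", "NetArb", "LexMachina"]
theirs = ["ArbCo", "NetArb"]
print(pick_arbitrator(mine, theirs))  # NetArb
```

Iterating over my list first means that, among mutually acceptable arbitrators, my higher preference wins; a fairer version might randomize between the two parties' rankings.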
Private enforcement also solves the problem of enforcing contracts
when at least one of the parties is, and wishes to remain, anonymous.
Digital signatures make it possible to combine anonymity with
reputation. A computer programmer living in Russia or Iraq, where
anonymity is the only way of protecting his income from private or
public bandits, and selling his services online, has an online
identity defined by his public key; any message signed by that public
key is from him. That identity has a reputation, developed through
past online transactions; the more times the programmer has
demonstrated himself to be honest and competent, the more willing
people will be to employ him. The reputation is valuable, so the
programmer has an incentive to maintain it–by keeping his
contracts.
[62]
The Reputation Market
"(On Earth they) even have laws for private matters such as
contracts. Really. If a man's word isn't any good, who would contract
with him? Doesn't he have reputation?"
(Manny in The Moon is a Harsh
Mistress by Robert Heinlein)
There is at least one way in which the online world I
have been describing makes contract enforcement harder than in the
real world. In the real world, my identity is tied to a particular
physical body, identifiable by face, fingerprints, and the like. I
do not have the option, after destroying my realspace reputation for
honesty, of spinning off a new me, complete with new face, new
fingerprints, and an unblemished reputation.
Online I do have that option. As long as other people are willing to
deal with cyberspace personae not linked to realspace identities, I
always have the option of rolling up a new public key/private key
pair and going online with a new identity and a clean reputation.
It follows that reputational enforcement will only work for people
who have reputations–sufficient reputational capital so that
abandoning the current online persona and its reputation is costly
enough to outweigh the gain from a single act of cheating. Hence
someone who wants to deal anonymously in a trust intensive industry
may have to start small, building up his reputation to the point
where its value is sufficient to make it rational to trust him with
larger transactions. Presumably the same thing happens in the diamond
industry today, where enforcement is primarily through reputational
mechanisms.
[63]
The problem of spinning off new identities is not limited to
cyberspace. Real persons in realspace have fingerprints but legal
persons may not. The realspace equivalent of rolling up a new pair of
keys is filing a new set of incorporation papers. Marble facing for
bank buildings and expensive advertising campaigns can be seen as
ways in which a new firm posts a reputational bond in order to
persuade those who deal with it that they can trust it to act in a
way that will preserve its reputation.
[64]
Cyberspace personae do not have the option of marble, at least if
they want to remain anonymous, but they do have the option of
investing either in a long series of transactions or in other costly
activities, such as advertising or well-publicized charity, in order
to establish a reputation that will bond their future
performance.
What about entities–firms or individuals–that are not
engaged in long term dealings and so neither have a valuable
reputation nor are willing to pay to acquire one? How are they to
guarantee contractual performance in this world?
One solution is to piggyback on the reputation of another entity that
is engaged in such dealings. Suppose I am an anonymous online persona
forming a contract which it might later be in my interest to break.
How, absent a reputation, do I persuade the other party that I will
keep my word? What is to keep me from making the contract, agreeing
to an arbitrator, breaking the contract, ignoring the arbitrator's
verdict, and walking off with my gains, unconcerned by the damage to
my nonexistent reputation?
I solve the problem by offering to post a performance bond with the
arbitrator—in anonymous digital currency. The arbitrator is
free to allocate all or part of the bond to the other party as
damages for breach.
[65]
This approach–taking advantage of a third party with reputation–is
not purely hypothetical. Purchasers on Ebay at present can supplement
direct reputational enforcement with the services of an escrow agent–a
trusted third party that holds the buyer's payment until the goods
have been inspected and then releases it to the seller.
This approach still depends on reputational enforcement, but this
time the reputation belongs to the arbitrator. With all parties
anonymous, he could simply steal bonds posted with him–but if
he does, he is unlikely to stay in business very long. If I am
worried about such possibilities, I can require the arbitrator to
sign a contract specifying a second and independent arbitrator to
deal with any conflicts between me and the first arbitrator. My
signature to that agreement is worth very little, since it is backed
by no reputation—but the signature of the first arbitrator to a
contract binding him to accept the judgment of the second arbitrator
is backed by the first arbitrator’s reputation.
Conclusion
If the arguments I have offered are correct, we can expect
the rise of online commerce to produce a substantial shift towards
private law privately enforced via reputational mechanisms. While the
shift should be strongest in cyberspace, it ought to be echoed in
realspace as well. Digital signatures lower information costs to
interested third parties whether the transactions being contracted
over are occurring online or not. And the existence of a body of
trusted online arbitrators will make contracting in advance for
private arbitration more familiar and reliance on private arbitration
easier for realspace as well as cyberspace transactions.
[Should I add a discussion of smart contracts, contracts by
intelligent agents, and the like?]
Chapter IX: Watermarks and Barbed Wire
Authors expect to be paid for their work. So do programmers,
musicians, film directors, and lots of other people. If they cannot
be paid for their work, we are likely to have fewer books, movies,
songs, programs. This may be a problem if what is produced can be
inexpensively reproduced. Once it is out there, anyone who has a copy
can make a copy, driving the price of copies down to the cost of
reproducing them.
One popular solution is for the creator of a work to be able to
control the making of copies. Copyright law gives him a legal right
to do so. How well that solves the problem depends on how easily that
right can be enforced.
Copyright in Digital Media
"The rumors of my death have been greatly
exaggerated."
Mark Twain–perhaps also copyright
To enforce copyright law, the owner of the copyright has
to be able to discover illegal copying and take legal action against
those responsible. How easy that is to do depends in large part on
the technology of copying.
Consider the old fashioned printing press, c. 1910. It was large and
expensive; printing a book required first setting hundreds of pages
of type by hand. That made it much less expensive to print ten
thousand copies of a book on one press than a hundred copies each on
a hundred different presses. Since nobody wanted ten thousand copies
of a book for himself, a producer had to find customers–lots of
customers. Advertising the book, or offering it for sale in
bookstores, brought it to the attention of the copyright owner. If he
had not authorized the copying, he could locate the pirate and
sue.
Enforcement becomes much harder if copying is practical on a scale of
one or a few copies–the current situation for digital works
such as computer programs, digitized music, or films on DVD.
Individuals making a copy for themselves or a few copies for friends
are much harder to locate than mass market copiers. Even if you can
locate them all, it is harder to sue ten thousand defendants than
one. Hence, as a practical matter, firms mostly limit the enforcement
of their copyright to legal action against large scale
infringers.
The situation is not entirely hopeless from the standpoint of the
copyright holder. If the product is a piece of software widely used
in business–Microsoft Word, for example–there will be
organizations that use, not one copy, but thousands. If they choose
to buy one and produce the rest themselves, someone may notice–and
sue.
Furthermore, even if copying can be done on a small scale, there
remains the problem of distribution. If I get programs or songs by
illegally copying them from my friends I am limited to what my
friends have, which may not include what I want. I may prefer to buy
from distributors providing a wide range of alternatives–and
they, being potential targets for infringement suits, have an
incentive to buy what they sell legally rather than produce it
themselves illegally. Hence, even in a world where many expensive
works in digital form–Word, for example–can easily be
copied, the producers of such works can still use copyright law to
get paid for at least some of what they produce.
Or perhaps not. As has now been demonstrated with MP3's,
[66]
distribution over the internet makes it possible to combine
individual copying with mass market distribution, using specially
designed search tools to find the friendly individual who happens to
have the particular song you want and is willing to let you copy it.
A centralized distribution system is vulnerable to legal attack, as
Napster discovered. But a decentralized system in which individuals
on the net make their music collection available for download in
exchange for the ability to download songs from other people's
collections, is a more serious problem. If each user is copying one
of your songs once, but there are a hundred thousand of them, can you
sue them all?
Perhaps you can–if you take proper advantage of the technology.
A decentralized system must provide some way of finding someone with
the song you want who is willing to share it. Copyright owners might
use the same software to locate individuals who make their works
available for copying–and sue all of them, perhaps in a suit
that joins many defendants. Since copyright law sets a $750 statutory
minimum for damages for an act of infringement, suing ten thousand
individuals each of whom has made one copy of your copyrighted work
could, in principle, bring in more money than suing one individual
who had made ten thousand copies.
So far as I know, it has not yet been tried. Currently [check
this], it is hard to force multiple defendants into a single suit–but
one could imagine modifications in the relevant legal rules, perhaps
applicable only to copyright suits, that would change that
situation.
While that approach might work for a while, its long run problems
should be clear from the earlier discussion of strong privacy. A well
designed decentralized system would make it possible to locate
someone willing to let you copy a song but not to identify him. You
do not need his name, his face, or his social security number in
order to copy the file encoding the song you want, merely some way of
getting messages to and from him.
[67]
There remains, for some sorts of intellectual property, the option of
collecting royalties from firms who use a lot of it–corporations
that use Word, movie theaters performing movies. In the longer run,
even that option may shrink or vanish. In a world where strong
privacy is sufficiently universal we may end up with virtual firms–groups
of individuals linked via the net but geographically dispersed and
mutually anonymous. Even if all of them use pirated copies of Word–or
whatever the equivalent is at that point–no whistle blower can
report them because nobody, inside or outside the firm, knows who
they are.
Digital Watermarks
Consider the same problem in a different context–images
on the world wide web. Such images originated somewhere and may well
belong to someone. But once posted, anyone can copy them. Not only is
it hard for the copyright owner to prevent illegal copying, it may be
hard even for the copier to avoid illegal copying, since he may not
know who the image belongs to or whether it has been put in the
public domain.
An increasingly popular way of dealing with these problems is digital
watermarking. Using special software, the creator of the image embeds
in it concealed information, identifying him and claiming copyright.
In a well designed system, the information has no noticeable effect
on how the image looks to the human eye and is robust against
transformation–meaning that it is still there after a user has
converted the image from one format to another, cropped it, edited
it, perhaps even printed it out and scanned it back in.
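The embedding idea can be shown with the simplest possible scheme: hiding message bits in the least significant bits of pixel values. To be clear about the assumption, this toy version is *not* robust in the sense just described; real watermarks spread the mark redundantly across the image so it survives cropping and reformatting, which this sketch does not attempt.

```python
# A deliberately simple least-significant-bit watermark on a grayscale
# "image" (a list of 0-255 pixel values). Illustrative only: real
# watermarking schemes are far more robust than this.

def embed(pixels, message):
    bits = [(byte >> i) & 1 for byte in message.encode() for i in range(8)]
    assert len(bits) <= len(pixels), "image too small for message"
    out = pixels[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit       # overwrite the lowest bit only
    return out

def extract(pixels, length):
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return data.decode()

image = list(range(200))                   # a fake 200-pixel image
marked = embed(image, "(c) DDF")
print(extract(marked, 7))                  # (c) DDF
# No pixel changed by more than one brightness step, so to the human
# eye the marked image is indistinguishable from the original:
print(max(abs(a - b) for a, b in zip(image, marked)))  # 1
```

Because only the lowest bit of each pixel is touched, the visible image is unchanged; because the mark lives in those fragile bits, even resaving the image in a lossy format would destroy it, which is exactly the weakness serious watermarking systems are designed to overcome.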
Digital watermarking can be used in a number of different ways. The
simplest is by embedding information in an image and making the
software necessary to read the information widely available. That
lowers the cost to users of avoiding infringement, by making it easy
for them to discover that an image is under copyright and who the
copyright owner is. It raises the cost of committing infringement, at
least on the web, since search engines can be (and are) designed to
search through the web looking for copyrighted images and report what
they find to the copyright owner. He can then check to see if the use
of his image was licensed, and if not take legal action. The
existence of the watermark will help him prove both that the image is
his and that the user knew or should have known it was his, hence is
liable for not only infringement but deliberate infringement.
A deliberate infringer, however, has the option of trying to remove
the watermark while preserving the image. A well designed system can
make this more difficult. But as long as the watermark is observable,
the infringer has the option of trying ways of removing it until he
finds one that works. And the fact that the software for reading the
watermark must be publicly available makes it harder to keep secret
the details of how the watermark works, hence easier to design
software to remove it. So this form of watermark provides protection
against inadvertent infringement, raises the cost of deliberate
infringement–the infringer must go to some trouble to remove
the watermark–but cannot prevent or reliably detect deliberate
infringement.
The obvious solution is an invisible watermark–one designed to
be read only by special software not publicly available. That is of
no use for preventing inadvertent infringement but substantially
raises the risks of deliberate infringement, since the infringer can
never be sure he has successfully removed the watermark. By
imprinting an image with both a visible and an invisible watermark,
the copyright holder can get the best of both worlds–provide
information to those who do not want to infringe and a risk of
detection and successful legal action to those who do.
There is another way in which watermarking could be used to enforce
copyright, in a somewhat different context. Suppose we are
considering, not digital images, but computer programs. Further
suppose that enforcing copyright law against the sellers of pirated
software is not an option–they are located outside of the
jurisdiction of our court system, doing business anonymously, or
both.
Even if the sellers of pirated copies of our software are anonymous,
the people who originally bought the software from us are not. When
we sell the program, each copy has embedded in it a unique watermark–a
concealed serial number, sometimes referred to as a digital
fingerprint. We keep a record of who got each copy and make it clear
to our customers that permitting the program to be copied is itself a
violation of copyright law for which we will hold them liable. If
copies of our software appear on pirate archives we buy one, check
the fingerprint, and sue the customer from whose copy it was
made.
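The record-keeping side of that scheme is simple enough to sketch; the embedding of the serial number itself would use watermarking of the sort just described. Customer names and product details here are invented for illustration.

```python
# Sketch of fingerprinted sales: each copy sold carries a unique
# serial number, and the seller records which customer received it.
# A pirated copy, being bit-for-bit identical to some original,
# carries that original's serial number with it.

import itertools

class FingerprintedSeller:
    def __init__(self):
        self._serials = itertools.count(1)
        self._registry = {}            # serial number -> customer

    def sell(self, customer):
        """Issue a copy stamped with a fresh serial; record the buyer."""
        serial = next(self._serials)
        self._registry[serial] = customer
        return {"product": "our program", "serial": serial}

    def trace(self, found_copy):
        """Given a copy found on a pirate archive, name the customer
        whose original it was made from."""
        return self._registry.get(found_copy["serial"], "unknown")

seller = FingerprintedSeller()
copy_a = seller.sell("Alice")
copy_b = seller.sell("Bob")
pirated = dict(copy_b)                 # copying preserves the fingerprint
```

Buy one copy from the pirate archive, call `trace` on it, and you know whom to sue.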
Thus digital watermarking provides one example of a new technology
that can be used to get back at least some of what other new
technologies took away. The ease of copying digital media made
enforcement of copyright harder–at first glance, impossibly
hard–by enabling piracy at the individual level. But the
ability of digital technologies to embed invisible, and potentially
undetectable, information in digital images, combined with the
ability of a search engine to check a billion web pages looking for
the one that contains an unlicensed copy of a watermarked image,
provide the possibility of enforcing copyright law against individual
pirates. And the same technology, by embedding the purchaser's
fingerprint in the purchased software, provides a potential way of
enforcing copyright law even in a world of strong privacy–not
against anonymous pirates or their anonymous customers but against
the known purchaser from whom they got the original to copy.
While these are possible solutions, there is no guarantee that they
will always work. Invisible watermarking is vulnerable to anyone
sufficiently ingenious–or with sufficient inside information–to
crack the code, to figure out how to read the watermark and remove
it. The file representing the image or program, after all, is in the
pirate's hands. He can do what he wants with it–provided he can
figure out what needs to be done.
The individual who pirates images or software is unlikely to have the
expertise to figure out how to remove even visible watermarks, let
alone invisible ones. To do so he needs the assistance of someone
else who does have that expertise–most readily provided in the
form of software designed to remove visible watermarks and identify
and remove invisible ones. That raises the possibility of
backstopping the technological solution of digital watermarks with
legal prohibitions on the production and distribution of software
intended to defeat it. That is precisely the approach used by the
recent–and highly controversial–Digital Millennium
Copyright Act.
[68]
It bans software whose purpose is to defeat copyright management
schemes such as digital watermarking. How enforceable that ban will
be, in a world of networks and widely available encryption, remains
to be seen.
The approaches to enforcing copyright that I have been discussing
have serious limitations. The use of digital fingerprints to identify
the source of pirated copies only works if the original sale is
sufficiently individualized so that the seller knows the identity of
the buyer–and while it would be possible to sell all software
that way, it would be a nuisance. Perhaps more important, the
approach works very poorly for software that is expensive and widely
used. One legitimate copy of Word could be the basis for ten million
illegitimate copies, giving rise to a claim for a billion dollars or
so in damages–and if Microsoft limits its sales to customers
capable of satisfying such a claim it will not sell very many copies
of Word. The use of digital watermarks to identify pirated copies
only works if the copies are publicly displayed–for digital
images on the web but not for a pirated copy of Word on my hard
drive. These limitations suggest that producers of intellectual
property have good reason to look for other ways of protecting
it.
One could solve these problems by making my hard drive public–by
converting cyberspace, at least the parts of it residing on hardware
under the jurisdiction of U.S. courts, into a transparent society. My
computer is both a location in cyberspace and a physical object in
realspace; in the latter form it can be regulated by a realspace
government, however good my encryption is. One can imagine, in a
world run by copyright owners, a legal regime that required all
computers to be networked and all networked computers to be open to
authorized search engines, designed to go through their hard drives
looking for pirated software, songs, movies, or digital images.
I do not think such a legal regime will be a politically viable
option in the U.S. anytime in the near future, although the situation might
be different elsewhere. There are, however, private versions that
might be more viable, technologies permitting the creator of
intellectual property to make it impossible to use it save on
computers that meet certain conditions–one of which could be
transparency to authorized agents of the copyright holder.
Digital Barbed Wire
If using technology to enforce copyright law in a world of
easy copying is not always workable, perhaps we should instead use
technology to replace copyright law. If using the law to keep
trespassers and stray cattle off my land doesn't work, perhaps I
should build a fence.
You have produced a collection of songs and wish to sell them online.
To do so, you digitize the songs and insert them in a
cryptographically protected container–what Intertrust, one of
the pioneering firms in the industry, calls a digibox.
[69]
The container is a piece of software that protects the contents from
unauthorized access while at the same time arranging, providing, and
charging for authorized access. Once the songs are safely inside the
box you give away the package by making it available for download on
your web site.
I download the package to my computer; when I run it I get a menu of
choices. If I want to listen to a song once, I can do so for free.
Thereafter, each play costs five cents. If I really like the song,
fifty cents unlocks it forever, letting me listen to it as many times
as I want. Payment is online by ecash, credit card, or an arrangement
with a cooperating bank.
The digibox is a file on my hard disk, so I can copy it for a friend.
That's fine with you. If he wants to listen to one of your songs more
than once, he too will have to pay for it.
It may have occurred to you that there is a flaw in the business plan
I have just described. The container provides one free play of each
song. In order to listen for free, all the customer has to do is make
lots of copies of the container and use each once.
Making a new copy every time you play a song is a lot of trouble to
go to in order to save five cents. Intertrust does not have
to make it impossible to defeat its protection, whether in that
simple way or in more complicated ways, in order for it and the
owners of the intellectual property it protects to make money. It
only has to make defeating the protection more trouble than it is
worth.
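The charging rules of the container described above can be sketched in a few lines. This models only the pricing logic, with the prices from the example; the real container is cryptographically protected, which this sketch makes no attempt to be.

```python
# Sketch of the digibox menu of choices: first play of each song
# free, five cents a play thereafter, fifty cents to unlock a song
# forever. Only the charging rules are modeled here.

class DigiBox:
    PLAY_PRICE = 0.05
    UNLOCK_PRICE = 0.50

    def __init__(self, songs):
        self.plays = {s: 0 for s in songs}
        self.unlocked = set()
        self.charged = 0.0

    def play(self, song):
        """Play a song, charging according to the menu."""
        if song not in self.unlocked and self.plays[song] > 0:
            self.charged += self.PLAY_PRICE   # not the free first play
        self.plays[song] += 1

    def unlock(self, song):
        """Pay once, listen as many times as you want."""
        self.charged += self.UNLOCK_PRICE
        self.unlocked.add(song)

box = DigiBox(["Song A", "Song B"])
box.play("Song A")        # free first play
box.play("Song A")        # five cents
box.unlock("Song B")      # fifty cents
box.play("Song B")        # free thereafter
box.play("Song B")
```

The flaw in the business plan is visible in the code: nothing stops a customer from constructing a fresh `DigiBox` for every play. The seller's protection is only that doing so is more trouble than a nickel is worth.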
As in the case of digital watermarking, how easy it is to defeat the
protection depends very largely on who is doing it. The individual
customer is unlikely to be an expert in programming or encryption,
hence unlikely to be able to defeat even simple forms of
technological protection. The risk comes from the person who is an
expert and makes his expertise available, cheaply or for free, in the
form of software designed to crack the protection.
One approach to dealing with that problem is by making it illegal to
create, distribute, or possess such software–the approach put
into law by the Digital Millennium Copyright Act on behalf of owners
of intellectual property. That law currently faces legal challenges
by plaintiffs who argue that publishing information, including
information about how to defeat other people's software, is free
speech, hence protected. Even if the court declines to protect that
particular sort of speech, the arguments of an earlier chapter
suggest that in the online world free speech may itself be
technologically protected–by the wide availability of
encryption and computer networks–hence the relevant parts of
the DMCA may in the long run prove unenforceable.
If law cannot provide protection, either against piracy or against
computerized safecracking tools designed to defeat technological
protection, the obvious alternative is technological–safes that
cannot be cracked. Is that possible?
For some forms of intellectual property, of which songs are an
example, it is not. However good the software protecting the contents
of the digibox, at some point in the process the customer gets to
play the song–that, after all, is what he is paying for. But if
a customer is playing a song on his own computer in his own home, he
can also be playing it into his own tape recorder–at which
point he has a copy of the song outside the box. If he prefers an MP3
to a cassette he can play the song back to the computer, digitize it,
and compress it. If he wants to preserve audio quality, he can short
circuit the process, feeding the electrical signals from his computer
to his speakers back into the computer to be redigitized and
recompressed. A similar approach could be used to hijack a book,
video or any other work that is presented to the customer in full
when he uses it. Technological protection may make the process of
getting the work out of the digibox a considerable nuisance–but
once one person has done it, in a world where copyright law is
difficult or impossible to enforce, the work is available to all.
I do not see any solution for works of this kind short of limiting
their consumption to a controlled environment–showing the video
in a movie theater with video cameras banned–or making
everybody's hard disk searchable.
[70]
There are, however, other works for which secure protection may be a
more serious option. Consider a large data base, containing
information that can be used to answer millions or billions of
different queries. The owner distributes it inside a digibox which
charges by the query.
Suppose the producer is Consumer Reports, the database is information
on automobiles, and its function is to advise the user as to what car
he should buy. His query describes price range, preferences, and a
variety of other relevant information. The answer is a report
tailored to that particular customer.
Having received the report, he can copy it and give it to his
neighbor. But his neighbor is very unlikely to want it, since he is
unlikely to have all the same tastes, circumstances, and constraints.
What the neighbor wants is his own customized report–which
requires that he make his own payment.
Given enough time, energy, and money, a pirate could ask a million
questions and use the answers to reverse engineer the protected data–but
why should he? The pirate can give away what he steals, he can use it
himself, but he has only a very limited ability to sell it. As long
as the protection can raise the cost of reconstructing the database
high enough, it should be reasonably safe.
[71]
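The reason the neighbor's copy is worthless can be made concrete with a sketch. The container answers each customer's individualized query and charges for doing so; the cars, prices, and query fee below are all invented for illustration.

```python
# Sketch of a database sold inside a metered container: each
# customer's report is tailored to his own question, so a copied
# report is of little use to a neighbor with different tastes.

QUERY_PRICE = 0.25                      # invented per-query charge

CARS = [
    {"model": "Frugalmobile", "price": 9000,  "seats": 4},
    {"model": "Roadyacht",    "price": 42000, "seats": 6},
    {"model": "Cityflea",     "price": 7500,  "seats": 2},
]

class MeteredDatabase:
    def __init__(self, records):
        self._records = records         # hidden inside the container
        self.revenue = 0.0

    def advise(self, max_price, min_seats):
        """Answer one customer's query, charging for the privilege."""
        self.revenue += QUERY_PRICE
        return [r["model"] for r in self._records
                if r["price"] <= max_price and r["seats"] >= min_seats]

db = MeteredDatabase(CARS)
report_a = db.advise(max_price=10000, min_seats=4)   # one customer's report
report_b = db.advise(max_price=50000, min_seats=6)   # his neighbor's differs
```

A pirate determined to reconstruct the underlying records must pay the query fee once for every question he asks; with millions of possible queries, the protection holds so long as that bill exceeds what the data is worth to him.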
For a different approach to the problem, consider the case of
computer software. I have a program that does something very valuable–speech
recognition, say. I divide it into two parts. One, which contains
most of the code and does most of the work, I give away to anyone who
wants it. The rest, including the key elements that make my program
special, resides on my server. In order for the first part to work,
it must continually exchange messages with the second part–access
to which I charge for, by the minute.
What is particularly elegant about this solution is that the disease
is also the cure. Part of what makes copyright unenforceable in the
near future world we are considering is the ready availability of
high speed computer networks, enabling the easy distribution of
pirated software. But high speed computer networks are precisely what
you need in order to make the form of protection I have just
described work, since they allow me to make software on my server
almost as accessible to you as software on your hard disk–at a
price.
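The division of labor can be sketched as follows. The speech recognizer and its pricing are invented stand-ins; the point is only the split between a freely copied client and a metered, essential server, here simulated in one process rather than over a real network.

```python
# Sketch of splitting a program in two: the bulk of the code is
# given away, but the key step lives on the seller's server and is
# billed by use. Copying the free half gains the pirate nothing,
# since it is useless without the metered half.

class KeyComponentServer:
    """The part I keep: small, essential, and metered."""
    RATE_PER_CALL = 0.01               # invented price

    def __init__(self):
        self.billed = 0.0

    def decode(self, features):
        self.billed += self.RATE_PER_CALL
        # Stand-in for the hard, proprietary part of the work.
        return " ".join(features).title()

class FreeClient:
    """The part I give away: does most of the work, useless alone."""

    def __init__(self, server):
        self.server = server

    def recognize(self, audio):
        features = audio.lower().split()      # the bulk of the processing
        return self.server.decode(features)   # the metered, essential call

server = KeyComponentServer()
client = FreeClient(server)
text = client.recognize("HELLO WORLD")
```
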
Adding it all Up
Putting together everything in this chapter, we have a
picture of intellectual property protection in a near future world of
widely available high speed networks, encryption, and easy copying.
Intellectual property used publicly, such as images on the web, can
be legally protected provided it is not valuable enough to make it
worth going to a lot of trouble to remove hidden watermarks and
provided also that it is being used somewhere that copyright law can
reach. That second proviso means that if we move all the way to a
world of strong privacy all bets are off, since copyright law is
useless if you cannot identify the infringer. Even in that world,
some intellectual property can be protected by fingerprinting each
original and holding the purchaser liable for any copies made from
it.
Where copyright law cannot be enforced, intellectual property may
still be protected technologically. That approach is of limited
usefulness for works that must be entirely revealed every time they
are accessed, such as a song. It may work considerably better for
more complicated works, such as a database or a computer program. For
both sorts of works, protection will be easier if it is possible to
use the law to suppress software designed to defeat it–but it
probably won't be.
Does this mean that, in the near future, songs will stop being sung
and novels stop being written? That is not likely. What it does mean
is that those who produce that sort of intellectual property will
have to find ways of getting paid that do not depend on control over
copying. For songs, one obvious possibility is to give away the
digitized version and charge for concerts. Another is to rely on the
generosity of fans–in a world where it will be easy to email a
ten cent appreciation to the creator of the song you have just
enjoyed. A third is to give away the song along with a digitally
signed appreciation for the firm that paid you to write it–as a
way of buying your fans' goodwill.
Similar options are available for authors. The usual royalty payment
for a book is between five and ten percent of its face value. A lot
of readers may be willing to pay the author that much voluntarily in a
world where the physical distribution of books is essentially
costless. Other books will get written in the same way that articles
in academic journals are written now–to spread the author's
ideas or to build up a reputation that can be used to get a job, or
consulting contracts, or speaking opportunities.
And For Our Next Trick
Several chapters back I raised the possibility of treating
transactional information as private property, with ownership
allocated by agreement at the time of the transaction. Such
information is a form of intellectual property and can be protected
by the same sorts of technology we have just discussed.
Suppose, for example, that you are perfectly happy to receive
catalogs in the mail (or email) but do not want to make it possible
for a stranger to compile enough information about you to enable
identity theft, spot you as a target for extortion, or in other ways
use your personal information against you. You therefore want the
personal information generated by your transactions–purchases,
employment, car rental, and the like–to be available only in a
very special sort of database. The database allows users to create
address lists of people who are likely customers for what they are
selling but does not allow them to get individualized data about
those people. It will be distributed inside a suitably designed and
cryptographically protected container or on a protected server,
designed to answer queries but not to reveal the underlying data.
The information is created by your transactions. In the highest tech
version, you conduct all of them anonymously, hence, from the start,
nobody but you has the information. In a lower tech version, both you
and the seller start with the information–the fact that he sold
you something–but he is contractually obliged to erase the
record once the transaction is complete.
[72]
In either version, you arrange for the information to be available
only within the sort of protected database I have just described–and,
if access to such a database is sufficiently valuable, you get paid
for doing so.
X: Reactionary Progress–Amateur Scholars and Open Source
A list of the half dozen most important figures in the early
history of economics would have to include David Ricardo; it might
well include Thomas Malthus and John Stuart Mill. A similar list for
geology would include William Smith and James Hutton. For biology it
would surely include Charles Darwin and Gregor Mendel, for physics
Isaac Newton.
Who were they? Malthus and Darwin were clergymen, Mendel a monk,
Smith a mining engineer, Hutton a gentleman farmer, Mill a clerk and
writer, Ricardo a retired stock market prodigy. Of the names I have
listed, only Newton was a university professor–and by the time
he became a professor he had already come up with both calculus and
the theory of gravitation.
There were important intellectual figures in the seventeenth,
eighteenth, and early nineteenth centuries who were
professional academics–Adam Smith, for example. But a large
number, probably a majority, were amateurs. In the twentieth century,
on the other hand, most of the major figures in all branches of
scholarship were professional academics. Most of those, although by
no means all, started their careers with a conventional course of
university education, typically leading to a PhD degree.
Why did things change? One possible answer is the enormous increase
in knowledge. When fields were new, scholars did not need access to
vast libraries.
[73]
There were not many people in the field, the rate of progress was not
very rapid, so letters and occasional meetings provided adequate
communication. As fields developed and specialization increased, the
advantages of the professional–libraries, laboratories,
colleagues down the hall–became increasingly important.
Email is as easy as walking down the hall. The web, while not a
complete substitute for a library, makes enormous amounts of
information readily available to a very large number of people. In my
field, at least, it is becoming common for the authors of scholarly
articles to make their datasets available on the web, so that other
scholars can check that they really say what the article claims they
say. Once there, they are available to anyone.
An alternative explanation of the shift from amateur to professional
scholarship is that it was due to the downward spread of education.
In the eighteenth century, someone sufficiently educated to
invent a new science was likely to be a member of the upper class,
hence had a good chance of not needing to work for a living. In the
twentieth century, the correlation between education and wealth is a
good deal weaker.
We are not likely to return to the class society of eighteenth
century England. But by the standards of that society, most of us are
rich–rich enough to make a tolerable living and still have time
and effort left to devote to our hobbies. For a large and increasing
fraction of the population, amateur scholarship, like amateur sports,
amateur music, amateur dramatics, and much else, is an increasingly
real option.
These arguments suggest the possibility that, having shifted once
from a world of amateur scholars to a world of professionals, we may
now be shifting back. That conjecture is based in large part on my
own experiences. Two examples:
Robin Hanson is currently a professor of economics. When I first came
into (virtual) contact with him, he was a NASA scientist with an odd
hobby. His hobby was inventing institutions. His ideas–in
particular an ingenious proposal to design markets to generate
information
[74]–were
sufficiently novel and well thought out so that I found corresponding
with him more interesting than corresponding with most of my fellow
economists. They were sufficiently interesting to other people to get
published. Eventually he decided that his hobby was more fun than his
profession and went back to school for a PhD in economics.
One of my hobbies for the past thirty years has been cooking from
very early cookbooks; my earliest source is a letter written in the
sixth century by a Byzantine physician named Anthimus to Theoderic,
king of the Franks. When I started, one had to pretty much reinvent
the wheel. There were no published translations of early cookbooks in
print and almost none out of print. The only available sources in
English, other than a small number of unreliable books about the
history of cooking, were a few early English cookbooks–in
particular a collection that had been published by the Early English
Text Society in 1890 (check exact date). I managed to get one
seventeenth century source by finding a rare book collection that had
a copy of the original and paying to have it microfilmed.
The situation has changed enormously over the past thirty years. The
changes include the publication of several reliable secondary
sources, additional English sources, and a few translations–all
of which could have happened without the internet. But the biggest
change is that there are now ten or fifteen translations [check
numbers] of early cookbooks on the web, freely available to
anyone interested. Most were done by amateurs for the fun of it.
There are hundreds of worked out early recipes (the originals usually
omit irrelevant details such as quantities, times and temperature)
webbed. There is an email list that puts anyone interested in touch
with lots of experienced enthusiasts. Some of the people on that list
are professional cooks, some are professional scholars. So far as I
know, none is a professional scholar of cooking history.
Similar things appear to be happening in other areas. I am told that
amateur astronomers have long played a significant role–because
skilled labor is an important input to star watching. There seems to
be an increasing interaction between historians and groups that do
amateur historical recreation–sometimes prickly, when hobbyists
claim expertise they don't have, sometimes cordial. The
professionals, on average, know much more than the amateurs–but
there are a lot more amateurs and some of them know quite a lot. And
the best of the amateurs have access not only to information but to
each other–and to any professional more interested in the ability
of the people he corresponds with than in their credentials.
Open Source Software
Amateur scholarship is one example of the way in which rising
incomes and improved communication technology make it easier to
produce things for fun. Another is open source software.
The standard example is Linux, a computer operating system. The
original version was created by a Finnish graduate student named
Linus Torvalds. Having done a first draft himself, he invited
everyone else in the world to help improve it. They accepted–with
the result that Linux is now a highly developed operating system,
widely used for a variety of different tasks. Another open source
project, the Apache web server, is the software on which a majority
of World Wide Web pages run.
Most commercial software is closed source. When you buy a copy of
Microsoft Word you get the object code, the version of the program
that the computer runs. With an open source program, you get the
source code–the human readable version that the original
programmer wrote and that other programmers need if they want to
modify the program. You can compile it into object code to run it,
but you can also revise it and then compile and run your revised
version.
The mechanics of open source are simple. Someone comes up with a
first version of the software. He publishes the source code. Other
people interested in the program modify it–which they are able
to do because they have the source code–and send their
modifications to him. Modifications that he accepts go into the code
base–the current standard version which other programmers will
start their modifications from. At the peak of Linux development,
Torvalds was updating the code base daily.
There are a number of advantages to the open source model. If lots of
programmers are familiar with the code because each is working on the
parts that interest him, then when someone reports a problem there is
likely to be someone else to whom the source of the problem is
obvious. As Eric Raymond's "Linus's Law" puts it, "given enough
eyeballs, all bugs are shallow." And with the source code open, bugs can be found and
improvements suggested by anyone interested.
As Eric Raymond, one of the leading spokesmen for the movement, has
pointed out, Open Source has its own set of norms and its own
implicit property rights. Nobody owns an open source program in the
usual sense–there is nobody whose permission you need if you
want to copy or modify it. Linux is free. But there is ownership in
two other and important senses.
Linus Torvalds owns Linux. Eric Raymond owns Fetchmail. A committee
owns Apache. That ownership has no legal support, since under an open
source license anyone is free to modify the code any way he likes
(provided that he makes the source code to his modified version
public, thus keeping it open source). But it is nonetheless real.
Programmers all want to work on the same code base so that each can
take advantage of improvements made by the others. Hence there is
considerable hostility in the community of open source programmers to
forking a project–developing two inconsistent versions. If
Torvalds rejects your improvements to Linux, you are still free to
use them–but don't expect any help. Everyone else will be
working on his version. Thus ownership of a project is a property
right enforced entirely by private action.
As Eric Raymond has pointed out, such ownership is controlled by
rules similar to the common law rules for owning land. Ownership of a
project goes to the person who creates it–homesteads that
particular programming opportunity by creating the first rough draft
of the program. If he loses interest, he can transfer ownership to
someone else. If he abandons the program, someone else can claim it–publicly
check to be sure nobody else is currently in charge of it and then
publicly take charge of it himself. The equivalent in property law is
adverse possession, the legal rule under which, if you openly treat
property as yours for long enough and nobody objects, it is
yours.
There is a second form of ownership in open source–credit. Each
project is accompanied by a file identifying the authors. Meddle with
that file–substitute in your name, thus claiming to be the
author of code someone else wrote–and your name in the open
source community is Mud. The same is true in the scholarly community.
From the standpoint of a professional scholar, copyright violation is
a peccadillo, theft someone else's problem–plagiarism the
ultimate sin.
One way of viewing the open source movement is as a variant on the
system of institutions under which most of modern science was
created. Programmers create software; scholars create ideas. Ideas,
like open source programs, can be used by anyone. The source code,
the evidence and arguments on which the ideas are based, is public
information. An article that starts out "the following theory is
true, but I won't tell you why" is unlikely to persuade many readers.
Scientific theories do not have owners in quite the sense that open
source projects do, but at any given time in most fields there is a
consensus as to what the orthodox body of theory is from which active
scholars work--must work if they want others to take them seriously.
Apache's owner is a committee; arguably neo-classical economics
belongs to a somewhat larger committee. Of course, a scholar can
always defy the orthodoxy and strike out on his own, and some do. But
then, if you don't like Linux, you are always free to start your own
open source operating system project.
Market and Hierarchy
One of the odd features of a market system is how socialist
it is. Firms interact with other firms and with customers through the
decentralized machinery of trade. But the firms themselves are
miniature socialist states, hierarchical organizations controlled, at
least in theory, by orders from above.
There is one crucial difference between Microsoft and Stalin's
Russia. Microsoft's interactions with the rest of us are voluntary.
It can get people to work for it or buy its products only by offering
them a deal they prefer to all alternatives. I do not have to use the
Windows operating system unless I want to, and in fact I don't and
don't. Stalin did not face that constraint.
One implication is that, however bad the public image of large
corporations may be, they exist because they serve human purposes.
Employees work for them because they find doing so a better life than
working for themselves; customers buy from them because they prefer
doing so to making things for themselves or buying from someone else.
The disadvantages associated with taking orders, working on other
people's projects, depending for your reward on someone else's
evaluation of your work, are balanced by advantages sufficient, for
many people, to outweigh them.
[75]
The balance depends on a variety of factors--one of which is the
collection of technologies associated with exchanging information,
arranging transactions, enforcing agreements, and the like. As those
technologies change, so does that balance. The easier it is for a
dispersed group of individuals to coordinate their activities, the
larger we would expect the role of decentralized coordination, market
rather than hierarchy, in the overall mix. This has implications for
how goods are likely to be produced in the future--Open Source is a
striking example. It also has implications elsewhere, for political
systems, social networks, and a wide range of other human
activities.
One example occurred some years ago in connection with one of my
hobbies, one at least nominally run by a non-profit corporation
itself controlled by a self-perpetuating board of directors. The
board responded to problems of growth by hiring a professional
executive director with no ties to the hobby. Acting apparently on
his advice, they announced, with no prior discussion, that they had
decided to double dues and to implement a highly controversial
proposal that had been previously dropped in response to an
overwhelmingly negative response by the membership.
If it had happened ten years earlier there would have been grumbling
but nothing more. The corporation, after all, controlled all of the
official channels of communication. When its publication, included in
the price of membership, commented on the changes, the comments were
distinctly one sided. Individual members, told by those in charge
that the changes, however disturbing, were necessary to the health of
the hobby, would for the most part have put up with them.
That is not what happened. The hobby in question had long had an
active Usenet news group. Its members included individuals with
professional qualifications in a wide range of relevant areas,
arguably superior to those of the board members, the executive
director, or the corporation's officers. Every time an argument was
raised in defense of the corporation's policies, it was answered--and
at least some of the answers were persuasive. Only a minority of
those involved in the hobby read the newsgroup, of course, but it was
a large enough number to get the relevant arguments widely dispersed.
And email provided an easy way for dispersed members unhappy with the
changes to communicate, coordinate, act. The corporation's board of
directors was self-perpetuating--membership in the organization did
not include a vote--but it was made up of volunteers, people active
in the hobby who were doing what they thought was right. They
discovered that quite a lot of others, including those they
respected, disagreed, and were prepared to support their disagreement
with facts and arguments. By the time the dust cleared, every member
of the board of directors that made the decision, save those whose
terms had ended during the controversy, had resigned; their
replacements reversed the most unpopular of the decisions. It seemed
to me a striking example of the way in which the existence of the
internet had shifted the balance between center and
periphery.
[76]
For a less exotic example, consider the recent announcement that Eli
Lilly had decided to subcontract part of its chemical research, not
to another firm but to the world at large. Lilly created a
subsidiary, InnoCentive LLC, to maintain a web page of chemistry
problems that Lilly wants solved--and the prices, up to $100,000,
that they are offering for the solutions. InnoCentive has invited
other companies to use their services to get their problems solved
too. So far, according to a story in the Wall Street Journal, they
have gotten "about 1,000 scientists from India, China and elsewhere
in the world" to work on their problems.
[77]
One problem InnoCentive raises is that the people who are solving
Lilly's problems may be doing so on someone else's payroll. Consider
a chemist hired to work in an area related to one of the problems on
the list. He has an obvious temptation to slant the work in the
direction of the $100,000 prize, even if the result is to slow the
achievement of his employer's objectives. A chemist paid by firm A
while working for firm B is likely to be caught--and fired--if he
does it in realspace. But if he combines a realspace job with
cyberspace moonlighting--still more if parts of the realspace job are
done by telecommuting from his home--the risks may be substantially
less. So one possibility, if Lilly's approach catches on, is a shift
from paying for time to paying for results, at least for some
categories of skilled labor. In the limiting case, employment
vanishes and everyone becomes a subcontractor, selling output rather
than time.
Information Warfare
So far we have been considering ways in which the
internet supports cooperation, productive activity, in a
decentralized form. It supports other things as well. A communication
system can also be used as a weapon, a way of misleading other
people, creating forged evidence, and in general accomplishing your
objectives at the expense of your opponents. Consider an academic
example.
The Story of the Four Little Pigs
The year is 1995, the place Cornell University. Four freshmen have
compiled a collection of misogynist jokes entitled "75 Reasons Why
Women (Bitches) Should Not Have Freedom of Speech" and sent copies to
their friends. One or more of the copies reaches someone who finds it
offensive and proceeds to distribute it to many other people who
share that view. The result is a firestorm of controversy, not only
at Cornell but in a variety of newspapers elsewhere. The central
question is whether creating such a list is, within a
university system, an offense that ought to be punished or a
protected exercise of free speech.
Eventually, Cornell announces its decision. The students have
violated no university rules, and so will be subject to no penalties.
They have, however, recognized the error of their ways:
"... in addition to the public letter of apology they wrote that was
printed by the Cornell Daily Sun on November 3, 1995, the students
have offered to do the following:
Each of them will attend the "Sex at 7:00" program sponsored by
Cornell Advocates for Rape Education (CARE) and the Health Education
Office at Gannett Health Center. This program deals with issues
related to date and acquaintance rape, as well as more general issues
such as gender roles, relationships and communication.
Each of them has committed to perform 50 hours of community service.
If possible, they will do the work at a non-profit agency whose
primary focus relates to sexual assault, rape crisis, or similar
issues. Recognizing that such agencies may be reluctant to have these
students work with them, the students will perform the community
service elsewhere if the first option is not available.
The students will meet with a group of senior Cornell administrators
to apologize in person and to express regret for their actions and
for the embarrassment and disruption caused to the University."
(public statement by Barbara L. Krause, Judicial Administrator)
There are two possible ways of interpreting that outcome. One is that
Ms Krause is telling the truth, the whole truth, and nothing but the
truth--Cornell imposed no penalty at all on the students; they
imposed an entirely voluntary penalty on themselves. It seems a bit
strange--but then, Cornell is a rather unusual university.
The alternative interpretation starts with the observation that
university administrators have a lot of ways of making life difficult
for students if they really want to. By publicly announcing that the
students had broken no rules and were subject to no penalty, while
privately making it clear to the students that if they planned to
remain at Cornell they would be well advised to "voluntarily"
penalize themselves, Cornell engaged in a successful act of
hypocrisy. They publicly maintained their commitment to free speech
while covertly punishing students for what they said.
At this point, someone who preferred the second interpretation
thought up a novel way of supporting it. An email went out during
Thanksgiving break to thousands of Cornell students, staff, and
faculty--21,132 of them according to its authors.
--------------------------
CONFIDENTIAL
I would like to extend my
heartfelt thanks to the many faculty members who advised me regarding
the unfortunate matter of the "75 Reasons" letter that was circulated
via electronic mail. Your recommendations for dealing with the
foul-mouthed "four little pigs" (as I think of them) who circulated
this filth was both apposite and prudent.
Now that we have had time to
evaluate the media response, I think we can congratulate ourselves on
a strategy that was not only successful in defusing the scandal, but
has actually enhanced the reputation of the university as a sanctuary
for those who believe that "free speech" is a relative term that must
be understood to imply acceptable limits of decency and
restraint--with quick and severe punishment for those who go beyond
those limits and disseminate socially unacceptable sexist
slurs.
I am especially pleased to
report that the perpetrators of this disgusting screed have been
suitably humiliated and silenced, without any outward indication that
they were in fact disciplined by us. Clearly, it is to our advantage
to place malefactors in a position where they must CENSOR THEMSELVES,
rather than allow the impression that we are censoring
them.
...
Yours
sincerely
Barbara L. Krause Judicial
Administrator
------------------
The letter was not, of course, actually written by Barbara Krause--as
anyone attentive enough to check the email address could have figured
out. It was written, and sent, by an anonymous group calling
themselves OFFAL--Online Freedom Fighters Anarchist Liberation. They
described it as "a bogus, satirical email Message ... a protest
against recent acts by the sanctimonious hypocrites at that "seat of
learning" who think it's more important to be politically correct
than free."
The letter was a satire, and an effective one, giving a believable
and unattractive picture of what its authors suspect Ms Krause's real
views were. It was also a fraud--some readers would never realize
that she was not the real author. In both forms it provided
propaganda for a particular view of the situation. But it did
something more than that.
Email is not only easily distributed, it is easily answered. Some
recipients not only believed the letter, they agreed with it, and
said so. Since OFFAL had used, not Ms Krause's email address, but an
email address that they controlled, those answers went back to them.
OFFAL produced a second email, containing the original forgery, an
explanation of what they were doing, and a suitable selection of the
responses.
I happen to support your actions and the
resolution of this incident, but put into the wrong hands, this memo
could perhaps be used against you.
---
Thank god you sent this memo--something
with a little anger and fire--something that speaks to the emotion
and not just the legalities. I hope you are right in stating that
what went on behind the scenes was truly humiliating for
"them".
---
I agree with what your memo states about
the "four little pigs" (students who embarrassed the entire Cornell
community), but I don't think I was one of the people really intended
for your confidential memo. ... Great Job in the handling of a most
sensitive issue.
---
The authors of the list have received
richly-deserved humiliation
Their summary:
We believe that ridicule is a more
powerful weapon than bombs or death threats. And we believe that the
Internet is the most powerful system ever invented for channeling
grass-roots protests and public opinion in the face of petty tyrants
who seek to impose their constipated values on everyday citizens who
merely want to enjoy their constitutionally protected
liberties.
It is hard not to have some sympathy for the perpetrators. They were
making a defensible point, although I am not certain a correct one,
and they were making it in an ingenious and effective way. But at the
same time they, like the purveyors of many other sorts of propaganda,
were combining a legitimate argument with a dishonest one--and it was
the latter that was the product of their ingenuity.
The correct point was that Cornell's actions could plausibly be
interpreted as hypocritical--attacking free speech while pretending
to support it. The dishonest argument was the implication that the
responses they received provided support for that interpretation. The
eight replies that OFFAL selected consisted of six supporting the
original email, one criticizing it, one neither. If it were a random
selection of responses, it would be impressive--but it wasn't. All it
shows is that about half a dozen people out of more than twenty
thousand supported the position, which is neither surprising nor
interesting.
What is interesting about the incident is the demonstration of a form
of information warfare made practical by the nature of the net--very
low transaction costs, anonymity, no face to face contact. Considered
as parody, it could have been done with old technology. As an act of
fraud, a way of getting people to reveal their true beliefs by
fooling them into thinking they were revealing them to someone who
shared them, it could have been done with old technology, although it
would have been a good deal more trouble. But as an act of mass
production fraud, a way of fooling thousands of people in order to
get a few of them to reveal their true beliefs, it depended on the
existence of email.
Some years ago on a Usenet group, I read the following message:
I believe that it is okay to have sex before marriage unlike some
people. This way you can expirence different types of sex and find
the right man or woman who satifies you in bed. I you wait until
marriage then what if your mate can not satisfy you, then you are
stuck with him. Please write me and give me your thoughts on this.
You can also tell me about some of your ways to excite a woman
because I have not yet found the right man to satisfy me.
It occurred to me that what I was observing might be a commercial
application of a similar tactic. The message is read by thousands,
perhaps tens of thousands, of men. A hundred or so take up the
implied offer and email responses. They get suitably enticing emails
in response--the same emails for all of them, with only the names
changed. They continue the correspondence. Eventually they receive a
request for fifty dollars--and a threat to pass on the correspondence
to the man's wife if the money is not paid. The ones who are not
married ignore it; some of the married ones pay. The responsible
party has obtained a thousand dollars or so at a cost very close to
zero. Mass production blackmail.
One of my students suggested a simpler explanation. The name and
email address attached to the message belonged not to the sender but
to someone the sender disliked. Whether or not he was correct, that
form of information warfare has been used elsewhere online. It is not
a new technique--the classical version is a phone number on a
bathroom wall. But the net greatly expands the audience.
A Sad Story
The following story is true; names and details have been
changed to protect the innocent.
SiliconTech is an institution of higher education where the students
regard Cornell, OFFAL and all, as barely one step above the stone
age. If they ever have a course in advanced computer intrusion--for
all I know they do--there will be no problem finding qualified
students.
Alpha, Beta and Gamma were graduate students at ST. All three came
from a third world country which, in the spirit of this exercise, I
will call Sparta. Alpha and Beta were a couple for most of a year, at
one point planning to get married. That ended when Beta told Alpha
that she no longer wanted to be his girlfriend. Over the following
months Alpha attempted, unsuccessfully, to get her back.
Eventually the two met at a social event held by the Spartan Student
Association; in the course of the event, Alpha learned that Beta was
now living with
Gamma. This resulted in a heated discussion among the three of them;
there were no outside witnesses and the participants later disagreed
about what was said. Alpha's version is that he threatened to tell
other members of the Spartan community at ST things that would damage
the reputation of Beta and her family. Sparta is a sexually
conservative and politically oppressive society, so it is at least
possible that spreading such information would have had serious
consequences. Beta and Gamma's version is that Alpha threatened to
buy a gun and have a duel with Gamma.
Later that evening, someone used Alpha's account on the computer used
for his research to log onto a university machine and from that
machine forge an obscene email to Beta that purported to come from
Gamma. During the process the same person made use of Alpha's account
on a university supercomputer. A day or so later, Beta and Gamma
complained about the forged email to the ST computer organization,
which traced it to Alpha's machine, disabled his account on their
machine, and left him a message. Alpha, believing (by his account)
that Beta and Gamma had done something to get him in trouble with the
university, sent an email to Gamma telling him that he would have to
help Beta with her research, since Alpha would no longer be
responsible for doing so.
The next day, a threatening email was sent from Alpha's account on
his research computer to Gamma. Beta and Gamma took the matter to the
ST authorities. According to their account, Alpha had:
1. Harassed Beta since they broke up, making her life miserable and
keeping her from doing her research.
2. Showed her a gun permit he had and told her he was buying a
gun.
3. Threatened to kill her.
4. Threatened to have a duel with Gamma.
They presented the authorities with copies of four emails--the three
described so far, plus an earlier one sent at the time of the
original breakup. According to Alpha, two of them were emails that he
had sent but that had been altered, two he had never seen before.
Two days later, Beta and Gamma went to the local police with the same
account plus an accusation that, back when Alpha and Beta were still
a couple, he had attempted to rape her. Alpha was arrested on charges
of felony harassment and terrorism, with bail set at more than a
hundred thousand dollars. He spent the next five and a half months in
jail under fairly unpleasant circumstances. The trial took two weeks;
the jury then took three hours to find Alpha innocent of all charges.
He was released. ST proceeded to have its own trial of Alpha on
charges of sexual harassment. They found him guilty and expelled
him.
When I first became interested in the case--because it involved
issues of identity and email evidence in a population technologically
a decade or so ahead of the rest of the world--I got in touch with
the ST attorney involved. According to her account, the situation was
clear. Computer evidence proved that the obscene and threatening
emails had ultimately originated on Alpha's account, to which only he
had the password, having changed it after the breakup. While the jury
may have acquitted him on the grounds that he did not actually have a
gun, Alpha was clearly guilty of offenses against (at least) ST
rules.
I then succeeded in reaching both Alpha's attorney and a faculty
member sympathetic to Alpha who had been involved in the controversy,
from whom I learned a few facts that the ST attorney had omitted.
1. All of Alpha's accounts used the same password. Prior to the
breakup with Beta, the password had been "Beta." Afterwards, it was
Alpha's mother's maiden name.
2. According to the other graduate students who worked with Alpha,
and contrary to Beta's sworn testimony, the two had remained friends
after the breakup and Alpha had continued to help Beta do her
research--on his computer account. Hence it is almost certain that
Beta knew the new password. Hence she, or Gamma, or Gamma's older
brother (also a student at ST) could have accessed the accounts and
done all of the things that Alpha was accused of doing.
3. The "attempted rape" was supposed to have happened early in their
relationship. According to Beta's own testimony at trial, she
subsequently took a trip alone with him during which they shared a
bed. According to other witnesses, they routinely spent weekends
together for some months after the purported attempt.
4. In the course of the trial there was evidence that many of the
statements made by Beta and Gamma were false. In particular, Beta
claimed never to have been in Alpha's office during the two months
after the breakup (relevant because of the password issue); the other
occupants of the office testified that she had been there repeatedly.
Beta claimed to have been shown Alpha's gun permit; the police
testified that he did not have one.
5. One of the emails supposedly forged by Alpha had been created at a
time when he not only had an alibi--he was in a meeting with two
faculty members--but had an alibi he could not have anticipated
having, hence could not have prepared for by somehow programming the
computer to do things when he was not present.
6. The ST hearing was conducted by a faculty member who had told
various other people that Alpha was guilty and ST should get rid of
him before he did something that they might be liable for. Under
existing school policy, the defendant was entitled to veto suggested
members of the committee. Alpha attempted to veto the chairman and
was ignored. According to my informant, the hearing was heavily
biased, with restrictions by the committee on the introduction of
evidence and arguments favorable to Alpha.
7. During the time Alpha was in jail awaiting trial, his friends
tried to get bail lowered. Beta and Gamma energetically and
successfully opposed the attempt, tried to pressure other members of
the Spartan community at ST not to testify in Alpha's favor, and even
put together a booklet containing not only material about Alpha but
stories from online sources about Spartan students killing lovers or
professors.
Two different accounts of what actually happened are consistent with
the evidence. One, the account pushed by Beta and Gamma and accepted
by ST, makes Alpha the guilty party and explains the evidence that
Beta and Gamma were lying about some of the details as a combination
of exaggeration, innocent error and perjury by witnesses friendly to
Alpha. The other, the account accepted by at least some of Alpha's
supporters, makes Beta and Gamma the guilty parties and ST at the
least culpably negligent. On that version, Beta and Gamma conspired
to frame Alpha for offenses he had not committed, presumably as a
preemptive strike against his threat to release true but damaging
information about Beta--once he was in jail, who would believe him?
They succeeded to the extent of getting him locked up for five and a
half months, beaten in jail by fellow prisoners, costing him and his
friends some twenty thousand dollars in legal expenses--and
ultimately getting him expelled.
I favor the second account, in part because I think it is clear that
the ST attorney I originally spoke with was deliberately trying to
mislead me--concealing facts that not only were relevant, but
directly contradicted the arguments she was offering. I resent being
lied to. On the other hand, attorneys, even attorneys for academic
institutions, are hired to serve the interest of their clients, not
to reveal truth to curious academics, so even if she believed Alpha
was guilty she might have preferred to conceal the evidence that he
was not. For my present purposes what is interesting is not the
question of which side was guilty but the fact that either side could
have been, and the problems that fact raises for the world that they
were, and we will be, living in.
Lessons
"Women have simple tastes. They can take pleasure in the conversation
of babes in arms and men in love." (H.L. Mencken, In Defense of
Women)
Online communication--in this case email--normally
carries identification that, unlike one's face, can readily be
forged. The Cornell case demonstrated one way in which that fact
could be used--to extract unguarded statements from somebody by
masquerading as someone they have reason to trust. This case, on one
interpretation, demonstrates another--to injure someone by persuading
third parties that he said things he in fact did not say.
The obvious solution is some way of knowing who sent what message. At
the simplest level, the headers of an email are supposed to do that.
As these cases both demonstrate, that does not work very well. On the
simplest interpretation of the events at ST, Alpha used a procedure
known to practically everyone in that precocious community to send a
message to Beta that purported to come from Gamma. On the alternative
interpretation, Beta or Gamma masqueraded as Alpha (accessing his
account with his password) in order to send a message to Beta that
purported to come from Gamma--and thus get Alpha blamed for doing
so.
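The weakness these cases exploit is easy to demonstrate: the headers of an email are ordinary text chosen by whoever composes the message, and nothing in the message format itself verifies them. A minimal sketch in Python, with all addresses invented:

```python
from email.message import EmailMessage

# An email's From header is plain text supplied by the composer;
# the message format itself does nothing to check that it is honest.
msg = EmailMessage()
msg["From"] = "Gamma <gamma@st.example>"   # invented address; any string works
msg["To"] = "beta@st.example"
msg["Subject"] = "hello"
msg.set_content("This message claims, falsely, to come from Gamma.")

# The forged identity travels with the message exactly as written.
print(msg["From"])
```

Nothing at this layer stops the forgery; catching it is left to the further layers of protection, passwords and beyond them digital signatures.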
ST provided a second level of protection--passwords. The passwords
were chosen by the user, hence in many cases easy to guess--users
tend to select passwords that they can remember. And even if they had
been hard to guess, one user can always tell another his password.
However elaborate the security protecting Alpha's control over his
own identification--up to and including the use of digital
signatures--it could not protect him against betrayal by himself.
Alpha was in love with Beta, and men in love are notoriously
foolish.
Or perhaps it could. One possible solution is the use of
biometrics--identification linked to physical characteristics such as
fingerprints or retinal patterns. If ST had been twenty years ahead
of the rest of us instead of only ten, they might have equipped their
computers with scanners that checked the users' fingers and retinas
before letting them sign on and kept records linking what was done
with the computer to who did it. Even a man in love is unlikely to
give away his retinas. With that system, we would know which party
was guilty.
Provided, of course, that none of the students at ST--the cream of
the world's technologically precocious young minds--figured out how
to trick the biometric scanners or hack the software controlling
them.
Even if the system works, it has some obvious disadvantages. In order
to prevent someone from editing a real email he has received and then
presenting the edited version as the original--what Alpha claims that
Beta and Gamma did--the system must keep records of all email that
passes through it. Many users may find that objectionable on the
grounds of privacy. And the requirement of biometric identification
eliminates not only forged identity but open anonymity--which
arguably could have a chilling effect on free speech.
So far I have implicitly assumed a single computer network with a
single owner. That was the situation at ST but it is not the
situation for the Internet. With a decentralized system under the
control of many individual parties, creating a system of unforgeable
identity becomes an even harder challenge. It can be done via digital
signatures--but only if the potential victims are willing to take the
necessary precautions to keep other people from getting access to
their private keys. Biometric identification, even if it becomes
entirely reliable, is still vulnerable to the user who inserts his
own hardware or software between the scanner and the computer of his
own system, and uses it to lie to the computer about what the scanner
saw.
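The idea behind a digital signature can be sketched with textbook-sized numbers. The code below is a toy for illustration only, not usable cryptography: real keys are hundreds of digits long, and what gets signed is a cryptographic hash of the message rather than a small number.

```python
# Toy RSA-style signature using the standard textbook primes
# (p=61, q=53); nothing here is secure, only illustrative.
p, q = 61, 53
n = p * q          # 3233: the public modulus
e = 17             # public exponent, published along with n
d = 2753           # private exponent, kept secret by the signer

def sign(m):
    # Only the holder of the private exponent d can compute this.
    return pow(m, d, n)

def verify(m, sig):
    # Anyone holding the public pair (e, n) can check the signature.
    return pow(sig, e, n) == m

message = 1234               # a numerically encoded message, m < n
sig = sign(message)
print(verify(message, sig))  # True: genuine signature
print(verify(1235, sig))     # False: an altered message fails the check
```

The scheme helps only so long as the private exponent stays private, which is exactly the weak point in the ST story: the strongest signature in the world is worthless once the key, like Alpha's password, has been shared.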
XI: Intermission: What's a Meta Phor?
I am typing these words into a metaphorical document in a
metaphorical window on a metaphorical desktop; the document is
contained in a metaphorical file folder represented by a miniature
picture of a real file folder. I know the desktop is metaphorical
because it is vertical; if it were a real desktop, everything would
slide to the bottom.
All this is familiar to anyone whose computer employs a graphical
user interface. We use that collection of layered metaphors for the
same reason we call unauthorized access to a computer a break-in and
machine language programs, unreadable by the human eye, writings. The
metaphor lets us transport a bundle of concepts from one thing, about
which that bundle first collected, to something else to which we
think most of the bundle is appropriate. Metaphors reduce the
difficulty of learning to think about new things. Well chosen
metaphors do it at a minimal cost in wrong conclusions.
Consider the metaphor that underlies modern biology: evolution as
intent. Evolution is not a person and does not have a purpose. Your
genes are not people either and also do not have purposes. Yet the
logic of Darwinian evolution implies that each organism tends to have
those characteristics that it would have if it had been designed for
reproductive success. Evolution produces the result we would get if
each gene had a purpose–increasing its frequency in future
generations–and acted to achieve that purpose by controlling
the characteristics of the bodies it built.
Everything stated about evolution in the language of purpose can be
restated in terms of variation and selection–Darwin's original
argument. But since we have dealt with purposive beings for much
longer than we have dealt with the logic of Darwinian evolution, the
restated version is further from our intuitions; putting the analysis
that way makes it harder to understand, clumsier. That is why
biologists
[78]
routinely speak in the language of purpose, as when Dawkins titled
his brilliant exposition of evolutionary biology "The Selfish
Gene."
For a final example, consider computer programming. When you write
your first program, the approach seems obvious: Give the computer a
complete set of instructions telling it what to do. By the time you
have gotten much beyond telling the computer to type "Hello World,"
you begin to realize that a complete set of instructions for a
complicated set of alternatives is a bigger and more intricate web
than you can hold in your mind at one time.
People who design computer languages deal with that problem through
metaphors. Currently the most popular are the metaphors of object
oriented languages such as Java and C++. A programmer builds classes
of objects. None of these objects are physical things in the real
world; each exists only as a metaphorical description of a chunk of
code. Yet the metaphor–independent objects, each owning control
over its own internal information, interacting by sending and
receiving messages–turns out to be an extraordinarily powerful
tool for writing and maintaining programs, programs more complicated
than even a very talented programmer could keep track of if he tried
to conceptualize each as a single interacting set of commands.
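The metaphor is easy to see in a small example. The class below is invented for illustration, not drawn from any real program; the same pattern could be written in Java or C++:

```python
class Account:
    """A metaphorical object: it owns its internal state, and other
    code interacts with it only by 'sending messages', that is, by
    calling its methods."""

    def __init__(self, owner, balance=0):
        self.owner = owner
        self._balance = balance   # internal state, private by convention

    def deposit(self, amount):
        self._balance += amount

    def withdraw(self, amount):
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

    def balance(self):
        return self._balance

a = Account("Alpha", 100)
a.deposit(50)       # a 'message' asking the object to change its state
a.withdraw(30)
print(a.balance())  # 120: only the object itself touched the balance
```

No code outside the class manipulates the balance directly; every change goes through the object's own methods. That discipline is what lets a programmer reason about each object separately instead of holding the whole web of instructions in mind at once.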
Metaphorical Crimes
From time to time I read a news story about an intruder
breaking into a computer, searching through the contents, and leaving
with some of them. Looking at the computer sitting on my desk, it is
obvious that such intrusion is impractical for anything much bigger
than a small cat. There isn't room. And if my cat wants to get into
my computer, it doesn't have to break anything–just hook its
claws into the plastic loop on the side (current Macs are designed to
be easily upgradeable) and pull.
"Computer break-in" is a metaphor. So are the fingerprints and
watermarks of the previous chapter. Computer programmers have fingers
and occasionally leave fingerprints on the floppy disks or CD's that
contain their work, but copying the program does not copy the
prints.
New technologies make it possible to do things that were not
possible, sometimes not imagined, fifty years ago. Metaphors are a
way of fitting those things into our existing pattern of ideas,
instantiated in laws, norms, language. We already know how to think
about people breaking into other people's houses and what to do about
it. By analogizing unauthorized access to a computer to breaking into
a house we fit it into our existing system of laws and norms.
The choice of metaphor matters. What actually happens when someone
"breaks into" a computer over the internet is that he sends the
computer messages, the computer responds to those messages, and
something happens that the owner of the computer does not want to
happen. Perhaps the computer sends him what was supposed to be
confidential information. Perhaps it erases its hard disk. Perhaps it
becomes one out of thousands of unwitting accessories to a denial of
service attack, sending thousands of requests to read someone else's
web page–with the result that the overloaded server cannot deal
with them all and the page temporarily vanishes from the web.
The computer is doing what the cracker wants instead of what its
owner wants. One can imagine the cracker as an intruder, a virtual
person traveling through the net, making his way to the inside of the
computer, reading information, deleting information, giving commands.
That is how we are thinking of it when we call the event a
break-in.
To see how arbitrary the choice of metaphor is, consider a lower tech
equivalent. I want to serve legal papers on you. In order to do so,
my process servers have to find you. I call you up. If you do not
answer, I tell the servers to look somewhere else. If you do answer,
I hang up and send them in.
Nobody is likely to call what I have just described a break-in. Yet
it fits almost precisely the earlier description. Your telephone is a
machine which you have bought and connected to the phone network for
a purpose. I am using your machine without your permission for a
different purpose, one you disapprove of–finding out whether
you are home, something you do not want me to know. With only a
little effort, you can imagine a virtual me running down the phone
line, breaking into your phone, peeking out to see if you are in, and
reporting back. An early definition of cyberspace was "where a
telephone conversation happens."
We now have two metaphors for unauthorized access to a computer–housebreaking
and an unwanted phone call. They have very different legal and moral
implications.
Consider a third–what crackers refer to as "human
engineering,
[79]"
tricking people into giving you the secret information needed to
access a computer. It might take the form of a phone call to a
secretary from a company executive outside the office who needs
immediate access to the company's computer. The secretary, whose job
includes helping out company executives with their problems, responds
with the required passwords. She may not be sure she recognizes the
caller's name–but does she really want to expose her ignorance
of the names of the top people in the firm she works for?
Human engineering is both a means and a metaphor for unauthorized
access. What the cracker is going to do to the computer is what he
has just done to the secretary–call it up, pretend to be
someone authorized to get the information it holds, and trick it into
giving that information. If we analogize a computer not to a house or
a phone but to a person, unauthorized access is not housebreaking but
fraud–against the computer.
We now have three quite different ways of fitting the same act into
our laws, language and moral intuitions–as housebreaking,
fraud, or an unwanted phone call. The first is criminal, the second
often tortious, the third legally innocuous. In the early computer
crime cases
[80]
courts were uncertain what the appropriate metaphor was.
Much the same problem arose in the early computer copyright cases.
Courts were uncertain whether a machine language program burned into
the ROMs of a computer was properly analogized to a writing
(protectable), a fancy cam (unprotectable, at least by copyright), or
(the closest actual analogy for which they had a ruling by a previous
court) the paper tape controlling a player piano.
[81]
In both cases, the legal uncertainty was ended by legislatures–Congress
when it revised the copyright act to explicitly include computer
programs, state legislatures when they passed computer crime laws
that made unauthorized intrusion a felony. The copyright decision was
correct, as applied to literal copying, for reasons I have discussed
at some length elsewhere.
[82]
The verdict on the intrusion case is less clear.
Choosing a
Metaphor
We now have three different metaphors for fitting
unauthorized use of a computer over a network–telephone system
or internet–into our legal system. One suggests that it should
be a felony, one a tort, one a legal if annoying act. To choose among
them, we consider how the law will treat the acts in each case and
why one treatment or the other might be preferable.
The first step is to briefly sketch the difference between a crime
and a tort.
A crime is a wrong treated by the legal system as an offense against
the state. A criminal case has the form "The State of California v. D.
Friedman." So far as the law is concerned, the victim is the state of
California–the person whose computer was broken into is merely
a witness. Whether to prosecute, whether to settle (an out of court
settlement in a criminal case is called a plea bargain) and how to
prosecute are decided by employees of the state of California. The
cost of prosecution is paid by the state and the fine, if any, paid
to the state. The punishment has no direct connection to the damage
done by the wrong, since the offense is not "causing a certain amount
of damage" but "breaking the law."
A tort is a wrong treated by the legal system as an offense against
the victim; a civil case has the form "E. Cook v. D. Friedman." The
victim decides whether to sue, hires and pays for the attorney,
controls the decision of whether to settle out of court and collects
the damages awarded by the court. In most cases, the damage payment
awarded is supposed to equal the amount of damage done to the victim
by the wrong–enough to "make whole" the victim.
An extensive discussion of why and whether it makes sense to have
both kinds of law and why it makes sense to treat some kinds of
offenses as torts and some as crime is matter for another book;
interested readers can find it in Chapter ??? of my Law's
Order. For our present purposes it will be sufficient to note
some of the advantages and disadvantages of the alternatives in the
course of discussing which might be more applicable to unauthorized
computer access.
As a general rule, criminal conviction requires intent–although
the definition of intent is occasionally stretched pretty far. On the
face of it, unauthorized access clearly meets that requirement.
Or perhaps not. Consider three stories–two of them true.
The
Boundaries of Intent
The year is 1975. The computer is an expensive multi-user
machine located in a dedicated facility. An employee asks it for a
list of everyone currently using it. One of the sets of initials he
gets belongs to his supervisor–who is standing next to him,
obviously not using the computer.
[83]
The computer was privately owned but used by the Federal Energy
Administration, so they called in the FBI. The FBI succeeded in
tracing the access to Bertram Seidlitz, who had left six months
earlier after helping to set up the computer's security system. When
they searched his office, they found forty rolls of computer printout
paper containing source code for WYLBUR, a text editing program.
The case raised a number of questions about how existing law fit the
new technology. Did secretly recording the "conversation" between
Seidlitz and the computer violate the law requiring that recordings
of phone conversations be made only with the consent of one of the
parties (or a wire tapping authorization from a court, which they did
not have)? Was the other party the computer and, if so, could it consent?
Did using someone else's code to access a computer count as obtaining
property by means of false or fraudulent pretenses, representations,
or promises–the language of the statute? Could you commit fraud
against a machine? Was downloading trade secret information, which
WYLBUR was, a taking of property? The court found that it could, you
could and it was; Seidlitz was convicted.
One further question remains: was he guilty? Clearly he used someone
else's access codes to download and print out the source code to a
computer program. The question is why.
Seidlitz's answer was quite simple. He believed the security system
for the computer was seriously inadequate. He was demonstrating that
fact by accessing the computer without authorization, downloading
stuff from inside the computer, and printing it out. When he was
finished, he planned to send all forty rolls of source code to the
people now in charge of the computer as a demonstration of how weak
their defenses were. One may suspect–although he did not say–that
he also planned to send them a proposal to redo the security system
for them. If he was telling the truth, his access, although
unauthorized, was not in violation of the law he was convicted under–or
any then existing law that I can think of.
The strongest evidence in favor of his story was forty rolls of
computer output. In order to make use of source code, you have to
compile it–which means that you first have to get it in a form
readable by a computer. In 1975, optical character recognition, the
technology by which a computer turns a picture of a printed page back
into text, was not yet practically available; even today it is not entirely reliable.
If Seidlitz was planning to sell the source code to someone who would
actually use it, he was also planning at some point to have someone
type all forty rolls back into a computer–making no mistakes,
since each mistake would introduce a potential bug into the program.
It would have been far easier, instead of printing the source code,
to download it to a tape cassette or floppy disk. Floppy disks
capable of being written to had come out in 1973, with a capacity of
about 250K; a single 8" floppy could store about a hundred pages
worth of text. Forty rolls of printout would be harder to produce and
a lot less useful than a few floppy disks. On the other hand, the
printout would provide a more striking demonstration of the weakness
of the computer's security, especially for executives who didn't know
very much about computers.
One problem with using law to deal with problems raised by a new
technology is that the legal system may not be up to the job. It is
likely enough that the judge in
U.S. v. Seidlitz (1978) had
never actually touched a computer and more likely still that he had
little idea what source code was or how it was used.
Seidlitz had clearly done something wrong. But deciding whether it
was a prank or a felony required some understanding of both the
technology and the surrounding culture and customs–which a
random judge was unlikely to have. In another unauthorized access
case,
[84] decided a
year earlier, the state of Virginia had charged a graduate student at
Virginia Polytechnic Institute with fraudulently stealing more than
five thousand dollars. His crime was accessing a computer that he was
supposed to access in order to do the work he was there to do–using
other students' passwords and keys, because nobody had gotten around
to allocating computer time to him and he was embarrassed to ask for
it. He was convicted and sentenced to two years in the State
penitentiary. The sentence was suspended, he appealed, and on appeal
was acquitted–on the grounds that what he had stolen were
services, not property. Only property counted for purposes of the
statute, and the scrap value of the computer cards and printouts was
less than the required hundred dollars. While charges of grand
larceny were still pending against him, VPI gave him his degree,
demonstrating what they thought of the seriousness of his
offense.
When I tell my students the sad case of Bertram Seidlitz, I like to
illustrate the point with another story, involving more familiar
access technologies. This time I am the hero, or perhaps villain.
The scene is the front door of the University of Chicago Law School.
I am standing there because, during a visit to Chicago, it occurred
to me that I needed to check something in an article in the Journal
of Legal Studies before emailing off the final draft of a paper of my own.
The University of Chicago Law School not only carries the JLS, it
produces the JLS; the library is sure to have the relevant volume.
While checking the article, perhaps I can drop in on some of my
ex-colleagues and see how they are doing.
Unfortunately, it is a Sunday during Christmas break; nobody is in
sight inside and the door is locked. The solution is in my pocket.
When I left the Law School last year to take up my present position
in California I forgot to give back my keys. I take out my key ring,
find the relevant key, and open the front door of the law school.
In the library another problem arises. The volume I want is missing
from the shelf, presumably because someone else is using it. It
occurs to me that one of the friends I was hoping to see is both a
leading scholar in the field and the editor of the JLS. He will
almost certainly have his own set in his office–as I have in my
office in California.
I knock on his door; no answer. The door is locked. But at the
University of Chicago Law School–a very friendly place–the
same key opens all faculty offices. Mine is in my pocket. I open his
door, go in, and there is the Journal of Legal Studies on his office
shelf. I take it down, check the article, and go.
The next day, on the plane home, I open my backpack and discover
that, as usual, I was running on autopilot; instead of putting the
volume back on the shelf I took it with me. When I get home, I mail
the volume back to my friend with an apologetic note of
explanation.
Let us now translate this story into a more objective account and see
where I stand, legally speaking:
Using keys I had no legal right to possess I entered a locked
building I had no legal right to enter, went into a locked room I had
no legal right to enter and left with an item of someone else's
property that I had no authorization to take. Luckily for me, the
value of one volume of the Journal of Legal Studies is considerably
less than $5000, so although I may be guilty of burglary under
Illinois law I am not covered by the federal law against interstate
transportation of stolen property. Aside from the fact that the
Federal government has no special interest in the University of
Chicago Law School,
[85]
the facts of my crime were nearly identical to the facts of
Seidlitz's. Mine was just the low tech version.
As it happens, the above story is almost entirely fiction–inspired
by the fact that I really did forget to give back my keys until a
year or so after I left Chicago, so could have gotten into both the
building and a faculty office if I had wanted to. But even if it were
true, I would have been at no serious risk of being arrested and
jailed. Everyone involved in my putative prosecution would have
understood the relevant facts–that not giving keys back is the
sort of thing absent minded academics do, that using those keys in
the same way you have been using them for most of the past eight
years, even if technically illegal, is perfectly normal and requires
no criminal intent, that looking at a colleague's copy of a journal
without his permission when he isn't there to give it is also
perfectly normal, and that absent minded people sometimes walk off
with things instead of putting them back where they belong. Seidlitz–assuming
he really was innocent–was not so lucky.
My third story, like my first, is true.
[86]
The scene this time is a building in Oregon, belonging to Intel. The
year is 1993. The speaker is an Intel employee named Mark
Morrissey.
On Thursday, October 28, at 12:30 in the afternoon, I noticed an
unusual process running on a Sun computer which I administer. Further
checking convinced me that this was a program designed to break, or
crack, passwords. I was able to determine that the user "merlyn" was
running the program. The username "merlyn" is assigned to Randal
Schwartz, an independent contractor. The password cracking program
had been running since October 21st. I investigated the directory
from which the program was running and found the program to be Crack
4.1, a powerful password cracking program. There were many files
located there, including passwd.ssd and passwd.ora. Based on my
knowledge of the user, I guessed that these were password files for
the Intel SSD organization and also an external company called
O'Reilly and Associates. I then contacted Rich Cower in Intel
security.
Intel security called in the local police. Randy Schwartz was
interrogated at length; the police had a tape recorder but did not
use it. Their later account of what he said was surprisingly
detailed, given that it dealt with subjects the interrogating
officers knew little about, and strikingly different from his account
of what he said. The main facts, however, are reasonably clear.
Randy Schwartz was a well known computer professional, the author of
two books on Perl, a language widely used in building things on the Web.
had a reputation as the sort of person who preferred apologizing
afterwards to asking permission in advance. One reason Morrissey was
checking the computer Thursday afternoon was to make sure Schwartz
wasn't running any jobs on it that might interfere with its intended
function. As he put it in his statement, "Randal has a habit of using
as much CPU power as he can find."
Schwartz worked for Intel as an independent contractor running parts
of their computer system. He accessed the system from his home using
a gateway through the Intel firewall located on an Intel machine, a
gateway that he had created on instructions from Intel for the use of
a group working off site but retained for his own use. In response to
orders from Intel he had first tightened its security and later shut
it down completely–then quietly recreated it on a different
machine and continued to use it.
How to
Break Into Computers
The computer system at Intel, like many others, used
passwords to control access. This raises an obvious design problem.
In order for the computer to know if you typed in the right password,
it needs a list of passwords to check yours against. But if there is
a list of passwords somewhere in the computer's memory, anyone who
can get access to that memory can find the list.
You solve this problem by creating a public key/private key pair and
throwing away the private key.
[87]
Every time a new password is created, encrypt it using the public key
and add it to the computer's list of encrypted passwords. When a user
types in a password, encrypt that with the public key and see if what
you get matches one of the encrypted passwords on the list. Someone
with access to the computer's memory can copy the list of encrypted
passwords, can copy the public key they are encrypted with, but
cannot copy the matching private key because it is not there. Without
the private key, he cannot get from the encrypted version of the
password in the computer's memory to the original password that he
has to type to get the desired level of access to (and control over)
the computer.
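The scheme can be sketched in a few lines of Python. In practice the one-way step is done with a hash function rather than literal public key encryption, but it plays exactly the role the text describes: easy to compute in one direction, infeasible to reverse. The usernames and passwords here are invented for illustration:

```python
import hashlib

def one_way(password):
    # A one-way function: like encrypting with a public key whose
    # matching private key has been thrown away.  (Real systems also
    # mix in a per-user "salt" before hashing.)
    return hashlib.sha256(password.encode()).hexdigest()

stored = {}  # username -> transformed password; originals are never kept

def set_password(user, password):
    stored[user] = one_way(password)

def check_password(user, attempt):
    # Transform the attempt and compare.  The original password never
    # has to exist anywhere in the computer's memory.
    return stored.get(user) == one_way(attempt)

set_password("merlyn", "PRE$IDENT")
print(check_password("merlyn", "PRE$IDENT"))  # True
print(check_password("merlyn", "president"))  # False
```

Someone who copies `stored` gets only the transformed versions, which is why simply reading the computer's memory does not, by itself, yield any passwords.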
A program like Crack solves that problem by guessing passwords,
encrypting each with the public key, and comparing the result to the
list of encrypted passwords. If it had to guess at random, the
process would take a very long time. But despite the instructions of
the computer technicians running the system, people who create
passwords insist on using their wife's name, or their date of birth,
or something else easier to remember than V7g9H47ax. It does not take
all that long for a computer program to run through a dictionary of
first names and every date in the past seventy years, encrypt each,
and check it against the list. One of the passwords Randy Schwartz
cracked belonged to an Intel vice president. It was the word
PRE$IDENT.
[Note to readers: Was this explanation
unnecessary? Unclear?]
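What a program like Crack does can be sketched the same way, under the same assumed one-way transformation. It never tries to reverse the transformation; it just runs through a dictionary of likely guesses, transforms each one, and looks for matches in the stolen list. The usernames, passwords, and dictionary below are invented for illustration:

```python
import hashlib

def one_way(password):
    # The same one-way transformation the computer itself uses.
    return hashlib.sha256(password.encode()).hexdigest()

# The stolen file holds only transformed passwords...
stolen_list = {"vp": one_way("PRE$IDENT"), "admin": one_way("V7g9H47ax")}

# ...but people pick guessable passwords, so a short dictionary of
# names, dates, and obvious variants goes a long way.
dictionary = ["alice", "bob", "president", "PRE$IDENT", "12/25/1950"]

def crack(stolen, guesses):
    found = {}
    for guess in guesses:
        digest = one_way(guess)
        for user, transformed in stolen.items():
            if digest == transformed:
                found[user] = guess  # a weak password, recovered
    return found

print(crack(stolen_list, dictionary))  # finds the VP's password, not admin's
```

The vice president's guessable password falls immediately; the random-looking one survives, since it appears in no dictionary a cracking program is likely to try.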
Randy Schwartz's defense was the same as Bertram Seidlitz's. He was
responsible for parts of Intel's computer system. He suspected that
its security was inadequate. The obvious way to test that suspicion
was to see whether he could break into it. Breaking down doors is not
the usual way of testing locks, but breaking into a computer does
not, by itself, do any damage. By correctly guessing one password,
using that to get at a file of encrypted passwords, and using Crack
to guess a considerable number of them, Randy Schwartz demonstrated
the vulnerability of Intel's computer system. I suspect, knowing
computer people although not that particular computer person, that he
was also entertaining himself and demonstrating how much smarter he
was than the people who set up the system he was cracking–including
one particularly careless Intel vice-president. He was simultaneously
(but less successfully) running Crack against a password file from a
computer belonging to O'Reilly and Associates, the company that
publishes his books.
Since Intel's computer system contains a lot of valuable intellectual
property protected (or not) by passwords, demonstrating its
vulnerability might be seen as a valuable service. Intel did not see
it that way. They actively aided the state of Oregon in prosecuting
Randy Schwartz for violating Oregon's computer crime law. He ended up
convicted of two felonies and a misdemeanor–unauthorized access
to, alteration of, and copying information from, a computer
system.
Two facts lead me to suspect that Randy Schwartz may have been the
victim, not the criminal. The first is that Intel produced no
evidence that he had stolen any information from them other than the
passwords themselves. The other is that, when Crack was detected
running, it was being run by "merlyn"–Randy Schwartz's username
at Intel. The Crack program was in a directory named "merlyn." So
were the files for the gate through which the program was being run.
I find it hard to believe that a highly skilled computer network
professional attempting to steal valuable intellectual property from
one of the world's richest high tech firms would do it under his own
name. If I correctly interpret the evidence, what actually happened
was that Intel used Oregon's computer law to enforce its internal
regulations against a subcontractor in the habit of breaking them.
Terminating the offender's contract is a more conventional, and more
reasonable, response.
In fairness to Intel, I should add that almost all my information
about the case comes from an extensive web page set up by supporters
of Randy Schwartz–extensive enough to include the full
transcript of the trial. Neither Intel nor its supporters have been
willing to web a reply. I have, however, corresponded with a friend
who works for Intel and is in a position to know something about the
case. My friend believed that Schwartz was guilty but was unwilling
to offer any evidence.
Perhaps he was guilty; Intel might have reasons for keeping quiet
other than a bad conscience. Perhaps Seidlitz was guilty. It is hard,
looking back at a case with very imperfect information, to be sure my
verdict on its verdict is correct. But I think both cases, along with
my own fictitious burglary, show problems in applying criminal law to
something as ambiguously criminal as unauthorized access to a
computer, hence provide at least a limited argument for rejecting the
break-in metaphor in favor of one of the alternatives.
Is Copying
Stealing: The Bell South Case
One problem with trying to squeeze unauthorized access into
existing criminal law is that intent may be ambiguous. Another is
that it does not fit very well. The problem is illustrated by U.S.
v. Robert Riggs, entertainingly chronicled in The Hacker
Crackdown, Bruce Sterling's account of an early and badly bungled
campaign against computer crime.
The story starts in 1988 when Craig Neidorff, a college student,
succeeded in accessing a computer belonging to Bell South and
downloading a document about the 911 system. He had no use for the
information in the document, which dealt with bureaucratic
organization–who was responsible for what–not technology.
But written at the top was "WARNING: NOT FOR USE OR DISCLOSURE
OUTSIDE BELLSOUTH OR ANY OF ITS SUBSIDIARIES EXCEPT UNDER WRITTEN
AGREEMENT," which made getting it an accomplishment and the document
a trophy. He accordingly sent a copy to Robert Riggs, who edited a
virtual magazine–distributed from one computer to another–called
Phrack. Riggs cut out about half of the document and included what
was left in Phrack.
Eventually someone at Bell South discovered that their secret
document was circulating in the computer underground–and
ignored it. Somewhat later, federal law enforcement agents involved
in a large scale crackdown on computer crime descended on Riggs. He
and Neidorff were charged with interstate transportation of stolen
property valued at more than five thousand dollars–a Federal
offense. Neidorff accepted a plea bargain and went to jail; Riggs
refused and went to trial.
Bell South asserted that the twelve page document had cost them
$79,449 to produce–well over the $5,000 required for the
offense. It eventually turned out that they had calculated that
number by adding to the actual production costs–mostly the
wages of the employees who created the document–the full value
of the computer it was written on, the printer it was printed on, and
the computer's software. The figure was accepted by the federal
prosecutors without question. Under defense questioning, it was
scaled back to a mere $24,639.05. The case collapsed when the defense
established two facts: that the warning on the 911 document was on
every document that Bell South produced for internal use, however
important or unimportant, and that the information it contained was
routinely provided to anyone who asked for it. One document,
containing a more extensive version of the information published in
Phrack, information Bell South had claimed to value at just under
eighty thousand dollars, was sold by Bell South for $13.
In the ancient days of single sex college dormitories there was a
social institution called a panty raid. A group of male students
would access, without authorization, a dormitory of female students
and exit with intimate articles of apparel. The objective was not
acquiring underwear but defying the authority of the college
administration. Craig Neidorff engaged in a virtual panty raid–and
ended up pleading guilty to a felony and serving time in prison
[check details]. Robert Riggs received the booty from a
virtual panty raid and displayed it in his virtual window. For that
offense, the federal government attempted to imprison him for a term
of ... . [check–6-12 years?].
Part of the problem, again, was that the technology was new, hence
unfamiliar to many of the people–cops, lawyers, judges–involved
in the case. Dealing with a world they did not understand, they were
unable to distinguish between a panty raid and a bank robbery. But
another problem was that the law the case was prosecuted under was
designed to deal with the theft and transportation of physical
objects.
It was natural to ask the questions appropriate to the law under
which the case was prosecuted–including how much the object
stolen cost to produce. But what was labeled theft was in fact
copying; after Neidorff stole the document, Bell South still had it.
The real measure of the damage was not what it cost to produce the
document but the cost to Bell South of other people having the
information. Bell South demonstrated, by its willingness to sell the
same information at a low price, that it regarded that cost as
negligible. Robert Riggs was prosecuted under a metaphor. On the
evidence of at least that case, it was the wrong metaphor.
Crime or
Tort?
Bell South's original figure for the cost of producing the
911 document was one that no honest person could have produced. If
you disagree, ask yourself how Bell South would have responded to an
employee who, sending in his travel expenses for a hundred mile trip,
included the full purchase price of his car–the precise
equivalent of what Bell South did in calculating the cost of the
document. Bell's testimony about the importance and secrecy of the
information contained in the document was also false, but not
necessarily dishonest; whoever gave it may not have known that the
firm provided the same information to anyone who asked for it. Those
two false statements played a major role in a criminal prosecution
that could have put Robert Riggs in prison and did cost him, his
family and his supporters hundreds of thousands of dollars in legal
expenses.
Knowingly making false statements that cost other people money is
usually actionable. But the testimony of a witness in a trial is
privileged–even if deliberately false, the witness is not
liable for the damage done. He can, of course, be prosecuted for
perjury. But that decision is made not by the injured party but by
the state.
Suppose the same case had occurred under tort law. Bell South sues
Riggs for $79,449. In the course of the trial it is established that
the figure was wildly inflated by the plaintiff, that in any case the
plaintiff still has the property, so has a claim only for damage done
by the information getting out, and that that damage is zero since
the information was already publicly available from the plaintiff.
Not only does Bell South lose its case, it is at risk of being sued
for malicious prosecution–which is not privileged. In addition,
of course, Bell South, rather than the federal government, would have
been paying the costs of prosecution. Hence putting such cases under
tort law would have given Bell South an incentive to check its facts
and figure out whether it had really been injured before, not after,
it initiated the case–saving everyone concerned a good deal of
time, money and unpleasantness.
One advantage of tort law is that the plaintiff might have been
liable for the damage it did by claims that it knew were false.
Another is that it would have focused attention on the relevant issue–not
the cost of producing the document but the injury to the plaintiff
from having it copied. That is a familiar issue in the context of
trade secret law, which comes considerably closer than criminal law
to fitting the actual facts of the case.
A further problem with criminalizing such acts is illustrated by the
fate of Craig Neidorff. Unlike Robert Riggs, he accepted a plea
bargain and went to jail. One reason, presumably, was the threat of a
much longer jail term if the case went to trial and he lost. Criminal
law, by providing the prosecution with the threat of very severe
punishments, poses the risk that innocent defendants may agree to
plead guilty to a lesser offense. If the case had been a tort
prosecution by the victim, the effective upper bound on damages would
have been everything that Neidorff owned.
There is, of course, another side to that argument. Under tort law,
the plaintiff pays for the prosecution. If winning the case is likely
to be expensive and the defendant does not have the money to pay
large damages, it may not be worth suing in the first place–in
which case there is no punishment and no incentive not to commit the
tort. That problem–providing an adequate incentive to prosecute
when prosecution is private–is one we will return to in chapter
???, where I argue that the loss of privacy due to modern information
processing has some interesting implications for the choice between
private and public enforcement of law.
Part
IV: Crime and Control
XII:
The Future of Computer Crime
Although the previous chapter discussed some past computer
crimes, its subject was not computer crime but metaphor. This one is
on computer crime–future computer crime.
The Past as
Prologue
In the early years, computers were large standalone machines;
most belonged to governments, large firms, or universities. Those
organizations frequently used them to take important real world
actions–writing checks, keeping track of orders, delivering
goods. The obvious tactic for the computer criminal was to get access
to those machines and change the information they contained in ways
that benefited him–creating fictitious orders and using them to
have real goods delivered that he had not really paid for, arranging
to have checks written to an account he controlled in payment for
nonexistent services,
[88]
or, if the computer was used by a bank, transferring money from other
people's accounts to his.
As time passed, it became increasingly common for large machines to
be accessible from off site over telephone lines. That was an
improvement from the standpoint of the criminal. Instead of having to
gain admission to a computer facility–with the risk of being
caught–he could access the machine from off site, evading
computer defenses rather than locked doors.
While accessing computers to steal money or stuff was the most
obvious form of computer crime, there were other possibilities.
One was vandalism. A discontented employee or ex-employee could crash
the firm's computer or erase its data. But this was a less serious
problem with the new technology than the old. If a vandal smashes
your truck, you have to buy another truck. If he crashes your
computer, all you have to do is reboot. Even if he wipes your hard
drive you can still restore from your most recent backup, losing only
the most recent data.
A more interesting possibility was extortion. In one case, an
executive of a British firm decided that it was time to retire–in
comfort. He took the reels of tape that were the mass storage for the
firm's computer, the backup tapes, and the extra set of backups that
were stored off site, erased the information actually in the
computer, and departed. He then offered to sell the tapes–containing
information that the firm needed for its ordinary functioning–back
to the firm for a mere £???.
[89]
In a world with anonymous ecash, the payoff could have been made over
the net through a remailer. In a world of strong privacy, he could
have located a firm in the business of collecting payoffs and
subcontracted the collection end of his project. Unfortunately for
the executive, he committed his crime too early. He had to collect
the payoff himself–and was caught doing it.
Wonders of
a Networked World
Large computers controlling lots of valuable stuff still
exist, but nowadays they are likely to be part of networks. In
addition, tens of millions, soon hundreds of millions, of small
computers are linked by the internet. This opens up some interesting
possibilities.
Some years back, the Chaos Computer Club of Hamburg, Germany,
demonstrated one of them on German television. What they had written
was an ActiveX control, a chunk of code downloaded from a website
onto the user's computer. It was designed to work with Quicken, a
widely used accounting package. One of the things Quicken can do is
pay bills online. The control they demonstrated modified Quicken's
files to add an additional payee. Trick a million people into
downloading it, have each of them pay you ten marks a month–a
small enough sum so that it might take a long time to be noticed–and
retire.
One of the classic computer crime stories–possibly apocryphal–concerns
a programmer who computerized a bank's accounting system. After a few
months, bank officials noticed that something seemed to be wrong–a
slow leakage of money. But when they checked the individual accounts,
everything balanced. Eventually someone figured out the trick. The
programmer had designed the system so that all rounding errors went
to him. If you were supposed to receive $13.436 in interest, you got
$13.43 and his account got the remaining 0.6 cents. It was a modest fraud–a
fraction of a cent is not much money, and nobody normally worries
about rounding errors anyway. But if the bank has a million accounts
and pays interest daily, the total comes to about five thousand
dollars a day.
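The arithmetic behind that figure is easy to check. On the assumption
(mine, not the story's) that the truncated fraction of a cent averages
half a cent per account:

```python
# Back-of-the-envelope check of the salami scheme described above.
# Assumption: the fractional cent shaved off each interest payment
# is uniformly distributed, so it averages 0.5 cents per account.

accounts = 1_000_000          # number of accounts in the story
avg_skim_cents = 0.5          # average fraction of a cent per truncation

daily_take_dollars = accounts * avg_skim_cents / 100
print(daily_take_dollars)     # 5000.0, i.e. "about five thousand dollars a day"
```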
That sort of fraud is known as a "salami scheme"–nobody notices
one more thin slice missing from a salami. The Chaos Computer Club
had invented a mass production version. Hardly anyone notices a
leakage of a few dollars a month from his account–but with
millions of accounts, it adds up fast. It is the old computer crime
of tricking a computer into transferring lots of money to you,
modernized to apply to a world with lots of networked computers, each
controlling fairly small resources. So far as I know, nobody has yet
put this particular form of computer crime into practice, despite the
public demonstration that it could be done. But someone will.
Another old crime was extortion–holding the contents of a
firm's computer to ransom. The modern version could use either a
downloaded ActiveX control or a computer virus–and take
advantage of the power of public key encryption. Once the software
gets onto the victim's computer, it creates a large random number and
uses it as the key to encrypt the contents of the hard drive, erasing
the unencrypted version as it does so. The final step is to encrypt
the key using the criminal's public key and display a message to the
victim.
The message tells him that he can have the contents of his hard drive
back for twenty dollars in anonymous ecash, sent to the criminal
through a suitable remailer; the money should be accompanied by the
encrypted key, which the message includes. The extortionist will send
back the decrypted key, and the software with which it can be used to
decrypt the hard drive.
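The machinery described here is ordinary hybrid encryption: a fresh
random symmetric key locks the bulk data, and only the holder of the
matching private key can recover that key. A toy sketch of the pattern,
with deliberately insecure stand-ins of my own invention–textbook RSA
on tiny primes and a hash-based keystream–purely to show the logic,
not to implement anything real:

```python
import hashlib
import secrets

# Toy textbook RSA with tiny primes. Illustration only, NOT secure.
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17                        # public exponent
d = pow(e, -1, phi)           # private exponent (Python 3.8+ modular inverse)

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Hash-based stream cipher stand-in for a real symmetric cipher.
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# The scheme in the text: pick a random session key, encrypt the data
# with it, then lock the session key under the public key (e, n).
session_key = secrets.randbelow(n - 2) + 2
ciphertext = keystream_xor(str(session_key).encode(),
                           b"contents of the hard drive")
locked_key = pow(session_key, e, n)   # only the holder of d can undo this

# After the victim pays, the holder of d recovers the key and the data.
recovered_key = pow(locked_key, d, n)
plaintext = keystream_xor(str(recovered_key).encode(), ciphertext)
print(plaintext)              # b'contents of the hard drive'
```

Because each victim gets a different session key, decrypting one hard
drive tells the other victims nothing, which is exactly the scheme's
first attractive feature.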
This particular scheme has two attractive features–from the
standpoint of the criminal. The first is that since each victim's
hard drive is encrypted with a different key, there is no way they
can share the information to decrypt it–each must pay
separately. The second is that, with lots of victims, the criminal
can establish a reputation for honest dealing–after the first
few cases, everyone will know that if you pay you really do get your
hard drive back. So far as I know, nobody has done it yet, although
there was an old case where someone attempted a less sophisticated
version of the scheme, using floppy disks instead of
downloads.
[90]
What else can be done in the world of small networked computers?
Simple vandalism for the fun of it is one obvious possibility,
familiar in the form of computer viruses. A more productive
possibility is to imitate some of the earliest computer criminals–remember
the VPI graduate student–and steal, not money, but computing
power. At any instant, millions of desktop computers are twiddling
their thumbs while their owners are eating lunch or thinking about
what to type next. When you operate at a million instructions a
second, there's a lot of time between keystrokes.
The best known attempt to harness that wasted power is SETI@home,
part of SETI–the Search for Extra-Terrestrial Intelligence. It is a
volunteer effort
by which large numbers of individuals permit their computers,
whenever they happen to be idle, to work on a small part of the
immense job of number crunching necessary to search the haystack of
interstellar radio noise for the needle of information that might
tell us that, somewhere in the galaxy, someone else is home. Similar
efforts on a smaller scale have been used in experiments to test how
hard it is to break various forms of encryption–another project
that requires very large scale number crunching.
One could imagine an enterprising thief stealing a chunk of that
processing power–and perhaps justifying the crime to himself on
the grounds that nobody was using it anyway. The approach would be
along SETI's lines, but without SETI's public presence. Download a
suitable bit of software to each of several million unknowing
servants then use the Internet to share out the burden of very large
computing projects among them. Charge customers for access to the
world's biggest computer, while keeping its exact nature a trade
secret. Think of Randy Schwartz–who, whether or not he stole
trade secrets, had the reputation of grabbing all the CPU power he
could get his hands on.
Nobody has done it. My guess is that nobody will, since continuing
access is too easy to detect. But a more destructive version has been
implemented repeatedly. It is called a Distributed Denial of Service
attack–DDOS for short. To do it, you temporarily take over a
large number of networked computers and instruct each to spend all of
its time trying to access a particular web page–belonging to
some person or organization you disapprove of. A web server can send
out copies of its web page to a lot of browsers at once, but not an
unlimited number. With enough requests coming fast enough, the server
is unable to handle them all and the page vanishes from the web.
Distributed
Computing: The Solution the Problem Comes
From
We have been discussing problems due to software downloaded
from a web page to a user's computer, where it does something. Such
software originated as the solution to a problem implicit in
networked computing. The problem is server overload; the solution is
distributed computing.
You have a web page that does something for the people who access it–draws
a map showing them how to get to a particular address, say. Drawing
that picture–getting from information on a database to a map a
human being can read–takes computing power. Even if it does not
take very much, when a thousand people each want a different map
drawn at the same time it adds up–and your system slows
down.
Each of those people is accessing your page from his own computer.
Reading a web page does not take much in the way of computing
resources, so most of those computers are twiddling their thumbs–operating
at far below capacity. Why not put them to work drawing maps?
The web page copies to each of the computers a little map drawing
program–an ActiveX control or Java Applet. That only has to be
done once. Thereafter, when the computer reads the web page, the page
sends it the necessary information and it draws the map itself.
Instead of putting the whole job on one busy computer it is divided
up among a thousand idle computers. The same approach works for
multiplayer webbed games and a great variety of other applications.
It is a solution–but a solution that, as we have just seen,
raises a new problem. Once that little program gets on your computer,
who knows what it might do there?
Microsoft's solution to that problem is to use digital signatures,
authenticated by Microsoft, to identify where each ActiveX control
comes from. Microsoft's response to the Hamburg demonstration was
that there was really no problem. All a user had to do to protect
himself was to tell his browser not to take controls from strangers–which
he could do by an appropriate setting of the security level on
Explorer.
This assumes Microsoft cannot be fooled into signing bogus code. I
can think of at least two ways of doing it. One is to get a job with
a respectable software company and insert a little extra code into
one of their ActiveX controls–which Microsoft would then sign.
The other is to start your own software company, produce useful
software that makes use of an ActiveX control, add an additional
unmarked feature inspired by the Chaos Computer Club, get it signed
by Microsoft, put it up on the web–then close up shop and
decamp for Brazil.
Sun Microsystems has a different solution to the same problem. Java
Applets, their version of software for distributed computing, are
only allowed to play in the sandbox–designed to have a very
limited ability to affect other things in the computer, including
files stored on the hard drive. One problem with that solution is
that it limits the useful things an Applet can do. Another is that
even Sun sometimes makes mistakes. The fence around the sandbox may
not be entirely appletproof.
The odds are that both ActiveX and Applets will soon be history and
some new method will be used to enable distributed computing.
Whatever it is, it will face the same problem and the same set of
possible solutions. In order to be useful, it has to be able to do
things on the client computer. The more it can do, the greater the
possibility of doing things that the owner of that computer would
disapprove of. That can be controlled either by controlling what gets
downloaded and holding the firm that produced it responsible or by
strictly limiting what any such software is allowed to do–Microsoft's
and Sun's approaches respectively.
Growing
Pains
Readers with high speed internet connections may at this
point be wondering if they ought to pull the plug. I don't think
so.
There are two important things to remember about the sort of problem
we have been discussing. The first is that it is your computer,
sitting on your desktop. A bad guy may be able to get control of it
by some clever trick, by getting you to download bogus software or a
virus. But you start with control–and whatever the bad guy
does, you can always turn the machine off, boot from a CD, wipe the
hard drive, restore from your backup and start over. The logic of the
situation favors you–it is only bad software design and
careless use that makes it possible for other people to take over
your machine.
The second thing to remember is that this is a new world and we have
just arrived. Most desktop computers are running under software
originally designed for standalone machines. It is not surprising
that such software frequently proves vulnerable to threats that did
not exist in the environment it was designed for. As software evolves
in a networked world, a lot of the current problems will gradually
vanish.
Until the next innovation.
The Worm
Turns: Clients Fooling Servers
We have been discussing crimes committed by a server against
clients–downloading to them chunks of code that do things their
owners would not approve of. I once got into an interesting
conversation with someone who had precisely the opposite problem. He
was in the computer gaming business–online role playing games
in which large numbers of characters, each controlled by a different
player, interact in a common universe, allying, fighting each other,
gaining experience, becoming more powerful, acquiring enchanted
swords, books of spells, and the like.
People running online games want lots of players. But as more and
more players join, the burden on the server supporting the game
increases–it has to keep track of the characteristics and
activities of an increasing number of characters. Ideally, a single
computer keeps track of everything in order to maintain a consistent
universe–but there is a limit to what one computer can do.
The solution is distributed computing. Offload most of the work to
the player's computer. Let it draw the pretty pictures on the screen,
maps of a dungeon or a fighter's eye view of the monster he is
fighting. Let it keep track of how much gold the character has, how
much experience he has accumulated, what magic devices are in his
pouch, what armor on his back. The server still needs to keep track
of the shared fundamentals–who is where–but not the
details. Now the game scales–when you double the number of
players you almost double the computing power available, since the
new players' computers are now sharing the load.
Like many solutions, this one comes with a problem. If my computer is
keeping track of how strong my character is and what goodies he has,
that information is stored on files on my hard drive. My hard drive
is under my control. With a little specialized knowledge about how
the information is stored–provided, perhaps, by a fellow
enthusiast online–I can modify those files. Why spend hundreds
of hours fighting monsters in order to become a hero with muscles of
steel, lightning reactions, and a magic sword, when I can get the
same result by suitably editing the file describing my character? In
the online gaming world, where many players are technically
sophisticated, competitive, and unscrupulous–or, if you prefer,
where many players regard competitive cheating as merely another
level of the game–it is apparently a real problem.
I offered him a solution; I do not know if he, or anyone else, has
tried implementing it:
The server cannot keep track of all the details of all the
characters, but it can probably manage one in a hundred. Pick a
character at random and, while his computer is calculating what is
happening to him, run a parallel calculation on the server. Follow
him for a few days, checking to make sure that his characteristics
remain what they should be. If they do, switch to someone else.
What if the character has mysteriously jumped twenty levels since the
last time he logged off? Criminal law solves the problem of deterring
offenses that are hard to detect–littering, for example–by
scaling up the punishment to balance the low probability of imposing
it. It should work here too.
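The deterrence arithmetic is the standard one: if only a fraction of
characters is audited at any moment, scale the penalty up by the
inverse of that fraction so that cheating still does not pay. A sketch
with made-up numbers (the one-in-a-hundred audit rate is from the text;
the gain figure is purely illustrative):

```python
# Expected-punishment arithmetic for the random-audit scheme above.

audit_fraction = 1 / 100       # server parallel-checks 1 character in 100
gain_from_cheating = 50        # illustrative: hours of play saved by editing files

# For deterrence, the expected penalty must exceed the gain:
#   audit_fraction * penalty > gain_from_cheating
minimum_penalty = gain_from_cheating / audit_fraction
print(minimum_penalty)         # 5000.0, i.e. losing the character entirely
```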
I log into the game where my character, thanks to hundreds of hours
of playing assisted by some careful hacking of the files that
describe him, is now a level 83 mage with a spectacular collection of
wands and magic rings. There is a surprise waiting:
"You wake up in the desert, wearing only a loin cloth. Clutched in
your hand is a crumpled parchment."
"Look at the Parchment"
"It looks like your handwriting, but unsteady and trailing off into
gibberish at the end."
"Read the Parchment"
The parchment reads:
"I shouldn't have done it. Dabbling in forbidden arts. The Demons are
coming. I can feel myself pouring away. No, No, No ... ."
"Show my statistics"
Level: 1. Possessions: 1 loincloth.
Crime doesn't pay.
High Tech
Terrorism: Nightmare or Employment Project
A few years ago, I participated in a conference called to
advise a presidential panel investigating the threat of high tech
terrorism. So far as I could tell, the panel originated from an
exercise by the National Security Agency, in which they demonstrated
that, had they been real bad guys, they could have done a great deal
of damage by breaking into computers controlling banks, hospitals,
and much else.
I left the conference uncertain whether what I had just seen was a
real threat or an NSA employment project, designed to make sure that
the end of the Cold War did not result in serious budget cuts.
Undoubtedly a group of highly sophisticated terrorists could do a lot
of damage by breaking into computers. But then, a group of
sophisticated terrorists could do a lot of damage in low tech ways
too. I had seen no evidence that the same team could not have done as
much damage–or more–without ever touching a computer. A
few years after that conference, a group of not terribly
sophisticated terrorists demonstrated just how much damage they could
do by flying airplanes into tall buildings. No computers
required.
I did, however, come up with one positive contribution to the
conference. If you really believe that foreign terrorists breaking
into computers in order to commit massive sabotage is a problem, the
solution is to give the people who own computers adequate incentives
to protect them–to set up their software in ways that make it
hard to break in. One way of doing so would be to decriminalize
ordinary intrusions. If the owner of a computer cannot call the cops
when he finds that some talented teenager has been rifling through
his files, he has an incentive to make it harder to do so, to protect
himself. If the computers of America are safe against Kevin Mitnick,
Saddam Hussein won't have a chance.
XIII:
Law Enforcement x 2
The previous chapter dealt with the use of new technologies
by criminals; this one deals with the other side of the picture. I
begin by looking at ways in which new technologies can be used to
enforce the law, and some associated risks. I then go on–via a
brief detour to the eighteenth century–to consider how
technologies discussed in earlier chapters may affect not how law is
enforced but by whom.
High
Tech Crime Control
Criminals are not the only ones with access to new
technologies; cops have it too. Insofar as enforcing law is a good
thing, new technologies that make it easier are a good thing. But the
ability to enforce the law is not an unmixed blessing–the
easier it is to enforce laws, the easier it is to enforce bad
laws.
There are two different ways in which our institutions can prevent
governments from doing bad things. One is by making particular bad
acts illegal. The other is by making them impossible. That
distinction appeared back in Chapter ???, when I argued that
unregulated encryption could serve as the 21st century
version of the Second Amendment–a way of limiting the ability
of government to control citizens.
For a less exotic example, consider the Fourth Amendment's
restrictions on searches–the requirement of a warrant issued
upon showing of reasonable cause. At least some searches under
current law–wiretaps, for instance–can be done without
the victim even knowing about it. What's the harm? If you have
nothing to hide, why should you object?
One answer is that the ability to search anyone at any time, to tap
any phone, puts too much power in the hands of law enforcement
agents. Among other things, it lets them collect information
irrelevant to crimes but useful for blackmailing people into doing
what they are told. For similar reasons, the U.S., practically alone
among developed nations, has never set up a national system of
required I.D. cards–although that may have changed by the
time this book is published. Such a system would make law enforcement
a little easier. It would also make abuses by law enforcement
easier.
The underlying theory, which I think everyone understands although
few put it into words, is that if the government has only a little
power, it can only do things that most of the population approves of.
If it has a lot of power, it can do things that most people
disapprove of–including, in the long run, converting a nominal
democracy into a de facto dictatorship. Hence the delicate balance
intended to provide enough power to prevent most murder and robbery
but not much more than that.
How might new technologies available for law enforcement affect that
balance?
Knowing Too
Much
A policeman stops me and demands to search my car. I ask him
why. He replies that my description fits closely the description of a
man wanted for murder. Thirty years ago, that would have been a
convincing argument. It is less convincing today. The reason is not
that policemen know less but that they know more.
In the average year, there are about twenty thousand murders in the
U.S. With twenty thousand murders and (I'm guessing) several thousand
wanted suspects, practically everyone fits the description of at
least one of them. Thirty years ago, the policeman would have had
information only on those in his immediate area. Today he can access
a databank listing all of them.
Consider the same problem as it might show up in a courtroom. A
rape/murder is committed in a big city. The jury is told that the
defendant's DNA matches that of the perpetrator–well enough so
that there is only a one in a million probability that the match
would happen by chance. Obviously he is guilty–those odds
easily satisfy the requirements of "beyond a reasonable doubt."
There are two problems with that conclusion. The first is that the
one in a million statement is false. The reason it is false has to do
not with DNA but with people. The figure was calculated on the
assumption that all tests were done correctly. But we have plenty of
evidence from past cases that the odds that they weren't–the
odds that someone in the process, whether the police who sent in the
evidence or the lab that tested it, was either incompetent or
dishonest–are a great deal higher than one in a
million.
[91]
The second problem is not yet relevant, but soon will be. To see it,
imagine that we have done DNA tests on everyone in the country in
order to set up a national database of DNA information, perhaps as
part of a new nationwide system of I.D. cards.
Under defense questioning, more information comes out. The way the
police located the suspect was by going through the DNA database. His
DNA matched the evidence, he had no alibi, so they arrested him.
Now the odds he is guilty shift down dramatically. The chance that
someone's DNA would match the sample as closely as his did is only
one in a million. But the database contains information on seventy
million men in the relevant age groups. By pure chance, about seventy
of them will match. All we know about the defendant is that he is one
of those seventy, doesn't have an alibi, and lives close enough to
where the crime happened so that he could conceivably have committed
it. There might easily be three or four people who meet all of those
conditions, so the fact that the defendant does is very weak evidence
that he is guilty.
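The shift in odds is a base-rate calculation, using the numbers from
the example above:

```python
# Base-rate arithmetic behind the DNA-database example.

match_prob = 1e-6              # chance an innocent man matches by accident
database_size = 70_000_000     # men of the relevant ages in the database

expected_random_matches = match_prob * database_size
print(expected_random_matches)         # 70.0 innocent matches by pure chance

# If the guilty man is in the database he matches too, so before any
# other evidence a match found by trawling the database points to the
# guilty man with probability only about:
p_guilty_given_match = 1 / (1 + expected_random_matches)
print(round(p_guilty_given_match, 3))  # 0.014, far from "beyond reasonable doubt"
```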
Consider a third version of the same problem–this time one that
has existed for the past twenty years or so. An economist interested
in crime has a theory–that the death penalty increases the risk
to police of being killed, since cornered murder suspects have
nothing to lose. To test that theory, he runs a regression–a
statistical procedure designed to see how different factors affect
the number of police killed in the line of duty. The death penalty is
not the only factor, so he includes additional terms for variables
such as the fraction of the population in high crime age groups,
racial mix, poverty level, and the like. When he publishes his
results, he reports that the regression fits his prediction at the
.05 level: there is only one chance in twenty that the result would
fit his prediction as well as it did by chance.
What he does not mention in the article is that the regression he
reports is one of sixty that he ran–varying which other factors
were included, how they were measured, how they were assumed to
interact. With sixty regressions, the fact that at least one came out
as he predicted does not tell us very much–by pure chance,
about three of them should.
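The "about three of them" figure is just sixty runs times the .05
significance level, and on the simplifying assumption that the sixty
specifications are independent tests, at least one spurious success is
close to a sure thing:

```python
# Why sixty regressions make one "significant" result nearly worthless.
# Simplifying assumption: the sixty specifications are independent.

runs = 60
alpha = 0.05                   # the .05 significance level

expected_false_positives = runs * alpha
p_at_least_one = 1 - (1 - alpha) ** runs

print(expected_false_positives)   # 3.0, "about three of them should"
print(round(p_at_least_one, 2))   # 0.95, a spurious hit is almost guaranteed
```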
Fifty years ago, running a regression was a lot of work–done,
if you were lucky, on an electric calculating machine that did
addition, multiplication, and not much else. Doing sixty of them was
not a practical option, so the fact that someone's regression fit his
theory at the .05 level was evidence that the theory was to some
degree true. Today, any academic–practically any schoolchild–has
access to a computer that can do sixty regressions in a few minutes.
That makes it easy to do a specification search–try lots of
different regressions, each specifying the relationship a little
differently, until you find one that works. You can even find
statistical packages that do it for you. Hence the fact that an
article reports a successful regression no longer provides much
support for the author's theory. At the very least, you have to
report the different specifications you tried, give a verbal summary
of how they came out, and detailed results for a few of them. If you
really want to persuade people you have to make your dataset freely
available–ideally over the internet–and let other people
run as many regressions on it in as many different ways as they want
until they convince themselves that the relationship you found is
really there, not an illusion created by carefully selecting which
results you reported.
All of these examples–the police stop on suspicion, the DNA
evidence, the specification search–involve the same issue. By
increasing access to information you make it easier to find evidence
for the right answer. Also for the wrong answer.
If you are the one looking for evidence, the additional information
is an asset. The traffic cop can check his database, see that the
person whose description I fit was last reported on the other side of
the country, and decide not to bother stopping me. The police, having
located several suspects who fit the DNA evidence, can engage in a
serious attempt to see if one of them is guilty and only make an
arrest if there is enough additional evidence to convict. The
researcher can report his specification search–and use its
results to improve his theory.
But in each case, the additional information also makes it easier to
generate bogus evidence. The traffic cop who actually wants to stop
me because of the color of my skin, or because I have out of state
plates, or in the hope that he will find something illegal and be
offered a bribe not to report it, can honestly claim that I met the
description of a wanted man. The D.A. who wants a good conviction
rate before his next campaign for high office can report the DNA fit
and omit any explanation of how it was obtained and what it really
means. And the academic researcher, desperate for publications to
bolster his petition for tenure, can selectively remember only those
regressions that came out right. If we want to prevent such, we must
alter our rules and customs accordingly, raising the standard for how
much evidence it takes to reflect how much easier it has become to
produce evidence–even for things that aren't true.
Every
Phone
The hero of
The President's Analyst, who spends much
of the film evading various bad guys who want to kidnap him and use
him to influence his star patient, has temporarily escaped his
pursuers and made it to a phone booth; he calls up a friendly CIA
agent to come rescue him. When he tries to leave the booth, the door
won't open. Down the road comes a phone company truck loaded with
phone booths. The truck's crane picks up the booth containing the
analyst, deposits it in the back, replaces it with an empty booth and
drives off.
A minute later a helicopter descends containing the CIA agent and a
KGB agent who is his temporary ally. They look in astonishment at the
empty phone booth. The American speaks first:
"It can't be. Every phone in America tapped?"
The response (you will have to imagine the Russian accent)
"Where do you think you are–Russia?"
A great scene in a very funny movie. But it may not be a joke much
longer.
Now fast forward to the debate over the digital wiretap bill–legislation
pushed by the FBI to require phone companies to provide law
enforcement agents facilities to tap digital phone lines.
[92]
One
point made by critics of the legislation was that the FBI appeared to
be demanding the ability to simultaneously tap about one phone out of
a hundred. While that figure was probably an exaggeration–there
was disagreement as to the exact meaning of the capacity the FBI was
asking for–it was not much of an exaggeration.
[93]
As the FBI pointed out, that didn't mean they would be using all of
that capacity.
[94]
In order to tap one percent of the phones in any particular place–say
a place with lots of drug dealers–they needed the ability to
tap one percent of the phones in every place. Besides, the one
percent figure represented the level that phone companies had to be
capable of increasing to, not a level they had to routinely provide,
it only applied in parts of the country where the FBI thought it
might need such a capacity, and it included not only wire taps but
also less intrusive surveillance, such as keeping track of who called
whom but not of what they said.
At the time they made the request, wiretaps were running at a rate of
about a thousand a year–not all at the same time. Even after
giving the FBI the benefit of all possible doubt, the capacity they
asked for would only be needed if they were contemplating an enormous
increase in telephone surveillance.
The FBI defended the legislation as necessary to maintain the status
quo, to keep developments in communications technology from reducing
the ability of law enforcement to engage in court ordered
interceptions. Critics argued that there was no evidence such a
problem existed. My own suspicion is that the proposal was indeed
motivated by technology–but not that technology.
The first step in the argument is to ask why, if phone taps are as
useful as law enforcement spokesmen claim, there are so few of them
and they produce so few convictions. The figure for 1995 was a total
of 1058 authorized interceptions at all levels, Federal, state, and
local. They were responsible for a total of 494 convictions, mostly
for drug offenses. Total drug convictions for that year, at the
Federal level alone, were over 16,000.
The answer is not the reluctance of courts to authorize wiretaps. The
National Security Agency, after all, gets its wiretaps authorized by
a special court,
[95]
widely reported to have never turned down a request. The answer is
that wiretaps are very expensive. Some rough calculations by Robin
Hanson
[96] suggest
that on average, in 1993, they cost more than fifty thousand dollars
each. Most of that was the cost of labor–police officers' time
listening to 1.7 million conversations at a cost of about
$32/conversation.
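Hanson's estimate can be reproduced with a line or two of arithmetic. A minimal sketch, taking the conversation and cost figures from the text and assuming roughly a thousand taps a year:

```python
# Rough reconstruction of the wiretap-cost estimate. The conversation
# count and per-conversation cost are the figures quoted in the text;
# the tap count (~1,000 a year) is the approximate number of
# authorized interceptions mentioned above.
conversations = 1_700_000        # conversations monitored in 1993
cost_per_conversation = 32       # dollars, mostly officers' listening time
taps = 1_000                     # approximate annual number of wiretaps

labor_cost = conversations * cost_per_conversation
cost_per_tap = labor_cost / taps
print(f"total labor cost: ${labor_cost:,}")          # $54,400,000
print(f"average cost per tap: ${cost_per_tap:,.0f}") # $54,400
```

Fifty-four thousand dollars per tap is consistent with the "more than fifty thousand dollars each" figure above.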
That problem has been solved. Software to convert speech into text is
now widely available on the market. You no longer need a human being
on one end of the wire. Instead you can have a computer listen,
convert the speech to text, search the text for key words and
phrases, and notify a human being if it gets a hit. It is true that
current commercial software is not very reliable unless it has first
been trained by the user to his voice. But an error level that would
be intolerable for using a computer to take dictation is more than
adequate to pick up key words in a conversation. And besides, the
software is getting better.
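The pipeline just described (transcribe, scan for key phrases, alert a human on a hit) can be sketched in a few lines. A real system would feed in the output of a speech-to-text engine; the watch list and transcript here are invented for illustration:

```python
# Toy sketch of the keyword-scanning stage. A real system would take
# its input from speech-to-text software; here the transcript is
# supplied directly. The watch list is invented for illustration.
KEYWORDS = {"shipment", "meet at", "cash only"}

def flag_conversation(transcript):
    """Return the watched phrases found in a transcript."""
    text = transcript.lower()
    return {kw for kw in KEYWORDS if kw in text}

hits = flag_conversation("The shipment arrives Tuesday. Cash only.")
print(sorted(hits))  # ['cash only', 'shipment']
```

Because the scan only needs hits rather than a clean transcript, a fairly high transcription error rate is tolerable, which is the point made above.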
Computers work cheap. If we assume that the average American spends
half an hour a day on the phone–a number created out of thin
air by averaging in two hours for teenagers and ten minutes for
everyone else–that gives, on average, about six million phone
conversations at any one time. Taking advantage of the wonders of
mass production, it should be possible to produce enough dedicated
computers to handle all of that for less than a billion dollars.
Every phone in America.
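The arithmetic behind that estimate is easy to reproduce. A sketch, assuming a U.S. population of roughly 280 million (my figure; the half-hour average is the text's):

```python
# Back-of-the-envelope version of the estimate above. The population
# figure is an assumption; the half hour a day is the author's number.
population = 280_000_000
hours_on_phone_per_day = 0.5

lines_in_use = population * hours_on_phone_per_day / 24
print(f"lines in use at any moment: {lines_in_use/1e6:.1f} million")  # ~5.8

budget = 1_000_000_000   # the "less than a billion dollars" figure
print(f"hardware budget per line: ${budget/lines_in_use:.0f}")        # ~$171
```

A few hundred dollars of dedicated hardware per monitored line is what makes the scenario worth taking seriously.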
A Legal Digression: My Brief for the Bad Guys
Law enforcement agencies still have to get court orders for
all of those wiretaps–and however friendly the courts may be,
persuading judges that every phone in the country needs to be tapped,
including theirs, might be a problem.
Or perhaps not. A computer wiretap isn't really an invasion of
privacy–nobody is listening. Why should it require a search
warrant? If I were an attorney for the FBI, facing a friendly
judiciary, I would argue that a computerized tap is at most
equivalent to a pen register, which keeps track of who calls whom and
does not currently require a warrant. The tap only rises to the level
of a search when a human being listens to the recorded conversation.
Before doing so, the human being will, of course, go to a judge,
offer the judge the computer's report on key words and phrases
detected, and use that evidence to obtain a warrant. Thus law
enforcement will be free to tap all our phones without recourse to
the court system–until, of course, it finds evidence that we
are doing something wrong. If we are doing nothing wrong, only a
computer will hear our words–so why worry? What do we have to
hide?
Living I.D. Cards
In the wake of the attack on the World Trade Center there has
been political pressure to establish a national system of I.D. cards;
currently (defined by when I write, not when you read) it is unclear
whether it will succeed. In the long run it may not matter very much.
Each of us, after all, already carries with him a variety of
biological identification cards–face, fingerprints, retinal
patterns, DNA. Given adequate technologies for reading that
information, a paper card is superfluous.
In low density populations, face alone is adequate. Nobody needs to
ask a neighbor for identification because everybody already knows
everybody else. That system breaks down in the big city because we
aren't equipped to store and search a million faces.
We could be. Facial recognition software is already pretty good and
getting better. As it gets better, there is no technical reason why
someone, most probably law enforcement, could not compile a database
of every face in the country, along with associated information.
Point the camera at someone and read off name, age, citizenship,
criminal history, and whatever else is in the database.
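As a toy illustration, the database being imagined is just a table keyed by some biometric signature. Every record and key here is invented:

```python
# Toy model of the face database the text imagines: a biometric
# signature (represented here by a placeholder string) maps to
# whatever information has been associated with it. All invented.
faces = {
    "signature-0314": {"name": "J. Doe", "age": 42, "citizenship": "US"},
}

def identify(signature):
    """Return the record for a signature, or None if unknown."""
    return faces.get(signature)

print(identify("signature-0314"))  # the stored record
print(identify("signature-9999"))  # None -- not in the database
```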
Faces are an imperfect form of identification, since there are ways
to change your appearance. Fingerprints are better. There already
exist commercial devices to recognize fingerprints, used to control
access to laptop computers–if someone else puts his finger on
the sensor, the computer refuses to give him access. I do not know
how close we are to an inexpensive fingerprint reader, matched with a
filing system, but it does not seem like an inherently difficult
problem. Nor does the equivalent using a scan of retinal patterns.
Cheap DNA recognition is a little further off–but there too,
technology has been progressing rapidly.
We could make laws forbidding law enforcement from compiling and
using such databases, but it does not seem likely that we will, given
the obvious usefulness of the technology in doing the job we want
them to do. Even if we did forbid it, enforcing the ban, against both
law enforcement and everyone else, would be difficult. When the
Social Security system was set up, the legislation explicitly forbade
the use of the Social Security number as a national identifier.
Nonetheless, any time you deal with the Federal government, or any of
a lot of other people, they ask for it. Even if there is no official
national database of faces, each police department will have its own
collection of faces that interest it. If expanding that collection is
cheap–and it will be–"interest" will become a weaker and
weaker requirement. And there is nothing to stop different police
departments from talking to each other–especially in a world of
widely available high speed networks.
The Obsolescence of Criminal Law
A few chapters back, I raised the question of whether
unauthorized access to a computer ought to be treated as a tort or a
crime. It is now time to return to that issue in a broader
context.
Most of us think of law enforcement as almost entirely the province
of government. In fact that has never been true, and is not true now.
Total employment in private crime prevention–security guards,
burglar alarm installers, and the like–has long been greater
than in public enforcement. It is true that catching and prosecuting
criminals is almost entirely done by agents of government, but that
is only because crime is defined as the particular sort of offense
that is prosecuted by the government. Precisely the same action–killing
your wife, for example–can be prosecuted either by the state as
a crime or by private parties as a tort. That fact suggests at least
the possibility of a purely private system, one in which all wrongs
are privately prosecuted, perhaps by the victim or his agents. Such
systems have existed in the past. They may again.
A Brief Temporal Digression
Consider, for one of my favorite examples, criminal
prosecution in 18th century England. On paper, they had
the same distinction we do between crimes and torts. A crime was an
offense against the crown–the case was Rex v
Friedman.
The crown owned the case, but it didn't prosecute it. England in the
18th century had no police as we understand the word–no
professionals employed by government to catch and convict criminals.
There were constables–sometimes unpaid–with powers of
arrest, but figuring out who to arrest was not part of their job
description. It was not until the 1830's that Robert Peel created the
first English police force. Not only were there no police, there were
no public prosecutors either–the equivalent of the District
Attorney in the modern American system did not exist in England until
the 1870's, although for some decades prior to that police officers
functioned as de facto prosecutors.
With neither police nor public prosecutors, criminal prosecution was
necessarily private. The legal rule was that any Englishman could
prosecute any crime. In practice, prosecution was usually by the
victim or his agent.
That raises an obvious puzzle. When I sue someone under tort law, at
least I have the hope of winning and being paid damages–with
luck more than enough to cover my legal bills. But a private
prosecutor under criminal law had no such incentive. If he got a
conviction the criminal would be hanged, transported, permitted to
enlist in the armed services, or pardoned–none of which put any
money in the prosecutor's pocket. So why did anyone bother to
prosecute?
One answer is that the victim prosecuted in order to deter–not
crimes in general but crimes against himself. That makes sense if he
is a repeat player–the owner of a store or factory at continual
risk from thieves. Hang one and the others will get the message. That
is why, even today, in a system where prosecution is nominally
entirely public, some department stores have signs announcing that
they prosecute shoplifters. Arguably it is why Intel prosecuted Randal
Schwartz.
Most potential victims were not repeat players. For them, the
18th century English came up with an ingenious solution–societies
for the prosecution of felons. There were thousands of them. The
members of each contributed a small sum to a pooled fund, available
to pay the cost of prosecuting a felony committed against any one of
them. The names of the members were published in the newspaper for
the felons to read. Potential victims thus precommitted themselves to
prosecute. They had made deterrence into a private good.
That set of institutions was eventually abandoned. One possible
explanation is that, in order for it to work, criminals had to know
their victims–at least well enough to know whether the victim
either had a reputation for prosecuting or was a member of a
prosecution association. As England became increasingly urbanized,
crime became increasingly anonymous. It did no good to join a
prosecution association and publish your membership in the local
paper if the burglar didn't know your name.
Forward Into the Past
One consequence of modern information processing technology
is the end of anonymity, at least in realspace. Public information
about you is now truly public–not only is it out there, anyone
who wants can find it. In an earlier chapter, I discussed that in the
context of threats to privacy; given the existing technology, privacy
through obscurity is no longer an option.
We can now see a different consequence. In the 19th
century, big cities made victims anonymous. But nobody is anonymous
any more. We are back in the 18th century.
Consider the implication for our earlier discussion of how to handle
unauthorized access to computers. One problem with using tort law is
inadequate incentive to prosecute–the random cracker probably
does not have enough resources to pay the cost of catching and convicting
him. That problem was solved two hundred and fifty years ago. Under
criminal law there were no damages to collect, so 18th
century Englishmen found a different incentive. The same approach
that made deterrence a private good for them could equally well make
it a private good for us.
All it takes is the online equivalent of a society for the
prosecution of felons. Subscribers pay an annual fee, in exchange for
which they are guaranteed prosecutorial services if someone accesses
their computer in ways that impose costs on them. The names of
subscribers–and their I.P. addresses–are posted on a web
page, for prudent crackers to read and avoid. If the benefit of
deterrence is worth the cost, there should be lots of customers. If
it is not, why provide deterrence at the taxpayers' expense?
There remains one problem. Under ordinary tort law the penalty is
either the damage done or the largest amount the offender can pay,
whichever is less. If computer intruders are hard to catch, that
penalty might not be adequate to deter them. One time out of ten, the
intruder must pay for his damage–if he can. The other nine
times he goes free.
Criminal law solves that problem by permitting penalties larger,
sometimes much larger, than the damage done, thus making up for the
fact that only some fraction of offenders are caught, convicted, and
punished. Punitive damages in tort law achieve the same effect. But
punitive damages are, and criminal punishment is not, limited by the
assets of the offender–the criminal law can impose non-monetary
punishments such as imprisonment.
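The scaling argument can be made concrete with illustrative numbers; the one-in-ten catch rate is the text's, the damage figure is assumed:

```python
# Expected-penalty arithmetic behind the argument above.
damage = 10_000      # dollars of harm per intrusion (illustrative)
p_caught = 0.1       # the text's one-time-in-ten chance of being caught

# Plain tort law: the penalty equals the damage done, so the
# expected penalty is only a tenth of the harm.
expected_penalty = p_caught * damage
print(expected_penalty)  # 1000.0

# Criminal-style scaling: multiply the penalty by 1/p so that the
# expected penalty once again equals the harm.
scaled_penalty = damage / p_caught
print(round(p_caught * scaled_penalty))  # 10000
```

The catch, as the text notes, is that the scaled penalty may exceed what the offender can pay, which is where non-monetary punishments come in.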
So we have two possibilities for private enforcement of legal rules
against unauthorized access. One is to use ordinary tort laws, with
private deterrence as the incentive to prosecute. That works so long
as the assets of offenders are large enough so that taking them via a
tort suit is an adequate punishment to deter most offenses. The other
is to go all the way back to the 18th century–private
prosecution with criminal penalties.
I have discussed the problems with private prosecution–what are
the advantages? The main one is the advantage that private enterprise
usually has over state enterprise. The proprietors of an online
prosecution firm are selling a service to their customers on a
competitive market. The better they do their job, the more likely
they are to make money. If costs are high and quality low, they don't
have the option of getting bailed out by the taxpayers. Some other
advantages were discussed earlier.
The argument is more general than the narrow context of defending
computers against unwanted intruders. Information processing
technology eliminates the anonymity that urbanization created–in
that respect, at least, it puts us back in villages. Doing so
eliminates what was arguably the chief reason for the shift from
private to public prosecution–of all crime.
Private Enforcement Online
The argument so far has been based on history and theory. But
it was inspired, at least in part, by an online news story that I
came across a few years back. It involved a large scale credit card
fraud whose perpetrator had just been brought to justice. The arrest
was, as I recall, made by the FBI, but the job of catching him had
been done by his victims–using the internet to coordinate their
efforts.
That story suggests another way in which modern technology may make
private law enforcement more practical than it has been in the recent
past. Many crimes involve a single criminal but multiple victims.
Each victim has reasons, practical and moral, to want the criminal
caught, but no one can do the job on his own. The internet, by
drastically reducing the cost of finding fellow victims and
coordinating with them, helps solve that problem.
This is an example of a point we discussed earlier–the degree
to which the internet makes it practical for amateurs to replace
professionals in a variety of roles.
Part V: Biotechnologies
Chapter XIV: Human Reproduction
Through most of the past century, reproductive technology has
consisted largely of better ways of not reproducing. Improvements in
contraception have been accompanied by striking changes in human
mating patterns: a steep decline in traditional marriage, a
corresponding increase in non-marital sex and, perhaps surprisingly,
extraordinarily high rates of childbirth outside of
marriage.
[97]
While the long term consequences of reliable contraception will
continue to play out over the next few decades, they will not be
discussed here. This chapter deals with more recent developments in
the technology of human reproduction.
Wise Fathers
While human mating patterns have varied a good deal across
time and space, long term monogamy is far and away the most common.
This pattern--male and female forming a mated pair and remaining
together for an extended period of time--is uncommon in other
mammalian species. It is, oddly enough, very common among
birds.
[98] Swans
and geese, for example, have long been known to mate for life.
Modern research has shown that the behavior of most varieties of
mated birds is even closer to that of humans than we once supposed.
As with humans, the norm is monogamy tempered by adultery. While a
mated pair will raise successive families of chicks together, a
significant fraction of those chicks--genetic testing suggests
figures from ten to forty percent--are not the offspring of the male
member of the pair. Similar experiments are harder to arrange with
humans, but such work as has been done suggests that some significant
percentage of the children of married women cohabiting with their
husbands are fathered by someone else.
[99]
From an evolutionary standpoint, the logic of the situation is clear.
Males play two different roles in human (and avian) reproduction.
They contribute genes to help produce children and resources to help
rear them. The latter contribution is costly, the former is not. A
male who can successfully impregnate another male's mate gets
reproductive success--more copies of his genes in the next
generation--at negligible cost. Hence it is not surprising that
males, whether men or ganders, invest substantial effort both in
attempting to impregnate females they are not mated to and in
attempting to keep the females they are mated to from being
impregnated by other males.
A faithful female gets both genes and support from her mate--trades
her contribution to producing offspring for his. But an unfaithful
female can do even better. She mates with the best provider who will
have her and then, given the opportunity, becomes pregnant by the
highest quality male available, where "quality" is defined by
whatever observable characteristics signal heritable characteristics
that can be expected to result in reproductive success for her
offspring--tail length in swallows, income and status in humans.
"Power is the ultimate aphrodisiac."
[100]
This strategy works, for geese and women, because of a curious
feature of our biology--the inability of males to reliably identify
their offspring. If that were not the case, if males were equipped
with some built-in system of biometric identification based on scent,
appearance, or the like--they could and would refuse to provide
support for the offspring of other males.
[101]
This feature of human biology has just vanished. Paternity testing
now provides males what evolution failed to provide--a reliable way
of determining which children are theirs. What are the likely
consequences?
I first started thinking about this question in response to a
hypothetical by a colleague: Suppose it became customary to check the
paternity of every child at birth. What would the consequences
be?
The obvious consequence is that some men would discover that their
wives had been unfaithful and some marriages would break up as a
result. The slightly less obvious consequence is that married women
conducting affairs would take more care with contraception. The still
less obvious consequence--except to economists and evolutionary
biologists--is that men would invest more of their resources in their
offspring.
From the economist's standpoint, the reason is that people are
observed to value the welfare of their own offspring above the
welfare of other people's offspring. From the biologist's standpoint,
the reason is that human beings (and other living creatures) have
been designed by evolution to act in ways that maximize their
reproductive success--and one way of doing so is to take better care
of your own children than of other people's. Either way, the
conclusion is the same. Routine paternity testing would mean that men
knew that their children were really theirs and so would be willing
to invest more resources in them. They would invest less in those of
their wives' children that were not theirs--but there would be fewer
of those than before, due to the desire of wives to have children
that their husbands will think are theirs. And those children that
did have a father who was not their mother's husband could prove it,
and so have at least a hope of support from him.
Readers who question the assumption that parents are biased in favor
of their own children might want to look at the literary evidence.
Across a very wide variety of cultures, it is taken for granted that
step parents cannot be trusted to care for their step
children.
[102]
And, going beyond our species, there is evidence that male
birds adjust the amount of parental care they give chicks to take
account of the probability that the chicks are not theirs.
[103]
So far I have been considering a straightforward consequence of the
combination of a new technology and a new social practice. The
technology has already happened; the practice, so far at least, has
not changed in response.
The law, however, has. Under Lord Mansfield's rule, a common law
doctrine going back to the eighteenth century, a married man
cohabiting with his spouse was legally forbidden from challenging the
legitimacy of her offspring. This appears in modern statutes as the
rule that the mother of a child is the woman from whose body the
child is born and, if that woman was married and cohabiting with the
father when the child was conceived, he is conclusively presumed to
be the father. That was a reasonable legal rule as long as there was
no practical way of demonstrating paternity. Most of the time it gave
the right answer. When it didn't, there was usually no good way of
doing better and no point in using up time, effort and marital good
will trying.
It is no longer a reasonable legal rule and, increasingly, it is no
longer the rule embodied in modern statutes. In California, for
example, a state whose family law we shall be returning to at the end
of this chapter, the current statute provides that the presumption
may be rebutted by scientific evidence that the husband is not the
father.
So much for the present and the immediate future. A more interesting
question is the long term effect of the technology. One function of
the marriage institutions of most human societies we know of, past
and present, is to give males a reasonable confidence of paternity by
providing that under most circumstances no more than one male has
sexual access to each female.
[104]
With modern paternity testing, that is no longer necessary.
Which raises some interesting possibilities. We could, at one
extreme, have a society of casual promiscuity--Samoa, at least as
imagined by Margaret Mead.
[105]
When a child was born, the biological father, as demonstrated by
paternity testing, would have the relevant parental rights and
responsibilities.
There are problems with that system. It is easier for two parents to
raise a child jointly if they are living together--and the fact that
a couple enjoy sleeping together is very weak evidence that they will
enjoy living together. An alternative that is both more attractive
and more interesting is some form of group marriage-- three or more
people living together, rearing children together--but with everyone
knowing which child is whose. Such arrangements have been attempted
in the past and no doubt some currently exist. The only form that has
ever been common--polygyny, one husband with several wives--is the
one that does not require paternity testing to determine paternity.
The question is whether other forms will now become more common.
That in turn comes down to a simple question to which I do not know
the answer: Is male sexual jealousy hard wired? Do men object to
other men sleeping with their mates because evolution has built into
them a strong desire for sexual exclusivity or because they have
chosen, or been taught, that strategy as a way of achieving the
(evolutionarily determined) objective of not spending their resources
rearing another man's children? Weak evidence for the latter
explanation is provided by an anthropologist's observation that men
spent less time monitoring their wives when the wives were pregnant,
hence could not conceive.
[106]
One person I have discussed the question with who had some first hand
evidence reported that he and people he knew did not experience male
sexual jealousy; readers interested in joining that discussion should
be able to find him and some of his friends on the Usenet newsgroup
alt.polyamory, which means what it sounds like. But what he was
observing may have been only the tail of the distribution--the small
fraction of men who, because they have abnormally low levels of
sexual jealousy, are willing to experiment with unconventional mating
patterns.
Building Better Babies
Eugenics, the idea of improving the human species by
selective breeding, was supported by quite a lot of people in the
late nineteenth and early twentieth centuries. Currently it ranks, in
the rhetoric of controversy, only a little above "Nazi." Almost any
reproductive technology capable of affecting future generations is at
risk of being attacked as "eugenics" by its opponents.
That argument confuses, sometimes deliberately, two quite different
ways of achieving similar objectives. One is to treat human beings
like dogs or race horses--have someone, presumably the state, decide
which ones get to reproduce in order to improve the breed. This has
at least two serious problems. The first is that it involves forcing
people who want to have children not to do so--and perhaps forcing
people who don't want to have children to do so. The second is that
it imposes the eugenic planner's desires on everyone--and there is no
reason to assume that the result would be an improvement by other
people's standards. A prudent state might, after all, decide that
submissiveness, obedience to authority, and similar characteristics
were what it wanted to breed for.
The alternative is what I think of as libertarian eugenics. The
earliest description I know of is in a science fiction
novel--Beyond This Horizon, one of the less successful early
works of Robert Heinlein, arguably one of the ablest and most
innovative science fiction writers of the century.
In Heinlein's story, genetic technology is used to determine which of
the many children a given couple might have are the ones they
actually do have. The control is
exercised not by the state but by the parents. They, assisted by
expert advice, select among the eggs produced by the wife and the
sperm produced by the husband the particular combination of egg and
sperm that will produce the child they most want to have--the one
that doesn't carry the husband's gene for a bad heart or the wife's
for poor circulation but does carry the husband's good coordination
and the wife's musical ability. Thus each couple that wants a child
gets one--their own child--yet characteristics that parents don't
want their children to have are gradually eliminated from the gene
pool. Since the planning is done by each set of parents for its own
children, not by someone for everyone, it should maintain a high
degree of genetic diversity--different parents want different things.
And since parents, unlike state planners, can usually be trusted to
care a great deal about the welfare of their children, the technology
should mostly be used to benefit the next generation, not to exploit
it.
Heinlein's technology does not exist but his result, in at least a
crude form, does. A primitive version, extensively used for some time
now, consists of a woman conceiving a child, obtaining fetal cells by
extracting amniotic fluid ("amniocentesis"), having the cells checked
to see if they carry any serious genetic defect, and aborting the
fetus if they do.
[107]
A version which eliminates the emotional (some would say moral) costs
of abortion is now coming into use. Obtain eggs from the intended
mother, sperm from the intended father. Fertilize in vitro--outside
the mother's body. Let the fertilized eggs grow to the eight cell
level. Extract one cell--which at that point can be done without
damage to the rest. Analyze its genes. Select from the fertilized
eggs the one that comes closest to what the parents want--in
particular, the one that does not carry whatever serious genetic
defect they are trying to avoid. Implant that egg in the mother.
At present there are two major limitations to this process. The first
is that in vitro fertilization is still a difficult and expensive
process. The second is that genetic testing is still a new
technology, so only a small number of genetic characteristics can
actually be identified in the cell. Some genetic diseases
yes--musical ability or intelligence, no. Given current rates of
progress in the field, the second limitation at least is likely to be
rapidly reduced over the next decade or two. We will then be in a
world where at least some people are able to deliberately produce
"the best and the brightest"--of the children those people could have
had.
So far I have been considering a reproductive technology that already
exists, although at a fairly primitive level--selecting among the
fertilized eggs produced by a single couple. We come next to some
newer technologies. The one that has gotten most of the attention is
cloning--producing an individual who is genetically identical to
another.
[108] One
form is natural and fairly common; identical twins are genetically
identical to each other. The same effect has been produced
artificially in animal breeding: get a single fertilized egg, from it
produce multiple fertilized eggs, implant them and thus produce
multiple genetically identical offspring.
Unlike these, the form of cloning which has recently become
controversial starts with an adult cell and uses it to produce a baby
that is the identical twin of that adult. Much of the initial
hostility to the technology seemed to be rooted in the bizarre belief
that cloning replicates an adult--that, after I am cloned, one of me
can finish writing this chapter while the other puts my children to
bed. That isn't how cloning works--although we will discuss something
very similar in a later chapter, where the copying will be into
silicon instead of carbon.
A second possibility is genetic engineering. If we knew enough about
how genes work and how to manipulate them, it might be possible to
take genetic material from sperm, eggs, or adult cells contributed
by two or more individuals and combine it, producing a single
individual with a tailor made selection of genes.
Sexual reproduction already combines in us genes from our parents.
Genetic engineering would let us choose which genes came from which,
instead of accepting a random selection or, with a more advanced
technology, choosing from among several random selections. It would
also let us combine genes from more than two individuals, and even
from other species, without taking multiple generations to do it.
Primitive versions of such genetic engineering have been used to
insert genes from one species of plant or animal into a different
species.
The third possibility is to create artificial genes, perhaps an
entire additional chromosome.
[109]
Such genes would be designed to do things within our cells that we
wanted done--prevent aging, say, or fight AIDS-- that no existing
gene did. They would represent a fusion of human reproductive
technology with nanotechnology--a subject we will discuss in Chapter
XXX.
Current and near future technologies to control what sort of children
we have depend on in vitro fertilization (IVF)--the process by which
an egg is removed from the mother's body to be fertilized ("in vitro"
literally means "in glass"), then implanted. That technology was
developed to make it possible for otherwise infertile women to have
children. It also makes possible artificial cloning of eggs, by
letting the fertilized egg divide and then separating it into two. It
makes possible cloning of adult cells, by replacing the nucleus of a
fertilized egg with a nucleus from an adult cell. And it may yet make
possible genetic engineering and artificial genes. It also makes
possible a true host mother--a mother who bears a child produced from
another woman's fertilized egg.
Other new technologies may make possible reproduction by a different
sort of infertile parents--same sex couples. At present a pair of
women who wish to rear a child can (depending on local law) adopt
one. One of the women can bear a child using donated sperm. But they
cannot do what almost all other couples desiring children do--produce
a child who is the genetic offspring of both of them. The closest
they can manage with traditional technology is to use sperm donated
by a father or brother of one to inseminate the other, thus producing
a child who is, genetically speaking, half one of them and a quarter
the other.
That situation is changing. Techniques have been developed for
producing artificial sperm containing genetic material from an adult
cell.
[110] Hence
it may be possible in the fairly near future for two women to produce
a child who is in the full sense theirs. At some point an analogous
technology might make possible artificial eggs, permitting two men to
produce a child who is in the same sense theirs--with the assistance
of a borrowed womb.
[111]
Why Bother?
New technologies make it possible to do new things; there
remains the question of whether they are worth doing. In the case of
reproductive technology, the initial driving force, still important,
was the desire of people to have their own children. From that we get
IVF and the use of host mothers--to permit a mother unable to bring
her fetus to term to get someone else to do it for her. The desire to
have your own children also provides a possible incentive for
cloning--to permit a couple unable to produce a child of both
(because one is infertile) to produce instead a child who is an
identical twin of one--and for technologies to allow single sex
couples to reproduce.
A second and increasingly important motive is the desire to have
better children. In the early stages of the technology this means
avoiding the catastrophe of serious genetic defects. As the
technology gets better, it opens the possibility of eliminating less
serious defects--the risk of a bad heart, say, which seems to be in
part genetic, or alcoholism, which may well be--and selecting in favor
of desirable characteristics. Parents want their children to be
happy, healthy, smart, strong, beautiful, and these technologies
provide one way of improving the odds.
One can imagine the technologies used for other purposes. A
dictatorial government might try to engineer the entire
population--to breed some characteristic, say aggressiveness or
resistance to authority, out of it. A less ambitious government might
use cloning to produce multiple copies of the perfect soldier, or
secret policeman, or scientific researcher, or dictator--although
multiple identical dictators might be asking for trouble.
Such scenarios are more plausible as movie plots than as policies.
The obvious problem is that it takes about twenty years to produce an
adult human; few real world governments can afford to plan that far
ahead. And while a clone will be genetically identical to the donor,
its environment will not be identical. Hence while cloning produces a
more predictable result than sexual reproduction, it is far from
perfectly predictable. And getting your soldiers, secret police, or
scientists the old fashioned way has the advantage of letting you
select them from a large population of people already adult and
observable.
One further argument against the idea is that if it is an attractive
strategy for a dictatorial state it ought already to have happened.
Selective breeding of animals is a very old technology. Yet I know of
no past society which made any serious large scale attempt at
selective breeding of humans in order to produce in the ruled traits
desired by the rulers. Insofar as we have observed selective breeding
of humans it has been at the individual or family level--people
choosing mates for themselves or their children in part on the basis
of what sort of children they think those mates will help
produce.
A potentially more serious worry is the exploitation of cloned
children at the individual level. In a version sometimes offered as
an argument against cloning humans, an adult produces a clone of
himself in order to disassemble it for body parts to be used for
future transplant. An obvious problem with the argument is that even
if the cloning were legal, the disassembly would not be--in the U.S.
at present or in any reasonably similar society. But one can imagine
a future society in which it was. On the other hand, the process
again involves a substantial time lag--and becomes less and less
useful as improved medical technology reduces problems of transplant
rejection.
There has been at least one real world case distantly analogous to
this--and one that raises serious questions as to our immediate
negative reaction to the idea of producing a human being at least
partly to provide tissue for transplant.
In 1988, Anissa Ayala, then a high school sophomore, was diagnosed
with a slow progressing but ultimately fatal form of leukemia. Her
only hope was a treatment that would kill off all her existing blood
stem cells then replace them by a transplant from a compatible donor.
The odds that a random donor would be compatible were about one in
twenty thousand.
Her parents spent two years in an unsuccessful search for a
compatible donor, then decided to try to produce one. The odds were
not good. A second child would have only a 25% chance of
compatibility. Even with a compatible donor the procedure had a
survival probability of only 70%. The mother was already forty-two;
the father had been vasectomized.
The alternative was worse; Anissa's parents took the gamble. The
vasectomy was successfully reversed. Their second daughter was
born--and compatible. Fourteen months later she donated the bone
marrow that--as she put it five years later in a television
interview--saved her sister's life.
Marissa was produced by conventional methods--the controversial
element, loudly condemned by a variety of bioethicists, was producing
a child in the hope that she could donate the bone marrow required to
save another. But cloning, had it been practical, would have raised
the odds of a match from 25% to 100%.
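The arithmetic behind the gamble is easy to check. A minimal sketch, using only the figures quoted above (the 25% sibling match rate, the 70% survival rate for the procedure, and the certainty of a match with a clone):

```python
# Rough arithmetic for the Ayala gamble, using the figures in the text.
p_sibling_match = 0.25   # chance a second child is a compatible donor
p_clone_match = 1.00     # a clone is genetically identical: always a match
p_survival = 0.70        # survival probability of the transplant itself

print(f"second child: {p_sibling_match * p_survival:.1%}")  # 17.5%
print(f"clone:        {p_clone_match * p_survival:.1%}")    # 70.0%
```

Even on the sibling route, a roughly one-in-six chance of saving Anissa was better than the alternative of none.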
For another potentially controversial use of cloning, consider
parents whose small child has just been killed in an auto accident.
Parents have a very large emotional investment in their children--not
children in the abstract but this particular small person whom they
love. Cloning could let them, in a real although incomplete sense,
get her back--produce a second child very nearly identical to the
first.
Reasons Not To Do It
Reproductive technologies--most recently cloning, earlier
contraception, IVF, and artificial insemination--have aroused
widespread opposition. One reason--the idea that such a technology
might be used by a dictatorial state in a variety of ways--I have
already briefly discussed and dismissed as implausible. There are at
least three others.
The first is the Yecch factor. New technologies involving things as
intimate as reproduction feel weird, unnatural and, for many people,
frightening and ugly. That was true for contraception, it was true
for IVF and artificial insemination, it is strikingly true for
cloning, and will no doubt be true for genetic engineering when and
if we can do it. That reaction may slow the introduction of new
reproductive technologies but is unlikely to prevent it, so long as
those technologies make it possible for people to do things they very
much want to do.
A second reason is that new technologies may not work very well at
first. Judging by experience so far with cloning large mammals, if
someone tries tomorrow to clone a human it will probably take many
unsuccessful tries to produce one live infant, and that infant may
well suffer from a variety of problems. That is a strong argument
against cloning a human being today. But it is an argument that will
get weaker and weaker as further experiments in cloning other large
mammals produce more and more information about how to do it
right.
The final reason is the most interesting of all. It is the
possibility that individual reproductive decisions might have
unintended consequences--perhaps seriously negative ones.
Where Have All the Women Gone?
Consider a very simple example--gender selection. Parents
often have a preference as to whether they want a boy or a girl. The
simplest technology to give them what they want--selective
infanticide--has been in use for thousands of years. A less costly
alternative--selective abortion--is now used extensively in some
parts of the world.
[112]
And it is possible to substantially alter the odds of producing male
or female offspring by less drastic methods.
[113]
As such techniques become more reliable and more widely available, we
will move towards a world where parents have almost complete control
over the gender of the offspring they produce. What will be the
consequences?
For the most extreme answer, consider the situation under China's one
child policy--imposed on a society where families have a strong
preference for having at least one son. The result is that
substantially more boys are born than girls; some estimates
suggest about 120 boys for every 100 girls. A similar but weaker effect has
occurred in India even without a restriction on number of
children--recent figures suggest about 107 boys for 100 girls. With
better technologies for gender selection the ratios would be higher.
The consequence is likely to be societies where many men have
difficulty finding a wife.
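The size of the squeeze follows directly from the ratios just quoted. A quick calculation (a static sketch that ignores age structure, remarriage, and mortality):

```python
# Share of men left without a potential wife, given the sex ratios in
# the text. A static calculation: assumes one wife per husband and
# ignores age structure, remarriage, and mortality.
for boys, girls in [(120, 100), (107, 100)]:
    unmatched = (boys - girls) / boys
    print(f"{boys} boys per {girls} girls -> {unmatched:.1%} of men unmatched")
```

At the Chinese ratio, roughly one man in six has no counterpart in the marriage market; at the Indian ratio, about one in fifteen.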
The problem may be self correcting--with a considerable time lag. In
a society with a high male to female ratio women are in a strong
bargaining position, able to take their pick of mates and demand
favorable terms in marriage.
[114]
As that becomes clear, it will increase the payoff to producing
daughters. There is not a lot of point to preserving the family name
by having a son if you don't expect him to be able to find a woman
willing to produce grandchildren for you. A high ratio of men to
women might also result in a shift in mating patterns in the
direction of polyandry--two or more husbands sharing the same wife.
Even without changes in marriage laws there is still the possibility
of serial polyandry. A woman marries one man, produces a child for
him, divorces him and marries a second husband.
[115]
Class Genes
What about technologies allowing parents to choose among the
children they might have, or even to add useful genes, perhaps
artificial, that neither parent carries? Lee Silver, a mouse
geneticist and the author of a fascinating book on reproductive
technology, worries that the long term result might be a society
divided into two classes--generich and genepoor. The former will be
the genetically superior descendants of people who could afford to
use new technologies to produce superior offspring, the latter the
genetically natural descendants of those who could not.
There are two reasons this is not likely to happen. The first is that
human generations are long and technological change is fast. We might
have a decade or two in which high income people have substantially
better opportunities to select their children than low income people.
After that the new technology, like many old technologies, will
probably become inexpensive enough to be available to almost anyone
who really wants it. It was not that long ago, after all, that
television was a new technology restricted to well off people.
Currently, about ninety-seven percent of American families below the
poverty line own at least one color television.
The second reason is that human mating is not strictly intraclass.
Rich men sometimes marry poor women and vice versa. And even without
marriage, if rich men are believed to carry superior genes--as, after
a few generations of Lee Silver's hypothetical future, they would
be--that will be one more reason for less rich women to conceive by
them, a pattern that, however offensive to egalitarian sensibilities,
is historically common. Put in economic terms, sperm is a free good,
hence provides a low cost way of obtaining high quality genes for
one's offspring. I doubt we will get that far, but if we do we can
rely on the traditional human mating pattern--monogamy tempered by
adultery--to blur any sharp genetic lines between social or economic
classes.
The Parent Problem
Whether or not new reproductive technologies are going to
generate new problems for people in the future, they are already
producing problems for the legal system and the social institutions
in which it is embedded.
The first, and currently the biggest, is the paternity problem. State
welfare agencies and unmarried or no-longer-married mothers would
like to find a man who can be made responsible for supporting the
mothers' children. If their genetic father can be identified, he is
the obvious candidate. But what if he cannot?
The view of at least some states is that the man who was the mother's
mate ought to be responsible anyway. This is, in effect, a reversion
to Lord Mansfield's rule--with cohabiting couple broadly defined. It
was workable before modern paternity testing because the state could
argue that the man she had been living with, perhaps married to,
really was, or at least might well be, the genetic father of her
children, even if he denied it. It is less workable now.
The obvious argument against is that a man who had been cuckolded by
his wife is already a victim of her betrayal; to make him responsible
for supporting someone else's child only adds insult to injury. The
counter argument is that even if the mother is at fault her child is
not--and someone has to support it. One possibility a little farther
down the road is a genetic database that could be used to identify
the genetic father and make him liable--bringing us back to the idea
of paternity testing at birth. A less ambitious alternative is to
require the mother to identify the father or possible fathers and
have the state compel him to permit testing. But if the mother is
unwilling or unable to identify the father, we are back with the
problem of who, other than the state, can be made responsible.
At the same time that paternity testing raises the issue of making
men responsible for supporting children that are not theirs, an older
technology relieves men of responsibility for children that are.
Under current law in many states, if a woman conceives by artificial
insemination using sperm from a sperm bank, she has no claims for
support from the donor.
[116]
Prior to paternity testing, that legal rule could be enforced by
simply not keeping the relevant records. But now the records
establishing paternity are stamped into every cell of father and
child. If law and custom change, as they have changed in the past in
the direction of making it easier for adopted children to locate
their birth parents, some men may be in for a surprise.
A broader set of problems is raised by the complications of modern
reproductive technology. We can do a better job than in the past of
determining who has what relation to a child. But we can also produce
a more complicated set of relations, making it harder to fit the new
reality into the old law.
With current technology and practice, the term "mother," which we
usually assume to be well defined, actually has at least three
different meanings. One is intentional mother--the woman who intended
to play the social role of mother when the arrangements for producing
the child were made. One is womb mother--the mother in whose womb the
fetus grew. The third is egg mother--the mother who provided the egg.
Once we start cloning humans a fourth category will be mitochondrial
mother--the woman who provides the egg whose nucleus is replaced by
the nucleus of a cell from the clone donor, retaining the woman's
extranuclear DNA.
Fathers still come in only two varieties. The intentional father is
the man who intended to play the social role of father, the
biological father is the man who provided the sperm. The one obvious
change associated with the newer technology is that it is more common
than it used to be for the intentional father to know he is not the
biological father.
[117]
With three or four varieties of mother and two of father, the
definition of "parents" becomes distinctly ambiguous.
That problem was mentioned back in Chapter II, where I described the
(real, California) case of the child with five parents. All five were
different people--and the intentional parents, John and Luanne,
separated a month before the baby was born. The court decided that
the intentional parents were the ones that counted--although neither
had any biological connection to the child. Luanne ended up with the
child, John with liability for child support.
The same legal approach could be used to resolve the issues of
parenthood raised by cloning. The human clone gets all of his nuclear
DNA from one donor. Genetically speaking, one might describe that
donor as both parents. On the other hand, genetic tests would reveal
that the clone is a child of the donor's parents--are they therefore
responsible? Further minor complications arise from extranuclear DNA,
which comes from the woman who donated the egg used in the
procedure--who might or might not be the donor of the nuclear DNA.
And, since cells can be obtained from a donor without his consent, we
have the possibility that a woman could bear the clone of a rich and
prominent man and then sue him for child support on a scale
appropriate to his income. There is at least one real world case in
which a woman impregnated herself with sperm fraudulently obtained
and succeeded in establishing a claim for child support against the
father--the nearest equivalent under the old technology.
[118]
The definition established by the California court--parenthood
ultimately determined by intention--provides a single rule to cover
nearly all circumstances, whatever the reproductive technology; in
order for the child to come into existence, at least one person had
to intend it, and that person, at least, counts as a parent. There
are, however, still problems.
Conventional reproduction involves one male and one female. Once the
biology can be subcontracted, intentional reproduction could involve
one male and one female, two males, two females, a partnership of
four of each, or a corporation--say Microsoft, looking to breed a
successor for Bill Gates. Only the conventional pattern fits legal
rules designed for that pattern. Once we accept intentional
parenthood, the law must either restrict who can be an intentional
parent--require, say, that before any form of assisted reproduction
can legally take place, one man or one woman (in the most
conservative version) must identify themselves as intentional
parents--or specify parental rights and obligations broadly enough so
that any of the permitted arrangements can fit them.
[Add a discussion somewhere of intellectual
property in engineered genes. Do I need a license to live--since I'm
reproducing my cells on which you have a patent? Analogous to putting
up a web page--implicitly gives people permission to read it, even
thought that is a form of copying? Do I need permission to have
babies? Not a realistic problem, both because forbidding people to
have babies is not politically viable and because patents expire too
fast. But ... stay tuned for the next chapter.]
Ambiguous Gender
Modern technology, by increasing both what we know and what
we can do, introduces at least one other complication into our usual
systems, legal and social, for classifying people. We are used to
taking it for granted that every human being is either male or
female. For a long time that was quite a good approximation.
[119]
One challenge is provided by modern biology. Almost all men have an X
and a Y chromosome and a male body; almost all women have two X's and
a female body. But not all.
Some humans are XY but have female bodies; they are genetically male
but morphologically female. Some are the reverse--XX with male
bodies. And some humans are genetically XYY, with male bodies and a
mild tendency towards aggressive personalities, or XXY, or ... .
So far we are dealing with genetics and morphology, both of which are
fairly unambiguous, even if the combinations are wilder than we
thought. The situation gets more confused if you add in psychology.
Some people who are genetically and morphologically male claim to be
psychologically female; to think of themselves as women. Others have
the reverse pattern. There is some evidence that this is more than a
delusion--that in a way not yet clearly understood, such people have
brains designed for the wrong gender. Modern surgical techniques make
it possible to at least partly correct the error--for someone who
self identifies as a woman in a man's body to have the body altered
to at least a reasonable facsimile of a woman's body, although an
infertile one. Similarly the other way.
All of this raises interesting problems for both individuals and the
law. Am I to think of Deirdre, a professional colleague who used to
be Donald, as a slightly odd looking woman, a man surgically altered
to look like a woman, or something neither man nor woman? If I had
discussed these issues with Donald back when he/she/it possessed, in
his/her/its view, the body of a man but the mind of a woman--as it
happens I didn't--how should I have thought of him? If Deirdre
marries a wealthy man and he later dies intestate, can his heirs
successfully challenge her claim to part of his estate on the grounds
that she was a he, hence could not contract a legal marriage with a
man? That particular case--with a different transsexual--is currently
being litigated in Kansas.
It's going to be an interesting century.
XV: As Gods in the Garden
"in the day ye eat thereof, then your eyes
shall be opened, and ye shall be as gods" Genesis 3:5
The previous chapter dealt with a narrow slice of
biotechnology--human reproduction. In this chapter we first consider
the issues raised by a different application of technology to human
beings--genetic testing--and then go on to consider some issues
raised by applications of technologies to other living creatures.
Is Ignorance Bliss?
"...and the thousand natural shocks/That flesh is heir to"
Hamlet
Of the ills that human flesh is heir to, some are entirely due to
having the wrong genes--sickle cell anemia, for example. Many others,
such as heart disease and alcoholism, appear to have a substantial
genetic component. As our knowledge of human genetics and our
technology for testing the genetics of a particular human improve, we
will increasingly be able to identify individuals who do or do not
have the genes that make them more likely to die young of a heart
attack, become alcoholics, or suffer other undesirable
consequences.
Someone who knows he is genetically predisposed to heart disease has
good reasons to take greater precautions against it--exercise, diet,
testing and the like--than someone who knows he is predisposed
against it. That my grandfather died of a heart attack and my father
has twice had bypass surgery are good reasons for me to take
cholesterol lowering medication and try to maintain regular exercise.
But I would have even better information if I knew whether or not I
carried the genes that caused those problems for my father and
grandfather. And someone who knew he was, for genetic reasons,
particularly vulnerable to alcoholism might choose to avoid the
problem by never taking the first drink.
What if I have a genetic problem for which there is no solution--say
a gene that results in abnormally rapid aging? Knowing I have it at
least lets me do a better job of planning my life--have children
early or not at all, for example. But knowledge is not inevitably
desirable. If I carry a death sentence in my genes, I might prefer
not to know about it. Sometimes ignorance is bliss--at least for a
while.
So far I have been describing the effect on me of my knowledge of my
genes. Rather different problems might arise from other people having
that knowledge.
Consider, for example, the situation faced by an insurance company in
a future where reliable genetic testing is readily available. To make
matters simple, start with the simplest case--insuring against a
disease that is entirely genetic. Once the testing is available, the
risk of the disease becomes uninsurable. Only people who know they
have the relevant genes will buy the insurance--and the sellers,
knowing that, will price it accordingly.
What about the more realistic situation where a problem is in part
genetic? The expected cost of insuring me against that problem then
depends on what genes I have. If insurance companies are permitted to
insist on testing their clients before selling them insurance, both
those with and without bad genes will be able to buy insurance--but
at different prices. The part of the risk due to having bad genes
becomes uninsurable, and insurance is only available for the residual
risk--the uncertainty of the disease, given that you already know
whether or not you have the genetic propensity. If we consider the
still more likely case where what you are insuring against is not a
particular risk such as a heart attack but the combined effect of
lots of risks--the case of life or health insurance--the result is
the same. Your life expectancy depends in part on your genetic makeup
and in part on other things. Uncertainty due to the former becomes
uninsurable, uncertainty due to the latter does not.
An obvious solution, and one that some have recommended, is to make
it illegal for insurance companies to require testing or to condition
rates on its results. Unfortunately, that doesn't work.
Individuals still can and will test themselves for their own
information. Once I know that I am, genetically speaking,
extraordinarily healthy, I also know that both life insurance and
health insurance--priced on the assumption that I am average--are for
me losing gambles. I can expect to pay much more than I collect. If,
on the other hand, I know that I am likely to drop dead at age forty,
then lots of life insurance--provided I expect to have survivors I
care about--is obviously a good deal.
This effect is known in the insurance literature as adverse
selection, and was described in a classic article titled "The Market
for Lemons." It occurs when one party has information about the
quality of what is being sold that the other party does not have and
cannot get. The ignorant buyers pay the same price for good used cars
(creampuffs) and bad used cars (lemons)--making the sale of your car
a good deal if you have a lemon and a poor deal if you have a
creampuff. The result is that lemons sell and creampuffs, for the
most part, do not. Buyers, anticipating that, make their offer on the
reasonable assumption that if it is accepted the car is probably a
lemon--and at lemon prices, few creampuffs are offered for sale.
The logic is the same here. Imagine that insurance companies start
out by charging a rate that just covers the costs due to an average
customer. At that rate, insurance is a much better deal for customers
with genes that make them likely to collect than for customers with
genes that make them unlikely to collect--so purchasers of insurance
contain a more than average number of bad risks. Insurance companies
discover that, on average, it costs more to insure their customers
than they expected and raise their rates--driving out still more of
the good risks. In the extreme case where all good risks are driven
out, the result is even worse than it would be with testing by the
insurance companies. Nobody can insure against genetic risk, because
the decision to buy insurance tells the seller that the buyer knows
he has bad genes. Those who have bad genes can still insure against
risk from other causes; those who have good genes cannot.
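The spiral just described can be sketched in a few lines. The two customer groups, their expected claims, and the value they place on being insured are all invented numbers for illustration; only the mechanism comes from the text:

```python
# A toy adverse-selection spiral: the insurer prices at the average
# cost of last round's buyers; a customer buys only if the premium is
# no more than his own expected claims plus what he will pay to avoid
# risk. All numbers are invented for illustration.
GOOD_COST, BAD_COST = 100.0, 400.0   # expected claims per customer type
RISK_PREMIUM = 50.0                  # extra a customer will pay to be insured

def spiral(rounds=4):
    premiums = []
    premium = (GOOD_COST + BAD_COST) / 2     # starts priced for the average
    for _ in range(rounds):
        premiums.append(premium)
        buyers = [c for c in (GOOD_COST, BAD_COST) if premium <= c + RISK_PREMIUM]
        if not buyers:
            break
        premium = sum(buyers) / len(buyers)  # repriced at buyers' average cost
    return premiums

print(spiral())  # [250.0, 400.0, 400.0, 400.0]
```

Priced for the average customer, insurance is a losing gamble for the good risks, who drop out after the first round; the premium then jumps to the bad risks' expected cost and stays there, which is the extreme case described above.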
One solution would be to somehow make it possible to prove that you
have never been tested. Now people can get insured before they are
tested--at average rates--then arrange to be tested and modify their
life plans accordingly. The obvious practical problem is that such a
system provides a large incentive to cheat--to get tested on the
black market, or in a foreign country, so as not to leave any record,
and decide what insurance to buy after seeing the results.
A more nearly workable solution would be for parents to buy insurance
for their children before the children are conceived. The price might
still depend on the parents' genes, but it cannot depend on the
children's, since that is information that nobody has.
Designer Crops
Agricultural biotechnology is one of the oldest forms of high
tech, going back at least eight thousand years. That, by current
estimates, is when the breeding program that produced maize--the
cereal Americans call "corn"--possibly from teosinte, a plant most of
us would describe as a weed, began. Similar programs of selective
breeding are responsible for creating all of our major food
plants.
Not only is the creation of genetically superior strains by random
mutation and selective breeding an ancient technology, so, in the
plant world, is cloning. It has been known for a very long time that
fruit trees do not breed true to seed. To prove it for yourself,
remove the seeds from a golden delicious apple, plant them, wait ten
or twenty years, and see what you get. The odds are overwhelmingly
high that it will not be a golden delicious, and moderately good that
it will not be anything you particularly want to eat.
The solution is grafting. Once your little apple tree has its roots
well grown, replace the top section of trunk with a piece of a branch
cut from a golden delicious tree. If you do it right, the new wood
grows onto the old--and everything above the graft will be golden
delicious, genetically speaking, including the apples. You have just
produced a clone--an organism (or at least most of an organism) that
is genetically identical to another. Like Dolly, the cloned sheep,
your cloned tree was created using cells from an adult.
To be even fancier, let your tree grow until it has a few little
branches and then replace the end of one branch with a piece of wood
from a golden delicious, a second with a piece from a Swaar (ugly but
delicious), and a third with a bit from a lady apple (tiny, pretty,
tasty). You now have that staple of plant catalogs, a three on one
apple. You have also just employed, in your own back yard, a form of
biotechnology that has been known at least since Roman times and is
in large part responsible for the quality of fruit, grapes, and wine
over the last few thousand years.
New Under the Sun
Modern agricultural biotech adds at least two new elements to
the ancient technologies of selective breeding and grafting. One
gives us the ability to do what we have been doing better. The other
gives us the ability to do something almost entirely new.
The traditional way of breeding a better apple is to create a very
large number of seeds, plant them all, let them all grow up, and see
how they come out. If, by great good luck, one turns out to be a
superior variety, it can be propagated thereafter by grafting. With
enough expert knowledge, one can improve the odds a little by picking
the right parents--choosing a pair of trees that you have some reason
to hope might produce superior progeny, pollinating one with pollen
from the other, and using the resulting seeds. But it is still very
much a gamble.
As our knowledge of genetics and our ability to manipulate genes
improve, we may be able to do better than that. If we discover that
particular sequences of genes are related to particular desirable
traits, we can mix and match to produce trees--or grape vines, or
tomato plants--with the particular traits we want. The result will be
to do the same thing we could have done with the old technologies,
but in a lot less than eight thousand years.
An odder and more interesting possibility is to add to one species
genes from another, producing transgenic plants. A famous--and
commercially important--example uses Bacillus thuringiensis, or Bt, a
bacterium which produces proteins poisonous to some insects but not
to humans or other animals. Varieties of plants have been produced by
adding to them the genes from the Bt bacterium responsible for
producing those proteins. Such plants produce, in effect, their own
insecticide. Other transgenic plants are designed to be resistant to
widely used herbicides, thus permitting a farmer to use such
chemicals to kill weeds without harming his crop.
The technology can also be used to alter the final crop--to produce
peanuts or tomatoes with longer shelf life, or sunflower oil low in
trans-fatty acids. It is also possible to insert genes into a plant
(or animal) that result in its producing something unrelated to its
normal crop. Examples include bacteria modified to produce insulin, a
cow whose milk contains human milk proteins and a sheep whose milk
contains a clotting factor missing from the blood of
hemophiliacs.
At first glance, all of these seem like unambiguously desirable uses
of the technology. Insect resistant plants permit us to grow crops at
lower cost and with much less use of insecticides. Other applications
of the technology increase crop yields, reduce costs, improve
quality, and provide potentially low cost ways of producing
valuable pharmaceuticals, including some that cannot, at least so
far, be produced in any other ways. Yet the technology has been
fiercely attacked, and in some parts of the world, most notably
Europe, agricultural applications are severely restricted. Why?
A Nest of
Serpents?
Abu Hurairah (may Allah be pleased with him) reported that
the Prophet (peace and blessings of Allah be upon him) said:
"Allah, may He be exalted, says: 'Who does more wrong than the one
who tries to create something like My creation? Let him create a
grain of wheat or a kernel of corn.'" (Reported by al-Bukhari, see
Fath al-Baari, 10/385).
One reason is obvious--hostility to anything new, combined with a
romanticization of "nature." Thus we have extensive support for the
idea of "natural foods" despite the fact that practically nothing we
eat is "natural" in the sense of not having been substantially
altered by human activity. And we have the term "chemical" used
pejoratively, despite the fact that everything we eat--and everything
we are made out of--is a combination of chemicals. This is the
attitude that shows up in the description of the products of
agricultural biotech as "Frankenfoods." The Muslim tradition quoted
above reflects a religious version of this view-- that creating
living things is God's business, not ours.
This attitude is of considerable importance today, and may result in
European consumers getting lower quality food at higher prices than
they otherwise would over the next decade or two. One reason may be
that European farmers are subsidized and protected by trade barriers
from foreign competition. The more European consumers can be
persuaded that foreign foods are evil and dangerous, the easier it is
for European farmers to sell them their products.
But while irrational hostility may be important in the short run, it
is likely to be less so in the long. There are large parts of the
world where increasing agricultural output means fewer people going
hungry, making symbolic issues of natural or unnatural unimportant by
comparison. And over time, new things become old things.
Contraception was widely viewed as unnatural, wicked, dirty, and
sinful fifty or a hundred years ago. In vitro fertilization of humans
was met with considerable suspicion. Both are now widely accepted. So
it is probably more interesting, from a point of view that goes
beyond the next decade, to ask whether there are any real problems
associated with this sort of technology.
The answer is almost certainly yes. These are potentially powerful
technologies, and powerful things can do damage as well as good.
Consider a simple example.
Our common food plants were bred from preexisting wild plants. Many
of the latter are still around--and to some degree cross fertile with
their domesticated descendants. That means that genetic traits
introduced into crop plants may find their way, as pollen blown in
the wind, to related wild plants. Herbicide resistance is a useful
feature in a crop plant. It is a considerable nuisance in a weed.
How serious this sort of problem is depends on whether transgenically
improved crop plants are grown near wild relatives, whether the
modification is a benefit to weeds, and whether the modification
makes the weed more of a problem for humans.
Consider a transgenic tomato designed for better flavor or longer
shelf life. Even if there were related wild plants, those
characteristics would be of no particular use to them, so wild plants
with them would have no advantage over wild plants without them. And
wild plants with those characteristics would be no more of a problem
for farmers than ones without.
But the same does not hold for resistance to herbicides. Suppose weed
beets grow in or near the same fields as sugar beets that have been
transgenically modified to make them resistant to a particular
herbicide used in sugar beet farming. Naturally occurring hybrid weed
beets that have had the good luck to acquire the genes for resistance
will be more successful in that location than ones that have not--and
more of a nuisance.
Can We
Compete With Mother Nature?
Stepping back a moment, it is worth looking at the general
argument for why such problems do not exist and seeing why it is
sometimes wrong. That general argument starts with the observation
that existing plants, including weeds, have been "designed" by
Darwinian evolution for their own reproductive success. Our current
biotechnology is a much more primitive design system than evolution.
That is why we produce new crops not by designing the whole plant
from scratch but by adding minor modifications to the plants provided
by nature. Hence one might think that if a genetic characteristic
were useful to a weed, the weed would already have it--and if
evolution had not succeeded in producing a useful characteristic,
humans would be unlikely to do better.
There are two things wrong with that argument. The first is that
evolution is slow. Weeds are adapted to their environment--but that
environment has only recently included farmers spraying herbicides on
them. Hence they are not adapted, or at least not yet very well
adapted, to resist those herbicides. This is especially true given
that the herbicides used are precisely those chemicals that the
targeted weeds are vulnerable to. So if we deliberately create crop
plants resistant to specific herbicides and the resistance spreads to
related weeds, we provide an evolutionary shortcut, generating
resistant weeds a great deal faster than nature would.
The second error in the argument is more complicated. Evolution works
not by designing new organisms from scratch but by continuous
changes. The more simultaneous changes are required to make a feature
work, the less likely it is to appear. Complicated structures--the
standard example is the eye--are produced by a series of small
changes, each of which results in at least a small gain in
reproductive success to the organism. Features that cannot be
produced in that way are unlikely to be produced at all.
[120]
Genetic engineering also works by small changes--introducing one gene
from a bacterium into a variety of corn, for instance. But the
available range of small changes--or, if you prefer, the meaning of
"small"--is different. So there may be some unambiguous
improvements--changes in an organism which result in greater
reproductive success, hence would have been selected for by
evolution--which can be produced by genetic engineering but are very
unlikely to come about naturally. The introduction of genes that code
for a particular protein lethal to particular insect pests--genes
borrowed from an entirely unrelated living creature--is an example.
This is a subject we will return to in a later chapter, when we
consider the still more ambitious attempts to compete with natural
design implicit in the idea of nanotechnology.
The possibility that engineered genes will spread into wild
populations and so produce improved weeds is only one example of a
class of issues raised by genetic technology. Others include the
possibility of indirect ecological effects--improved weeds, or crop
plants gone wild, that compete with other plants and so alter the
whole interrelated system. They also include such unanticipated
effects as crop plants designed to be lethal to insect pests turning
out to also be lethal to harmless, perhaps beneficial, species of
insects. I started with the case of transgenic weeds because I think
that is the clearest case of a problem that is likely to
happen--although not one likely to have any catastrophic
consequences. If, after all, weed beets become resistant to the
farmers' favorite herbicide, they can always switch to their second
favorite, putting them back in about the situation they started
with--an herbicide to which neither weeds nor crop is especially
resistant.
I am more skeptical about the other examples, mostly because I am
skeptical about the idea that nature is in a delicate balance likely
to produce catastrophe if disturbed. The extinction of old species
and the evolution of new is a process that has been going on for a
very long time. But while I am skeptical about particular examples, I
think they illustrate a real potential problem with technological
change--probably the most serious problem.
The problem arises when actions taken by one person have substantial
dispersed effects on many others far away from him. The reason it is
a problem is that we have no adequate set of institutions to deal
with such effects. Markets, property rights, and trade provide a very
powerful tool for coordinating the activities of a multitude of
individual actors. But their functioning requires some way of
defining property rights such that most of the effect of my actions
is borne by me, my property, and some reasonably small and
identifiable set of other people.
If there is no way of defining property rights that meets that
requirement, we have a problem. The alternative institutions--courts,
tort law, government regulation, intergovernmental negotiations, and
the like--that we use to try to deal with that problem work very
poorly--and the more dispersed the effects, the worse they work.
Hence if technological change results in making actions with such
dispersed effects play a much larger role in our lives--if, for
example, genetic engineering means that my engineered genes
eventually show up in the weeds in your garden a thousand miles
away--we have a problem for which no known institutions provide a
reasonably good solution. This is an issue I will return to in later
chapters.
[Add somewhere a discussion of the
terminator gene issue, as last chapter's problem of needing
permission to have children. It's more realistic here, both because
corn doesn't have a right to reproduce and because generations are
much shorter than the patent term.]
And There
is Still Satan
So far we have been considering possible unintended bad
consequences of genetic engineering. There are also the intended
ones--biological warfare of one sort or another, using tailor made
diseases or, more modestly, tailor made weeds.
Here again it is worth taking a step back to think about the
implications of evolutionary biology. We think of deadly diseases,
naturally enough, as if they were enemies out to destroy us. But that
metaphor is fundamentally wrong. A plague bacillus not only has
nothing against you, it wishes you well--or would if it were capable
of wishing. It is a parasite, you are a host, and the longer you live
the better for it.
Lethal diseases are, from their own standpoint, a mistake--badly
designed parasites. That is why a disease that is really deadly is
typically new--either a new mutation, or an old disease infecting a
new population that has not yet developed resistance, or a disease
that has just jumped from one population to another and not yet
adapted to the change. Given time, evolution works not only to make
us less vulnerable to a disease but to make the disease less
vulnerable to us.
[121]
So when a James Bond villain sets out to create a disease that will
kill everyone but himself and his harem, he is not in competition
with nature--nature, aka Darwinian evolution, isn't trying to make
lethal diseases. That fact makes it more likely that he will
succeed--more likely that there are ways of making diseases more
deadly than the ones produced by natural evolution. The question then
becomes whether the technological progress that makes it easier to
design killer diseases--ultimately, perhaps, in your basement--does
or does not win out in the race with other technologies that make it
easier to cure or prevent such diseases. This is a special case of an
issue we will return to in the context of nanotechnology--which
offers to provide potential bad guys with an even wider toolkit for
mass murder--and may or may not also provide the rest of us with
adequate tools to defend against them.
Chapter
XVI: The Last Lethal Disease
Over the past five hundred years, the average length of a
human life in the developed world has more than doubled but the
maximum has remained essentially unchanged. We have eliminated or
greatly reduced almost all of the traditional causes of mortality,
including mass killers such as measles, influenza, and death in
childbirth. But old age remains incurable, and always lethal.
Why? On the face of it, aging looks like poor design. We have been
selected by evolution for reproductive success--and the longer you
live without serious aging, the longer you can keep producing babies.
Even if you are no longer fertile, staying alive and healthy allows
you to help protect and feed your descendants.
[122]
The obvious answer is that if nobody got old and died there would be
no place for our descendants to live and nothing left for them to
eat. But that confuses individual interest with group interest, and
although group selection may have played some role in
evolution,
[123]
it is pretty generally agreed that individual selection was more
important. If I stay alive, all of my resources go to help my
descendants; insofar as I am competing for resources, I am competing
mostly with other people's descendants. Besides, we evolved in an
environment in which we had not yet dealt with other sources of
mortality, so even if people didn't age they would still die, and on
average almost as young. In traditional societies, only a small
minority lived long enough for aging to matter.
A second possible answer is that immortality would indeed be useful,
but there is no way of producing it. Over time our bodies wear out,
random mutation corrupts our genes, until at last the remaining
blueprint is too badly flawed to continue to produce cells to replace
those that have died.
This answer too cannot be right. A human being is, genetically
speaking, massively redundant--every cell in my body contains the
same instructions. It is as if I were a library with a billion copies
of the same book. If some of them had misprints or missing pages, I
could always reconstruct the text from others. If two volumes
disagree, check a third, a fourth, a millionth. Besides, there are
organisms that are immortal. Amoebas reproduce by division--where
there was one amoeba, there are now two. There is no such thing as a
young amoeba.
A variety of more plausible explanations for aging have been
proposed. One I find persuasive starts with the observation that,
while the cells in my body are massively redundant, the single
fertilized cell from which I grew was not. Any error in that cell
ended up in every cell of my adult body.
Suppose one of those mutations had the effect of killing the
individual carrying it before he got old enough to reproduce.
Obviously, it would vanish in the first generation. Suppose instead
that it killed its carrier, on average, at age thirty. Now the
mutation would to some degree be weeded out by selection--but some of
my children, perhaps even some of my grandchildren, could still
inherit it.
Consider next a mutation that kills at age sixty--in a world where
aging does not yet exist, but death in childbirth, measles, and saber
tooth tigers do, with the result that hardly anyone makes it to
sixty. Possession of that mutation is only a very slight reproductive
disadvantage, hence it gets filtered out only very slowly. So we
would expect some tendency for lethal mutations that acted late in
life to accumulate, with new ones appearing as old ones are gradually
eliminated. The process reinforces itself. Once mutations that kill
you at sixty are common, mutations that kill you at seventy don't
matter very much--you can only die once. So one possible explanation
of aging--one of several--is that it is simply the working out of a
large collection of accumulated late acting lethal genes.
[124]
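The logic of that selection shadow can be put in numbers. The sketch below is a toy model with made-up schedules, not data: it assumes 5 percent annual external mortality and a flat "reproductive contribution" (babies plus help to descendants) from age 15 to 75, and computes how much of a carrier's expected lifetime reproduction a lethal mutation actually costs, depending on the age at which it kills.

```python
# Toy model: selection against a lethal mutation is proportional to the
# expected reproduction lost after the age at which it kills. Survival
# and fertility schedules below are illustrative assumptions only.

def selection_coefficient(kill_age, survival, fertility):
    """Fraction of expected lifetime reproduction lost if death
    occurs at kill_age instead of from external causes alone."""
    total = sum(s * f for s, f in zip(survival, fertility))
    lost = sum(survival[a] * fertility[a]
               for a in range(kill_age, len(survival)))
    return lost / total

YEARS = 80
# Chance of surviving external hazards (disease, childbirth, tigers)
# to each age, at an assumed 5% annual mortality.
survival = [0.95 ** a for a in range(YEARS)]
# Assumed reproductive contribution, ages 15 through 74.
fertility = [1.0 if 15 <= a < 75 else 0.0 for a in range(YEARS)]

for kill_age in (30, 60, 70):
    s = selection_coefficient(kill_age, survival, fertility)
    print(f"lethal at age {kill_age}: selection coefficient ~ {s:.3f}")
```

With these numbers a mutation that kills at thirty forfeits a large share of expected reproduction and is weeded out quickly, while one that kills at sixty or seventy costs only a few percent, so selection removes it very slowly; new late-acting mutations can appear about as fast as old ones are eliminated.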
A more sophisticated version of this explanation starts with the
observation that in designing an organism--or anything else--there
are usually tradeoffs. We can give cars better gas mileage by making
them lighter--at the cost of making them more vulnerable to damage.
We can build cars that are invulnerable to anything much short of
high explosives--we call them tanks--but their mileage figures are
not impressive.
Similar tradeoffs presumably exist in our design. Suppose there is
some design feature, encoded in genes, which can provide benefits in
survival probability or fertility early in life at the cost of
causing increased breakdown after age sixty. Unless the benefits are
tiny relative to the costs, the net effect will be an increase in
reproductive success, since most people in the environment we evolved
in didn't make it to sixty anyway. So such a feature will be selected
for by evolution. Putting the argument more generally, the
evolutionary advantages to extending the maximum lifespan were small
in the environment we evolved in, since in that environment very few
people lived long enough to die of old age. So it isn't surprising if
the short term costs outweighed the long term benefits. My genes made
the correct calculations in designing me for reproductive success in
the environment of fifty thousand years ago--but I, living now and
with objectives that go beyond reproductive success, would prefer
they hadn't.
[125]
One reason to be interested in why we age is that it affects how hard
it is to do anything about it--a subject with which I become
increasingly
concerned as the years pass. If there is some single flaw in our
design--if aging is due to shrinking telomeres or is a deficiency
disease caused by a shortage of vitamin Z--then once we discover the
flaw we can deal with it. If aging is the combined effect of a
thousand flaws, the problem will be harder. But even in that case,
there might be solutions--either the slow solution of identifying and
fixing all thousand, or a fast solution, such as a microscopic cell
repair machine that can go through our bodies fixing whatever damage
all thousand causes have produced.
My own guess is that the problem of aging will be solved, although
not necessarily in time to do me any good. That guess is based on two
observations. The first is that our knowledge of biology has
increased at an enormous rate over the past century or so and
continues to do so. Hence if the problem is not for some reason
inherently insoluble--I cannot think of any plausible reasons why it
should be--it seems likely that scientific progress during the next
century will make a solution possible. The second is that solving the
problem is of enormous importance to old people, many of whom would
prefer to be young, and old people control very large resources, both
economic and political.
If I am correct, one implication is that the payoff to policies that
slow aging a little may be very large, since they might result in my
surviving long enough to benefit by more substantial breakthroughs.
There are currently a variety of things one can do which there is
some reason to believe will slow aging. It is only "some reason"
because the combined effect of the long human lifespan and the
difficulty of getting permission to do experiments on human beings
mean that our information on the subject is very imperfect. Most of
the relevant information consists of the observation that doing
particular things to particular strains of mice or
fruitflies--experimental subjects with short generations and no legal
rights--results in quite substantial increases in their lifespan.
Thus, for example, it turns out that transgenic fruitflies, provided
with a particular human gene, have a life expectancy up to 40 percent
longer than those without the extra gene.
[126]
Modifying the diet of some strains of mice--by, for example,
providing them a high level of anti-oxidant vitamins--can have
similar effects. When I was investigating the arguments for and
against consuming lots of anti-oxidants, one persuasive piece of
evidence came from an article in
Consumer Reports. It quoted a
researcher in the field as saying that of course it was too early to
recommend that people take anti-oxidant supplements--"but all of us
do." As an economist, I believe that what people do is frequently
better evidence than what they say.
One of the most effective ways of extending the lifespan of mice
turns out to be caloric deprivation--feeding them a diet at the low
end of the number of calories needed to stay alive but otherwise
adequate in nutrients. The result is to produce mice with very long
life expectancies. Whether it will work on humans is not yet
known--or, a question of more immediate interest to some of us,
whether it would work on humans who only started the diet late in
life.
Presumably a parent who chose to almost starve his children would
risk being found guilty of child abuse--but could argue, on the basis
of existing evidence, that he was actually the only parent who
wasn't.
The
Downside of Immortality?
Suppose my guess is correct; at some point in the not too
distant future, hopefully at some point in my future, we find the
cure for aging. What are the consequences?
On the individual level they are large and positive--one of the worst
features of human life has just vanished. People who prefer mortality
can still die--while suicide is illegal, the law against it is in
practice unenforceable. Those of us with unfinished business can get
on with it.
But while I am unambiguously in favor of stopping my aging, it does
not follow that I must be in favor of stopping yours. One reason not
to be is concern with population growth, which will increase if
people stop dying unless they also stop having babies. As it happens,
I do not share that concern--I concluded long ago that, at anything
close to current population levels, mere number of people is not a
serious problem. That conclusion was reinforced over the years as
leading preachers of population doom proceeded to rack up a string of
failed prophecies unmatched outside of the nuttier religious sects.
Readers who disagree, as many do, may want to look at the works of
the late Julian Simon, possibly the ablest as well as the most
energetic critic of the thesis that increasing population leads to
poverty, starvation, and a variety of other catastrophes. I prefer to
pass on to what I regard as more interesting issues.
"Senator"
means "Old Man"
"An absolute monarchy is one in which the sovereign does as
he pleases so long as he pleases the assassins."
Ambrose Bierce, The Devil's
Dictionary
One is the problem of gerontocracy--rule by the old.
Under our political system, incumbents have an enormous advantage--at
the congressional level they almost always win reelection.
[127]
If aging stops and nothing else changes, we can expect our
representatives to grow steadily older. To the extent that an
incumbent is guaranteed reelection, he is also free to do what he
wants within a fairly large, although not unlimited, range. So one
result would be to make democratic control over democratic
governments even weaker than it now is. Another might be to create
societies dominated by the attitudes of the old--bossy, cautious,
conservative.
The effect on undemocratic systems might be still worse. In a world
without aging it seems likely that Salazar would still rule Portugal
and Franco Spain. Perhaps more seriously, it would have been Stalin,
equipped with an arsenal of thermonuclear missiles, who presided
over--and did his best to prevent--the final disintegration of the
Soviet Union. With the aging problem solved, dictatorship could
become a permanent condition--provided dictators took sufficient
precautions against other sources of mortality.
The problem is not limited to the world of politics. It has been
argued that scientific progress typically consists of young
scientists adopting new ideas and old scientists dying.
[128]
It is frightening to imagine the sort of universities our system of
academic tenure might produce without either compulsory
retirement--now illegal in the U.S.--or mortality.
Implicit in many of these worries is a buried assumption--that we are
curing the physical effects of aging but not all of the mental
effects. Whether that assumption is reasonable depends on why it is
that old people think differently than young people.
One answer, popular with the old, is that it is because they know
more. If so, perhaps gerontocracy is not such a bad thing. Another is
that the brain has limited capacity.
[129]
Having learned one system of ideas, there may be no place to put
another--especially if they are mutually inconsistent. Humans, old
and young, demonstrate a strong preference for the beliefs they
already have--and old people have more of them.
The effect of aging on thought can be described in terms of the
tradeoff between fluid and crystallized intelligence.
[130]
Fluid intelligence is what you use to solve a new problem.
Crystallized intelligence consists of remembering the solution you
found last time and using that. The older you are, the more problems
you have already solved and the less the future payoff from finding
new and possibly better solutions. So there is a tendency for people
to shift from fluid to crystallized intelligence as they grow older.
The point was brought home to me in a striking fashion some years ago
when I observed a highly intelligent man in his early eighties
ignoring evidence of what turned out to be an approaching forest
fire--smells of smoke, reports from others who had seen it--until he
saw the flames with his own eyes.
It is possible, of course, that if we ended aging--better yet, made
it possible to reverse its effects--the result would be old people
with the minds of the young. It is also possible that we would
discover that the mental characteristics of the old, short of actual
senility, were a consequence not of biological breakdown but of
computer overload--the response of a limited mind to too much
accumulation of experience.
World
Enough and Time
[131]
When contemplating an extra few centuries, one obvious
question is what to do with them. Having raised one family, grown
old, and then had my youth restored, would I decide to see if I could
do even better at a second try--or conclude that that was something I
had already done? Weak evidence for the former alternative is
provided by the not uncommon pattern of grandparents raising their
grandchildren when the children's parents prove unable or unwilling
to do the job.
The same question arises in other contexts. Having had one career as
an economist, would I continue along the lines of my past work or
decide that this time around I wanted to be a novelist, an
entrepreneur, an arctic explorer? It is a familiar observation that,
in many fields, scholars do their best and most original work young.
My father once suggested the project of funding successful scholars
past their prime to retrain in some entirely unrelated field, in
order to see if the result was a second burst of creativity.
[132]
In a world without aging, that pattern might become a great deal more
common. And a novelist or entrepreneur who had first been an academic
economist or a Marine officer might bring some interesting background
to his new profession.
[133]
An alternative is leisure. It is tempting to argue that we cannot all
retire, since there has to be someone left to mow the lawn, grow the
food, and do the rest of the world's work. But it isn't clear that it
is true. Capital is also productive--more and better machinery, other
forms of improved production, permit one person to do the work of ten
or a hundred, as demonstrated by the striking fall in the fraction of
the U.S. work force engaged in producing food--from almost everybody
to almost nobody in the space of a century.
[134]
How productive capital is at present is shown by the interest
rate--the price people are willing to pay for the use of capital. The
real interest rate--the rate after allowing for inflation--has
typically been about two percent. At that rate, fifty years of
savings (less than fifty allowing for accumulated interest over the
period) would provide an eternity of an income equal to the amount
annually saved. Someone could spend the first fifty years of
adulthood earning (say) eighty thousand dollars a year, spending
forty thousand, saving the rest--and then spend forever living on
forty thousand dollars a year of interest.
[135]
Alternatively, the same person could live on sixty thousand of his
eighty thousand during his working life, then retire to a low budget
future--twenty thousand a year for food, housing, and a good internet
connection. As a final, and perhaps more attractive, alternative he
could continue working half or a third time, picking those activities
that he liked to do and other people were willing to pay him
for.
[136]
Good work if you can get it. One can easily enough imagine a future
along
these lines where a large fraction of the population, even a large
majority, was at least semi-retired.
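The compound interest arithmetic behind the fifty-year figure can be checked in a few lines. The sketch below uses the text's illustrative numbers (a 2 percent real interest rate, $40,000 saved per year) and is a toy calculation under those assumptions, not financial advice:

```python
# Sketch of the text's retirement arithmetic, using its illustrative
# numbers. A perpetual income equal to the annual saving requires
# capital of SAVED / RATE: at 2%, $40,000 / 0.02 = $2,000,000.

RATE = 0.02        # assumed real interest rate
SAVED = 40_000     # assumed annual saving
TARGET = SAVED / RATE  # capital whose interest equals the annual saving

capital, years = 0.0, 0
while capital < TARGET:
    capital = capital * (1 + RATE) + SAVED  # interest, then deposit
    years += 1

print(f"capital of ${TARGET:,.0f} reached after {years} years")
```

With these numbers the target is reached in about thirty-six years, consistent with the text's parenthetical point that accumulated interest makes it take less than fifty years of saving.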
Immortality also raises a few interesting issues for our legal
system. Consider a criminal sentenced to a life sentence. Do we
interpret that as "what a life sentence used to be"--say to age 100?
Or do we take it literally?
One approach to answering that question is to ask why we would lock
someone up for life in the first place. There are at least two
plausible reasons, coming out of two different theories of criminal
punishment. One is that we lock a murderer up for the same reason we
lock a tiger up--he is dangerous to others, so we want to keep him
where he cannot do much damage. That is the theory sometimes
described as "incapacitation." The other is that we lock a murderer
up in order to impose a cost on him--a cost high enough so that other
people contemplating murder will choose not to incur it. That is the
theory described as "deterrence." In practice, of course, we may
operate on both theories at once, believing that some criminals can
be deterred, some only incapacitated, and we cannot always be sure
which are which.
If our objective is deterrence, centuries of incarceration may be
overkill, so there is at least some argument for eventually letting
the convict out. If our objective is incapacitation, on the other
hand, we may want to keep him in. Under current circumstances, a
ninety year old murderer is unlikely to be of much danger to anyone
but himself--but if we conquer aging that will no longer be the
case.
A third justification sometimes offered for imprisonment is
rehabilitation--changing criminals so that they no longer want to
commit crimes. That is the theory that gives us "reformatories" to
reform people and "penitentiaries" to make people repent. It is hard
to see why, on that theory, we would have life sentences--but perhaps
one could argue that there are some people who take longer to be
rehabilitated than they are likely to last. If so one might
reinterpret "life" as "to age 100 or until rehabilitated, whichever
takes longer."
So far I have been considering the consequences of a solution to the
aging problem that is only hypothetical. Next we turn to one that
is actual--in a manner of speaking.
A Cold Century in Hell
Thus, the appropriate clinical trials would be to:
1. Select N subjects.
2. Preserve them.
3. Wait 100 years.
4. See if the technology of 2100 can indeed revive
them.
The reader might notice a problem: what do we tell the terminally ill
patient prior to completion of the trials? (Ralph Merkle, from a
webbed discussion of cryonics)
The idea of cryonic suspension--keeping people frozen in the hopes of
some day thawing them, reviving them, and curing what was killing
them--has been around for some time. Critics view it as a fraud or a
delusion, analogizing the problem of undoing the damage done to cells
by ice crystals in the process of freezing to converting hamburger
back into a living cow.
[137]
Supporters point out that as the technology of freezing improves it
becomes possible to decrease the damage by, among other things,
replacing the body's water with the equivalent of antifreeze during
the cooling process. They argue that as the technology needed to
revive a frozen body improves--ultimately, perhaps, through the
development of nanotechnology capable of doing repairs at the
cellular level--it will become easier to undo the remaining damage.
Finally and most convincingly, they point out that however poor your
chances are of being brought back from death if you have your body
frozen, they can hardly be worse than the chances if you let it rot
instead.
[138]
Suppose we accept their arguments--to the extent of regarding
freezing and then reviving as at least a possibility. We are then
faced with a variety of interesting problems, legal and social. Most
of them come down to a simple question--what is the status of a
corpsicle? Is it a corpse, a living person temporarily unable to act,
or something else? If I am frozen, is my wife free to remarry? If I
am then thawed, which of us is she married to? Do my heirs inherit,
and if so can I reclaim my property when I rejoin the living?
Many of these are issues that can be--and if suspension becomes
common will be--dealt with by private arrangements. If the law
regards my wife as a widow, she can still choose to regard herself as
a wife; if the law considers me frozen but alive, she can apply for a
divorce. I am in no position to contest it. If I am concerned about
having my wealth to support me in the second half of my life, there
are legal institutions--trusteeships and the like--that exist to
allow corpses at least some degree of continued control over their
assets. Such institutions are not perfect--I may be revived in a
hundred years to discover that my savings have been stolen by a
corrupt trustee, the I.R.S., or inflation--but they may be the best
we can do. Their chief limitation is one that applies to almost all
solutions--the fact that over a period of a century or more, legal
and social institutions might change in ways that defeat even prudent
attempts at planning for revival. One alternative might be to
transfer wealth in ways that do not depend on such institutions--for
example, by burying a collection of valuable objects somewhere and
preserving their location only in your own memory. That tactic faces
risks as well--you may be revived, dig up your treasure and discover
that gold coins and rare stamps are no longer worth very much. If
only you had known, you would have buried ten first editions of this
book instead.
Other problems involve adapting existing legal rules to a world where
a substantial number of people are neither quite dead nor quite
alive. If I commit a crime and then get frozen, does the statute of
limitations continue to run, providing me a get-out-of-jail-free card
if I stay frozen long enough? If I have been sentenced to fifty years
in jail and, after ten of them, "die" and am frozen, does my sentence
continue to run? What about a life sentence? Most of these are
unlikely to become serious issues until cryonic suspension and
revival become not merely a plausible possibility but a routine
procedure--at which point all sorts of interesting options become
available to those willing to take advantage of the technology.
A more immediate problem is faced by somebody who wants to get frozen
a little before he dies instead of a little after. Whether or not
freezing makes it impossible to revive me, dying surely makes it
harder. And some illnesses--cancer is an obvious example--might do
massive damage well before the point of actual death. So once it
looks as though death is certain, there might be much to be said for
getting frozen first.
Under current circumstances that is not an option, since if you are
not dead before you are frozen you will be afterwards. The law
against suicide cannot be enforced against the person most directly
concerned--at least, not until he is revived, at which point it
retroactively stops being suicide--but it can be enforced against the
people who help him. Hence in practice, under current law, being
frozen before death, even ten minutes before, is not a practical
option.
The simplest way of changing that is to interpret freezing not as
death but as a risky medical procedure--one whose outcome will not be
known for some time. It is both legal and ethical for a surgeon to
conduct an operation that might kill me--if the odds without it are
even worse. The probability of revival does not have to be very high
to meet that requirement if the other alternatives are sufficiently
bad.
XVII: Very Small Legos
The principles of physics, as far as I can see, do not speak
against the possibility of maneuvering things atom by atom. It is not
an attempt to violate any laws; it is something, in principle, that
can be done; but in practice, it has not been done because we are too
big. (Richard Feynman, from a talk delivered in 1959)
We all know that atoms are small. Avogadro's number describes just
how small they are. Written out in full it is about
602,200,000,000,000,000,000,000. It is the ratio between grams, the
units we use to measure the mass of ordinarily small objects such as
pencils, and the units in which we measure the mass of atoms. An atom
of hydrogen has an atomic weight of about one, so Avogadro's number
is the number of atoms in a gram of hydrogen.
Looking at all those zeros, you can see that even very small objects
have a lot of atoms in them. A human hair, for example, contains more
than a million billion. The microscopic transistors in a computer
chip are small compared to us but large compared to an atom.
Everything humans construct, with the exception of some very recent
experiments, is built out of enormous conglomerations of atoms.
We ourselves, on the other hand, like all living things, are
engineered at the atomic scale. The cellular machinery that makes us
run depends on single molecules--enzymes, DNA, RNA and the like--each
a complicated assembly of atoms, every one in the right place. When
an atom in a strand of DNA is in the wrong place, the result is a
mutation. As we become better and better at manipulating very small
objects, it begins to become possible for us to build as we are
built--to construct machines at the atomic level, assembling
individual atoms into molecules that do things. That is the central
idea of nanotechnology.
[139]
One attraction of the idea is that it lets you build things that
cannot be built with present technologies. Thus, for example, since
the bonds between atoms are very strong, it becomes possible to build
very strong fibers made out of long strand molecules. It becomes
possible to use diamond--merely a particular arrangement of carbon
atoms--as a structural material. We may even be able to build
mechanical computers, inspired by Babbage's failed 19th-century
design. Mechanical parts move very slowly compared to the
movement of electrons in electronic computers. But if the parts are
on an atomic scale, they don't have to move very far.
In some cases, small is the objective. A human cell is big enough to
have room for the multitude of atomic machines that make us function.
With a sufficiently good nanotechnology, it ought to be possible to
build a cell repair machine much smaller than a cell--a sort of robot
submarine that goes into a cell, fixes whatever is wrong, then exits
that cell and moves on to the next. If we can build mechanical
nanocomputers, it could be a very smart robot submarine.
The human body contains about sixty trillion cells, so fixing all of
them with one cell repair machine would take a while. But there is no
reason to limit ourselves to one. Or ten. Or a million. Which brings
us to another advantage of nanotechnology.
Carbon atoms are all the same (more precisely, carbon-12 atoms are
all the same, but I am going to ignore the complications introduced
by isotopes in this discussion). So are nitrogen atoms, hydrogen
atoms, iron atoms. Imagine yourself, shrunk impossibly small, putting
together nanomachines. From your point of view, the world is made up
of identical parts, like tiny Legos. Pick up four identical
hydrogens, attach them to one carbon atom, and you have a molecule of
methane. Repeat and you have another, perfectly identical.
We cannot shrink you that small, of course, since you yourself are
made up of atoms. So our first project, once we have the basics of
the technology worked out, is to build an assembler. An assembler is
a nanoscale machine for building other nanoscale machines. Think of
it as a tiny robot--where tiny might mean built out of fewer than a
billion atoms. It is small enough so that it can manipulate
individual atoms, assembling them into a desired shape. This is far
from trivial, since atoms are not really Legos and cannot be
manipulated and snapped together in the same way. But we know that
assembling atoms into molecules is possible, since we, and other
living creatures, do it routinely--and some of the molecules we build
inside ourselves are very complicated ones.
[140]
Organic chemists, with much less detailed control over material than
an assembler would have, succeed in deliberately assembling at least
moderately complicated molecules.
Once you have one assembler, you write it a program for building
another. Now you have two. Each of them builds another. Four. After
ten doublings you have more than a thousand assemblers, after twenty
more than a million. Now you write a program for building a cell
repair machine and set your assemblers to work.
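The arithmetic of that doubling process is easy to check. A minimal
sketch in Python (the doubling counts, not the chemistry, are the
point):

```python
# Each generation, every assembler builds one copy of itself,
# doubling the population. Count the doublings needed to reach a target.

def doublings_needed(target, start=1):
    """Number of doublings before the population reaches `target`."""
    population, count = start, 0
    while population < target:
        population *= 2
        count += 1
    return count

print(doublings_needed(1_000))              # 10 (2**10 = 1,024)
print(doublings_needed(1_000_000))          # 20 (2**20 = 1,048,576)
print(doublings_needed(1_000_000_000_000))  # 40: a trillion assemblers
```

The same arithmetic underlies the "forty doublings to a trillion"
figure later in the chapter: exponential growth makes the jump from
one assembler to an army a matter of generations, not centuries.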
It sounds like magic. But consider that your sixty trillion cells
started out as one cell and reached their present numbers by an
analogous--but much more complicated--process. Once you have a
billion or so cell repair machines you inject them into your body,
sit back and relax. When they are finished you feel like a new
man--and are.
A friend of mine (Albert R. Hibbs) suggests a very interesting
possibility for relatively small machines. He says that, although it
is a very wild idea, it would be interesting in surgery if you could
swallow the surgeon. (Feynman)
A cell repair machine would be a very complicated piece of
nanotechnology indeed, so although we may eventually get such things,
it is unlikely to happen very soon. Producing super strong materials,
or nanomachines that produce medical drugs designed on a
computer--one molecule at a time--are likely to be earlier
applications of the technology. To keep us going while we wait for
the cell repair machine, Ralph Merkle has sketched out an ingenious
proposal for an improved version of a red blood cell. His model is a
nano-scale compressed air tank--which holds a lot more oxygen than
the chemical bonds of the current model. Its advantage becomes clear
the day you have a heart attack and your heart stops beating. Instead
of dropping dead you pick up the phone, arrange an emergency
appointment with your doctor, get in the car and drive
there--functioning for several hours on the supply of oxygen already
in your bloodstream.
Nanotechnology could be used to construct large objects as well as
small ones. It takes a lot of assemblers to do it. But if we start
with one assembler, instructions in the form of programs it can read
and implement, lots of stuff--atoms of all the necessary sorts--and a
little time we can have a lot of assemblers. Once we have a lot of
assemblers and the software to control them, we can build almost
anything. If the idea of a very large object built by molecular
machinery strikes you as odd, consider a whale.
It doesn't cost anything for materials, you see. So I want to build a
billion tiny factories, models of each other, which are manufacturing
simultaneously, drilling holes, stamping parts, and so on.
(Feynman)
Like most new and unproven technology, nanotech is still
controversial, with some authors arguing that the proposal is and
always will be impossible for a variety of reasons.
[141]
The obvious counterexample is life--a functioning nanotechnology
based on molecular machines constructed largely of carbon. A more
persuasive argument against the technology is that, although nanotech
may be possible, it is unlikely to be very useful, since anything
really good that it can produce will already have been produced by
nature. Compressed air blood cells, for example, would be useful to
us and other living things quite a long time ago, so if the design
works why don't we already have them?
The answer is that although evolution is a powerful design system, it
has some important limitations. If a random mutation changes an
organism in a way that increases its reproductive success, that
mutation will spread through the population; after a while everyone
has it, and the next mutation can start from there. So evolution can
produce large improvements that occur through a long series of small
changes, each itself a small improvement. Evolutionary biologists
have actually traced out how complicated organs, such as the eye, are
produced through such a long series of small changes.
[142]
But if a large improvement cannot be produced that way--if you need
the right twenty mutations all happening at once in the same
organism, and nineteen are no use--evolution is unlikely to produce
it. The result is that evolution has explored only a small part of
the design space--the set of possible ways of assembling atoms to do
useful things.
[143]
Human beings also design things by a series of small steps--the F-111
did not leap full-grown from the brains of the Wright Brothers, and
the plane they did produce was powered by an internal combustion
engine whose basic design had been invented and improved by others.
But what seems a small step to a human thinking out ways of arranging
atoms to do something is not necessarily small from the standpoint of
a process of random mutation. Hence we would expect that human
beings, provided with the tools to build molecular machines, would be
able to explore different parts of the design space, to build at
least some useful machines that evolution failed to build. Very small
compressed air tanks, for example.
Readers interested in arguments for and against the workability of
nanotechnology can find and explore them online. For the purposes of
this chapter I am going to assume that the fundamental
idea--constructing things at the atomic scale using atomic scale
assemblers--is workable and will, at some point in the next century,
happen. That leaves us to consider what sort of world that technology
would give us.
Software Writ Large
In order to build a nanotech car I need assemblers--produced
in unlimited numbers by other assemblers--raw material, and a
program, a full description of what atoms go where. The raw material
should be no problem. Dirt is largely aluminum, along with large
amounts of silicon, oxygen, possibly carbon and nitrogen. If I need
additional elements that the dirt does not contain, I can always dump
in a shovel full of this and that. Add programmed assemblers, stir,
and wait for them to find the necessary atoms and arrange them. When
they are done I have a ton or two less dirt, a ton or two more car.
It sounds like magic--or the process that produces an oak tree.
I have left out one input--energy. An acorn contains design
specifications and machinery for building an oak tree, but it also
needs sunlight to power the process. Similarly, assemblers will need
some source of energy. One obvious possibility is chemical
energy--disassembling high energy molecules to get both power and
atoms. Perhaps we will have to dump a bucket of alcohol or gasoline
on our pile of dirt before we start stirring.
Once we have the basic technology, the hard part is the design--there
are a lot of atoms in a car. Fortunately we don't have to calculate
the location of each atom--once we have one wheel designed, the
others can be copied, and similarly with many other parts. Once we
have worked out the atomic structure for a cubic micron or so of our
diamond windshield we can duplicate it over and over for the rest,
with a little tweaking of the design when we get to an edge. But even
allowing for all plausible redundancy, designing a car--as good a car
as the technology permits you to build--is going to be a big
project.
What I have just described is a technology in which most of the cost
of producing a product is in creating the initial design--once you
have that, it is relatively inexpensive to make the product itself.
We already have a technology with those characteristics--software.
Producing the first copy of Microsoft Office took an enormous
investment of time and effort by a large number of programmers. The
second copy required a CD burner and a CD-R disk--cost about a dollar.
One implication of nanotechnology is an economy for producing cars
very much like the economy that presently produces word processing
programs.
A familiar problem in the software economy is piracy. Not only can
Microsoft produce additional copies of Office for a dollar apiece, I
can do it too. That raises problems for Microsoft, or anyone else who
expects to be rewarded for producing software with money paid to buy
it. Nanotechnology raises the same problem, although in a somewhat
less severe fashion: I cannot simply put my friend's nanotech car or
nanotech computer into a disk drive and burn a copy.
I can, however, disassemble it. To do that, I use nanomachines that
work like assemblers, but backwards. Instead of starting with a
description of where atoms are to go and putting them there, they
start with an object--an automobile, say--and remove the atoms, one
by one, keeping track of where they all were.
Disassembling an automobile with one disassembler would be a tedious
project, but I am not limited to one. Using my army of assemblers I
build an army of disassemblers, each provided with some way of
getting the information it generates back to me--perhaps a miniature
radio transmitter, perhaps some less obvious device. I set them all
to work. When they are done the car has been reduced to its
constituent elements--and a complete design description. If there
were computers big enough to design the car, there are computers big
enough to store the design. Now I program my assemblers and go into
the car business.
One possible solution to the problem of making it in someone's
interest to design the car in the first place is an old legal
technology--copyright. Having created my design for a car, I
copyright it. If you go into business selling duplicates, I sue you
for copyright violation. This should work at least a little better
for cars than it now does for computer programs, both because the
first stage of copying--disassembling, equivalent to reading a
computer program from a disk--is a lot harder for cars, and because
cars are bigger and harder to hide than programs.
The solution may break down if instead of selling the car the pirate
sells the design--to individual consumers, each with his own army of
assemblers ready to go to work. We are now back in the world of
software. Very hard software. The copyright owner now has to enforce
his rights, copy by copy, against the ultimate consumer, which is a
lot harder than enforcing them against someone pirating his property
in bulk and selling it.
Suppose that, for these reasons or others, copyright turns out not to
work. How else might people who design complicated structures at the
molecular level get paid for doing so?
One possibility is tie-ins with other goods or services that cannot
be produced so cheaply--land, say, or backrubs. You download from a
(very broad bandwidth) future internet the full specs for building a
new sports car, complete with diamond windshield, an engine that
burns almost anything and gets a thousand miles a gallon, and a
combined radar/optical/pattern recognition system that warns you of
any obstacle within a mile and, if the emergency autopilot is
engaged, avoids it. You convert the information into programmed
tapes--very small programmed tapes--for your assemblers, find a
convenient spot in the back yard, and set them to work. By next
morning the car is sitting there in all its splendor.
You get in, turn the key, appreciate the purr of the engine, but are
less happy with another feature--the melodious voice telling you
everything you didn't want to know about the lovely housing
development completed last week, designed for people just like you.
On further investigation, you discover that turning off the
advertising is not an option. Neither is disabling it--the audio
system is a molecular network spread through the fabric of the car.
If you want the car without the advertising you will have to design
it yourself. You cast your mind back to the early years of the
internet, thirty or forty years ago--and the solution found by web
sites to the problem of paying their bills.
Another possibility is a customized car. What you download--this time
after paying for it--is a very special car indeed, one of a kind.
Before starting, it checks your fingerprints (read from the steering
wheel), retinal patterns (scanner above the windshield) and DNA
(you'll never miss a few dead skin cells). If they all match, it
runs. The car is safe from thieves, since they cannot start it; you
don't even have to carry a key, since you are the key. But if you go
to all the trouble of disassembling it and making lots of copies,
they will not be very useful to anyone but you. If your neighbor
wants a car, he will have to buy one--customized to him.
This again is an old solution, although not much used for consumer
software. While we do not have adequate biometric identification just
yet, the equivalent for computers is fairly easy--all it requires is
a cpu with its own serial number. Given that, or some equivalent, it
is possible to produce a program that will only run on one machine.
One version of this approach uses a hardware dongle--a device not
easily copied that attaches to the computer and is recognized by the
program.
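The node-locking idea can be sketched in a few lines of Python.
Everything here is hypothetical: assume the hardware exposes a stable
serial number and the vendor holds a secret key; the program then
refuses to run unless its license token matches that one machine.

```python
# Hypothetical sketch of node-locked licensing: the vendor derives a
# token from one machine's serial number; the program checks it at startup.
import hmac
import hashlib

VENDOR_SECRET = b"vendor-signing-key"  # assumed to stay secret with the vendor

def issue_license(serial: str) -> str:
    """Vendor side: compute a license token tied to one machine."""
    return hmac.new(VENDOR_SECRET, serial.encode(), hashlib.sha256).hexdigest()

def license_is_valid(serial: str, token: str) -> bool:
    """Program side: run only if the token matches this machine's serial."""
    return hmac.compare_digest(issue_license(serial), token)

token = issue_license("CPU-1234")
print(license_is_valid("CPU-1234", token))  # True on the licensed machine
print(license_is_valid("CPU-5678", token))  # False anywhere else
```

Copying the program is easy; copying it usefully is not, since the
token is worthless on any machine with a different serial--the same
logic as the DNA-locked car.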
A third possibility is open source production--a network of
individuals cooperating to produce and improve designs, motivated by
some combination of status, desire for the final product, and
whatever else motivated the creators of Linux, Sendmail, and
Apache.
As these examples suggest, a mature nanotechnology raises issues very
similar to those raised by software, and those issues can be dealt
with in similar ways--imperfectly, but perhaps well enough. It also
raises other issues that may be more serious.
The Grey Goo Scenario
"Plants" with "leaves" no more efficient than today's solar
cells could out-compete real
plants, crowding the biosphere with an inedible foliage. Tough,
omnivorous "bacteria" could
out-compete real bacteria: they could spread like blowing pollen,
replicate swiftly, and reduce the biosphere to dust in a matter of
days. Dangerous replicators could easily be too tough, small, and
rapidly spreading to stop - at least if we made no preparation.
We have trouble enough controlling viruses
and fruit flies. (Drexler, Engines)
Life is, on the whole, a good thing--but we are willing to make an
exception for certain forms of life, such as smallpox. Molecular
machines are, on the whole, a good thing. But there too, there might
be exceptions.
An assembler is a molecular machine capable of building a wide
variety of molecular machines, including copies of itself. It should
be much easier to build a machine that copies only itself--a
replicator. For proof of concept, consider a virus, a bacterium, or a
human being--although the last doesn't produce an exact copy.
Now consider a replicator designed to build copies of itself, which
build copies, which ... . Assume it uses only materials readily
available in the natural environment, with sunlight as its power
supply. Simple calculations suggest that, in a startlingly short
time, it could convert everything from the dirt up into copies of
itself, leaving only whatever elements happen to be in excess supply.
That is what has come to be referred to, in nanotech circles, as the
grey goo scenario.
If you happen to be the first one to develop a workable
nanotechnology, certain precautions might be in order. One obvious
one is to avoid, so far as possible, building replicators. The
problem with this is that you will want assemblers--and one of the
things an assembler can assemble is another assembler. But at least
you can make sure nothing else is designed to replicate--and an
assembler, being a large and very complicated molecular machine, may
pose less of a threat of going wild than simpler machines whose only
design goal is reproduction.
A precaution you could apply to assemblers as well as other possible
replicators is to design them to require some input, whether matter
or energy, not available in the natural environment. That way they
can replicate usefully under your control but pose no hazard if they
get out. Another is to give them a limited lifetime--a counter that
keeps track of each generation of copying and turns the machine off
when it reaches its preset limit. With precautions like these to
supplement the obvious precaution of keeping your replicators in
sealed environments, it should be possible to make sure that no
replicator you have designed to be safe poses any serious threat of
turning the world into grey goo. Unfortunately, that doesn't solve
the problem.
One reason it doesn't is that nanotech replicators, like natural
biological replicators, can mutate. A cosmic ray, for example, might
knock an atom off the instruction tape that controls copying,
producing defective copies--and one defect might turn off the limit
on the number of generations. It might even, although much less probably,
somehow eliminate the need for the one element not available in a
natural environment. Once freed of such constraints, one could
imagine wild nanotech replicators gradually evolving, just as
biological replicators do. Like biological replicators, their
evolution would be towards increased reproductive success--getting
better and better at converting everything else in existence into
copies of themselves. And it is at least possible that, by exploiting
design possibilities visible to a human designer and designed into
their ancestors but inaccessible to the continuous processes of
evolution, they would do a better job of it than natural
replicators.
It should be possible to design replicators, if one is sufficiently
clever, that cannot mutate in such a way. One obvious way is through
redundancy. You might, for example, give the replicator three copies
of its instruction tape and design it to execute an instruction only
if all three agree; the odds that three cosmic rays will each remove
the same atom from each tape are pretty low. Of course, you have to
make sure the design is such that any random damage that turns off
the checking process also disables the machine. And one might also
want to be sure that elements not available in the natural
environment play a sufficiently central role in the working of the
replicator so that there is no plausible way of mutating around the
constraint.
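The voting scheme is standard fault-tolerance engineering (it is how
triple-redundant flight computers work). A toy model in Python,
treating each "tape" as a simple list of instructions:

```python
# Toy model of a triple-redundant instruction tape: execute an
# instruction only if all three copies agree; halt on any disagreement.

def read_instruction(tapes, position):
    """Return the instruction at `position` if all three tapes agree,
    otherwise None--the safe failure mode, shutting the machine down."""
    a, b, c = (tape[position] for tape in tapes)
    if a == b == c:
        return a
    return None  # disagreement: disable rather than copy a mutation

tapes = [["fetch", "bond", "copy"]] * 3        # three identical tapes
damaged = [["fetch", "bond", "copy"],
           ["fetch", "BOND", "copy"],          # one cosmic-ray "mutation"
           ["fetch", "bond", "copy"]]
print(read_instruction(tapes, 1))    # bond
print(read_instruction(damaged, 1))  # None: the machine halts
```

A single hit corrupts one tape and stops the machine; only three
matching hits in the same spot could slip a mutation through, which
is why the odds of a dangerous mutation fall so sharply.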
Almost Worse Than the Disease
I have described a collection of precautions that could
work--in a world in which only one organization has access to the
tools of nanotechnology and that organization acts in a prudent and
benevolent fashion. Is that likely?
On the face of it such a monopoly seems extraordinarily unlikely in
anything much like our world. But perhaps not. Suppose the idea of
nanotechnology is well understood and accepted by a number of
organizations, probably governments, with substantial resources--at a
point well before anyone has succeeded in building an assembler. Each
of those organizations engages in extensive computerized design work,
figuring out exactly how to build a variety of useful molecular
machines once it has the assemblers to build them with. Those
machines include designer plagues, engineered obedience drugs, a
variety of superweapons, and much else.
One organization makes the breakthrough; it now has an assembler.
Very shortly--after about forty doublings--it has a trillion
assemblers. It sets them to work building what it has already
designed. A week later it rules the world--and one of its first acts
is to forbid anyone else from doing research in nanotechnology.
It seems a wildly implausible scenario, but I am not sure it is
impossible--I do not entirely trust my intuition about what can or
cannot happen, given a technology with such extraordinary
possibilities. The result would be a world government with very
nearly unlimited power. I can see no reason, in nanotechnology or
anything else, to expect it to behave any better than past
governments with such power. It would, I suppose, be an improvement
on gray goo, but not much of an improvement.
Between a Rock and a Hard Place
Suppose we avoid world dictatorship and end up with a world
of multiple governments, some of them reasonably free and democratic,
and fairly widespread knowledge of nanotechnology. What are the
consequences?
One possibility is that everyone treats nanotech as a government
monopoly, with the products but not the technology made available to
the general public. Eric Drexler describes in some detail a version
of this in which everybody is free to experiment with the technology,
but only in a (nanotechnologically constructed) sealed and
inaccessible environment, with the actual implementation of the
resulting designs under strict controls. Presumably, once the basic
information is out, the enforcement of such regulations against
private individuals who want to violate them will itself depend on
the government's lead in the nanotech arms race--providing it with
devices for surveillance and control that will make the video
mosquitoes of an earlier chapter seem primitive in comparison. Again
not a very attractive picture, but an improvement on all of us
turning into gray goo.
The problem with this solution is that it looks very much like a case
of setting the fox to guard the hen house. Private individuals may
occasionally do research on how to kill large numbers of people and
destroy large amounts of stuff, but the overwhelming bulk of such
work is done by governments for military purposes. The very
organizations that, in this version, have control over the
development and use of nanotech are the ones most likely to spend
substantial resources finding ways of using the technology to do what
most of the rest of us regard as bad things.
In the extreme case, we might get gray goo deliberately designed as a
doomsday machine--by a government that wants the ability to threaten
everyone else with universal suicide. In a less extreme case, we
could expect to see a lot of research on designing molecular machines
to kill large numbers of (selected) people or destroy large amounts
of (other nations') property. Governments doing military research,
while they prefer to avoid killing their own citizens in the process,
are willing to take risks--as suggested by incidents such as the
accident in a Soviet germ warfare facility that killed several
hundred people in a nearby city.
[144]
And they work in an atmosphere of secrecy that may make it hard for
other people to notice and point out risks in their work that have
not occurred to them. Hence there is a very real possibility that
deliberately destructive molecular machines will turn out to be even
more destructive than their designers intended--or get released
before their designers want them to be.
Consider two possible worlds. In the first, nanotechnology is a
difficult and expensive business, requiring billions of dollars of
equipment and skilled labor to create workable designs for molecular
machines that do useful things. In that world, gray goo is unlikely
to be produced deliberately by anybody but a government--and any
organization big enough to produce it entirely by accident is
probably well enough organized to take precautions. In that world
defenses against gray goo--more generally, molecular machines
designed to protect human beings and their property from a wide
variety of risks, including destructive molecular machines, tailored
plagues, and more mundane hazards--will be big sellers, hence have
very large resources devoted to designing them commercially, assuming
that designing them is legal. In that world, making nanotech a
government monopoly will do little to reduce the downside risk, since
governments will be the main source of that risk, but might
substantially reduce the chance of protecting ourselves against
it.
In the second world--perhaps the first world a few decades
later--nanotech is cheap. Not only can the U.S. Department of Defense
design gray goo if it wants to, you can design it too--on your
desktop. In this world, nothing much short of a small number of
dictatorships maintained in power--over rivals and subjects--by a
lead in the nanotech arms race is going to keep the technology out of
the hands of anyone who wants it. And it is far from clear that even
that would suffice.
In this second world, the nanotech equivalent of designer plagues
will exist for much the same reasons that computer viruses now exist.
Some will come into existence the way the original Internet worm did,
the work of someone very clever, with no bad intent, who makes one
mistake too many. Some will be designed to do mischief and turn out
to do more mischief than intended. And a few will be deliberately
created as instruments of apocalypse by people who for one reason or
another like the idea.
Before you conclude that the end of the world is upon you, consider
the other side of the technology. With enough cell repair machines on
duty, designer plagues may not be a problem. Human beings want to
live and will pay for the privilege. Hence the resources that will go
into designing protections against threats, nanotechnological or
otherwise, will be enormously greater than the (private) resources
that go into creating such threats--as they are at present, with the
much more limited tools available to us. Unless it turns out that,
with this technology, the offense has an overwhelming advantage over
the defense, nanotech defenses should almost entirely neutralize the
threat from the basement terrorist or careless experimenter. The only
serious threat will be from organizations willing and able to spend
billions of dollars creating molecular killers--almost all of them
governments.
The previous paragraph contained a crucial caveat--that offense not
be a great deal easier than defense. The gray goo story suggests that
it might be, that simple molecular machines designed to turn
everything in the environment into copies of themselves might have an
overwhelming advantage over their more elaborate opponents.
The experiment has been done; the results so far suggest that that is
not the case. We live in a world populated by molecular machines. All
of them, from viruses up to blue whales, have been designed with the
purpose of turning as much of their environment as they can manage
into copies of themselves--we call it reproductive success. So far,
at least, the simple ones have not turned out to have any
overwhelming advantage over the complicated ones: Blue whales, and
human beings, are still around.
That does not guarantee safety in a nanotech future. As I pointed out
earlier, nanotechnology greatly expands the region of the design
space for molecular machines that is accessible--human beings will be
able to create things that evolution could not. It is conceivable
that, in that expanded space of possible designs, gray goo will turn
out to be a winner. All we can say is that so far, in the more
restricted space of carbon based life capable of being produced by
evolution, it hasn't turned out that way.
In dealing with nanotechnology, we are faced with a choice between
centralized solutions--in the limit, a world government with a
nanotech monopoly--and decentralized solutions. As a general rule I
much prefer the latter. But a technology that raises the possibility
of a talented teenager producing the end of the world in his basement
makes the case for centralized regulation look a lot better than it
does in most other contexts--good enough to have convinced some
thinkers, among them Eric Drexler, to make it at least a partial
exception to their usual preference for decentralization, private
markets, laissez-faire.
But while the case for centralization is in some ways strongest for
so powerful a technology, so is the case against. There has been only
one occasion in my life when I thought there was a significant chance
that many of those near and dear to me might die. It occurred a
little while after the 9/11 terrorist attack, when I started looking
into the subject of smallpox.
Smallpox had been officially eliminated; so far as was publicly
known, the only remaining strains of the virus were held by U.S. and
Russian government laboratories. Because it had been eliminated, and
because public health is a field dominated by governments, smallpox
vaccination had been eliminated too. It had apparently not occurred
to anybody in a position to do anything about it that it was worth
maintaining sufficient backup capacity to reverse that decision
quickly. The U.S. had supplies of vaccine, but they were adequate to
vaccinate only a small fraction of the population--so far as I could
tell, nobody else had substantial supplies either.
Smallpox, in an unvaccinated population, produces mortality rates as
high as thirty percent. Most of the world's population is now
unvaccinated; those of us who were vaccinated forty or fifty years
ago may or may not still be protected. If a terrorist had gotten a
sample of the virus, either stolen from a government lab or cultured
from the bodies of smallpox victims buried somewhere in the arctic at
some time in the past--nobody seems to know for sure whether or not
that is possible--he could have used it to kill hundreds of millions,
perhaps more than a billion, people. That risk existed because the
technologies to protect against replicators--that particular class of
replicators--had been under centralized control. The center had
decided that the problem was solved.
Fortunately, it didn't happen.
Chapter XVIII: Dangerous Company
"The specialness of humanity is found only between our ears;
if you go looking for it anywhere else, you'll be disappointed." (Lee
Silver)
What and where in my body is me is a very old puzzle. An early
attempt to answer it by experiment is described in the Jomsviking
saga, written in the 13th century. After a battle, captured warriors
are being executed. One of them suggests that the occasion provides
the perfect opportunity to settle an ongoing argument about the
location of consciousness. He will hold a small knife point down
while the executioner cuts off his head with a sharp sword; as soon
as his head is off, he will try to turn the knife point up. It takes
a few seconds for a man to die, so if his consciousness is in his
body he will succeed; if it is in his head, no longer attached to his
body, it will fail. The experiment goes as proposed; the knife falls
point down.
[145]
We still do not know with any confidence what consciousness is, but we
know more about the subject than the Jomvikings did. It seems clear
that it is closely connected to the brain. A programmed computer
comes closer to acting like the human mind than anything else whose
working we understand. And we know enough about the mechanism of the
brain to plausibly interpret it as an organic computer. That suggests
an obvious and interesting conjecture--that what I am is a program,
software, running on the hardware of my brain. Current estimates
suggest that the brain has enormously greater processing power than
any existing computer, so it is not surprising that computers can do
only a very imperfect job of emulating human thought.
This conjecture raises an obvious, interesting and frightening
possibility. Computers have, for the past thirty years or so, been
doubling their power every year or two--a pattern known, in several
different formulations, as "Moore's Law." If that rate of growth
continues, at some point in the not very distant future--Raymond
Kurzweil's estimate is about thirty years--we should be able to build
computers that are as smart as we are.
Building the computer is only part of the problem; we still have to
program it. A computer without software is only an expensive
paperweight. In order to get human level intelligence in a computer,
we have to find some way of producing a software equivalent of
us.
The obvious way is to figure out how we think--more generally, how
thought works--and write the program accordingly. Early work in A.I.
followed that strategy, attempting to write software that could do
very simple tasks--recognize objects, for example--of the sort our
minds do. It turned out to be a surprisingly difficult problem,
giving A.I. a reputation as a field that promised a great deal more
than it performed.
One might conjecture that the problem is not only difficult but
impossible, that a mind of a given level of complexity--exactly how
one would define that is not clear--can only understand simpler
things than itself, hence cannot understand how it itself works. But
even if that is true, it does not follow that we cannot build
machines at least as smart as we are--because one does not have to
understand things to build them. We ourselves are, for those of us
who accept evolution rather than divine creation as the best
explanation of our existence, a striking counterexample. Evolution
has no mind. Yet it has constructed minds--including ours.
This suggests one strategy for creating smarter software, and one
that has come into increasing use in recent years. Set up a virtual
analog of evolution, a system where software is subject to some sort
of random variation, tested against a criterion of success, and
selected according to how well it meets that criterion, with the
process being repeated a very large number of times, using the output
of one stage as the input for the next. It is through a version of
that approach that the software currently used to recognize faces--a
computer capability discussed in an earlier chapter--was created. In
principle, if we had powerful enough computers and some simple way of
judging the intelligence of a program, we could apply the same
approach to creating programs with human level intelligence.
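The variation-test-selection loop just described can be sketched in a few lines. This is a toy illustration only, not any real research system; the string-matching fitness criterion and all the parameter values are assumptions chosen purely for the example.

```python
import random

def evolve(target, generations=20000, seed=0):
    """Toy evolutionary loop: random variation, a criterion of success,
    and selection, repeated many times (a simple one-parent hill-climber)."""
    rng = random.Random(seed)
    alphabet = "abcdefghijklmnopqrstuvwxyz "
    score = lambda cand: sum(c == t for c, t in zip(cand, target))
    current = [rng.choice(alphabet) for _ in target]   # random starting point
    best = score(current)
    for _ in range(generations):
        child = list(current)
        # random variation: change one character at random
        child[rng.randrange(len(child))] = rng.choice(alphabet)
        s = child_score = score(child)
        if s >= best:
            # selection: keep the variant if it meets the criterion
            # at least as well as its parent
            current, best = child, s
        if best == len(target):
            break
    return "".join(current)

print(evolve("a virtual analog of evolution"))
```

Real applications replace the string-matching score with something harder to satisfy, such as how well a candidate program recognizes faces, but the shape of the loop is the same.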
A second alternative is reverse engineering. We have, after all, an
example of human level intelligence readily available. If we could
figure out in enough detail how the brain functions--even if we did
not fully understand why functioning that way resulted in an
intelligent, self-aware entity--we could emulate it in silicon, build
a machine analog of a generic human brain. Our brains must be to a
significant degree self programming, since the only information they
start with is contained in the DNA of a single fertilized cell, so
presumably, with enough trial and error, we could get our emulated
brain to wake up and learn to think.
A third alternative is to reverse engineer not a generic brain but a
particular brain. Suppose one could somehow build sufficiently good
sensors to construct a precise picture of both the structure and the
state of a specific human brain at a particular instant--not only
what neuron connects to what and how, but what state every neuron is in.
Suppose you can then precisely emulate that structure in that state
in hardware. If all I am is software running on the hardware of my
brain, and you can fully emulate that software and its current state
on different hardware, you ought to have an artificial intelligence
that, at least until it evaluates data coming in after its creation,
thinks it is me. This idea, commonly described as "uploading" a human
being, raises a very large number of questions, practical, legal,
philosophical and moral. They become especially interesting if we
assume that our sensors are delicate enough to observe my brain
without damaging it--leaving, after the upload, two David Friedmans,
one running in carbon and one in silicon.
A New World
"I don't think we're in Kansas anymore,
Toto."
A future with human level artificial intelligence,
however produced, raises considerable problems for existing legal,
political and social arrangements. Does a computer have legal rights?
Can it vote? Is killing it murder? Are you obliged to keep promises
to it? Is it a person?
Suppose we eventually reach what seems the obvious conclusion--that a
person is defined by something more fundamental than human DNA, or
any DNA at all, and some computers qualify. We now have new
problems--coming from the fact that these people are different in
some very fundamental ways from all the people we have known so
far.
At present, a human being is intricately and inextricably linked to a
particular body. A computer program can run on any suitable hardware.
Humans can sleep, but if you turn them off completely they die. You
can save a computer program's current state to your hard disk, turn
off the computer, turn it back on tomorrow, and bring the program
back up. When you switched it off, was that murder? Does it depend on
whether or not you eventually switch it on again?
Humans claim to reproduce themselves, but it isn't true. My wife and
I jointly produced children--she did the hard part--but neither of
them was a reproduction of either of us. Even with a clone, only the
DNA would be identical--the experiences, thoughts, beliefs, memories,
personality would be its own.
A computer program, on the other hand, can be copied to multiple
machines; you can even run multiple instances of the same program on
one machine. When a program that happens to be a person is copied,
which copy gets property that person owns? Which is responsible for
debts? Which gets punished for crimes committed before the
copying--and how?
We have strong legal and moral rules against owning other people's
bodies, at least while they are alive and perhaps even afterwards.
But an A.I. program runs on hardware somebody built, hardware that
could also be used to run other sorts of software. When someone
produces the first human level A.I.--presumably on cutting edge
hardware costing many millions of dollars--does the program get
ownership of the computer it is running on? Does it have a legal
right to its requirements for life, most obviously power? Do its
creators, assuming they still have sufficient physical control over
the hardware, get to save it to disk, shut it down, and start working
on the Mark II version?
Suppose I make a deal with a human level A.I. I will provide a
suitable computer onto which it will transfer a copy of itself. In
exchange it agrees that for the next year it will spend half its
time--twelve hours a day--working for me for free. Is the copy bound
by that agreement? "Yes" means slavery. "No" is a good reason why
nobody will provide hardware for the second copy. Not, at least,
unless he retains the right to turn it off.
Dropping the Other Shoe
I have been discussing a variety of interesting puzzles
associated with the problem of adapting our institutions to human
level artificial intelligence. That problem, if it occurs, is not
likely to last very long.
Earlier I quoted Kurzweil's estimate of about thirty years to human
level A.I. Suppose he is correct. Further suppose that Moore's law,
or something similar, continues to hold--computers continue to get
twice as powerful every year or two. In forty years, that makes them
something like a hundred times as smart as we are. We are now
chimpanzees--and had better hope that our new masters like us.
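The doubling arithmetic behind that figure is easy to check. The numbers below are assumptions taken from the passage: human-level AI at year thirty, and one doubling every year and a half, splitting the difference on "every year or two."

```python
def capability(years_past_human_level, doubling_time_years=1.5):
    """Relative computing power after repeated Moore's-law doublings."""
    return 2 ** (years_past_human_level / doubling_time_years)

# Forty years out is ten years past the assumed thirty-year mark for
# human-level AI; at a doubling every 1.5 years that is roughly 100x.
print(round(capability(10)))
```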
Kurzweil's solution to this problem is that we become computers
too--at least in part. The technological developments leading to
advanced A.I. are likely to be associated with much greater
understanding of how our own brains work. That ought to make it
possible to construct much better brain to machine
interfaces--letting us move a substantial part of our thinking to
silicon too. Consider 89352 times 40327 and the answer is obviously
3603298104. Multiplying five figure numbers isn't all that useful a
skill, but if we understand enough about thinking to build computers
that think as well as we do--whether by design, evolution, or reverse
engineering--we should understand enough to offload a lot of more
useful parts of our onboard information processing to external
hardware. Now we can take advantage of Moore's law too.
The extreme version of this scenario merges into uploading. Over
time, more and more of your thinking is done in silicon, less and
less in carbon. Eventually your brain, perhaps your body as well,
come to play a minor role in your life--vestigial organs kept around
mainly out of sentiment.
Short of becoming partly or entirely computers ourselves, or ending
up as (optimistically) the pets of computer superminds, I see two
other possibilities. One is that, for some reason, the continual
growth of computing power that we have observed in recent decades
runs into some natural limit and slows or stops. The result might be
a world where we never get human level A.I.--although we might still
have much better computers than we now have. Less plausibly, the
process might slow down just at the right time, leaving us with peers
but not masters--and a very interesting future. The only argument I
can see for expecting that outcome is that that is how smart we
are--and perhaps there are fundamental limits to thinking ability
that our species ran into a few hundred thousand years back. But it
doesn't strike me as very convincing.
[Third possibility--computers that are
super intelligent but not self-motivated, perhaps not self
aware]
The other possibility is that perhaps we are not software after all.
The analogy is persuasive, but until we have either figured out in
some detail how we work or succeeded in producing programmed
computers a lot more like us than any so far, it remains a
conjecture. Perhaps my consciousness really is an immaterial soul, or
at least something more accurately described as an immaterial soul
than as a program running on an organic computer. It isn't how I
would bet, but it could still be true.
Chapter XIX: All in Your Mind
Some years ago I gave a public lecture in Italy--over the
telephone from my office in San Jose. From my end it was not a very
satisfactory experiment--too much like talking into a void. A year or
two later I repeated it with better technology. This time I was
sitting in a video-conferencing room. My audience (in the
Netherlands) could see me and I could see them on video screens.
Still not quite real, but a good deal closer.
The next time it might be closer still. Not only do I save on the air
fare, the audience does too. I am at home, so are they. Each of us is
wearing earphones and goggles, facing a small video camera. The
lenses of the goggles are video screens; what I see is not what is in
front of me but what they draw. What they are drawing is a room
filled with people. Each of them is seeing the same room, looking the
other direction--at me, standing at a virtual podium as I deliver my
talk.
Virtual reality not only saves on air fares, it has other advantages
as well. The image from my video camera is processed by my computer
before being sent on to everyone in my audience. That gives me an
opportunity to improve it a little first--replace my bathrobe with a
suit and tie, give me a badly needed shave, remove a decade or so of
aging. My audience, too, looks surprisingly attractive, tidy, and well
dressed. And while, from my point of view, they are evenly
distributed about the hall, each of them has the best seat in the
house. There is no need for my virtual reality and his to be
identical.
Long ago I was given the secret of public lectures--always speak in a
room a little too small for the audience. In virtual reality, it is
automatic. However many people show up, that is the number of seats
in the lecture hall. And for each of them, the lecture hall is custom
designed--gold plated if his taste is sufficiently lavish. In virtual
reality, gold is as cheap as anything else. If you don't believe me,
take a look at a good fantasy video game--Diablo II, say.
Video games are our most familiar form of virtual reality. Staring
through the screen you are looking at a world that exists only in the
computer's memory, represented by a pattern of colored dots on its
screen. In that world, multiple people can and do interact, each at
his own computer. In first person video games, each sees on the
screen what he would be seeing if he were the character he is playing
in the game. In some, the virtual world comes complete with realistic
laws of physics. Myth, so far as I can tell, calculates the
flight of every arrow--if a dwarf throws a hand grenade uphill, it
rolls back. As the technology gets better, we can expect it to move
beyond entertainment. Perhaps I should stay out of airline stocks for
a while.
We already know how to do everything I have described. As computers
get faster and computer screens--including goggle sized
ones--sharper, we will be able to do it better and cheaper. Within a
decade, probably less, we should be able to do this sort of virtual
reality inexpensively at real world resolution--with video good
enough to make the illusion convincing. The audio already is.
However good our screens, this sort of virtual reality suffers from a
serious limitation--it only fools two senses. With a little more work
we might add a third, but smell does not play as large a role in our
perceptions. Touch and taste and the kinesthetic senses that tell us
what our body is doing are a much harder problem. If my computer
screen is good enough the villain may look entirely real, but if I
try to punch him I will be in for an unpleasant surprise.
Our present technology for creating a virtual reality depends on the
brute force approach--using the sensorium, the collection of tools
with which we sense the world around us. Want to hear things? Vibrate
air in the ear. Want to see things? Beam photons at the retina. Doing
the equivalent with the remaining senses is harder--and still leaves
us the problem of coordinating what our body is actually doing in
realspace with what we are seeing, hearing, and feeling it do in
virtual space.
The solution is the form of virtual reality that many of us
experience every night. We call it dreaming. In a dream, when you tell
your arm to move your virtual arm moves, your body (usually) doesn't.
Dreams are not limited to sight and sound.
Suppose we succeed in cracking the dreaming problem--figuring out
enough about how the brain works so that we too can create full sense
illusions and control them with the illusion of controlling our
bodies. We then have deep VR--and a very interesting world.
Tomorrow: The World of Primitive VR
If you play video games, virtual reality--the brute force
version--is already part of your life. As it gets better and cheaper,
one result will be better games. There will be others.
Communication is the obvious application. You still won't be able to
reach out and touch someone, save metaphorically. But seeing and
hearing is better than just hearing. A conference call becomes more
like a meeting when you can see who is saying what to whom and read
the cues embodied in facial expressions and body movements.
Which raises an interesting problem. We all, automatically and
routinely, judge the people around us not only by what they say but
by how they say it--tone of voice, facial expression and the like.
Most people are poor liars--one of the reasons why honesty is the
best policy. Having people believe you are honest while taking
whatever actions best serve your purposes would be an even better
policy--for you, not for those you deal with--but for most of us it
is not a practical option.
There are exceptions--we call them con men. They are people who,
through talent or training, have mastered the ability to divorce what
they are actually thinking and doing from the system of nonverbal
signals, the running monolog about what is going on inside our heads,
that all of us
are continually delivering. Fortunately, not many are really good at
it.
Virtual reality will feel quite a lot like ordinary physical reality.
There will be a strong temptation to carry over to interactions in
virtual reality the habits and strategies we have already developed
for face to face dealings.
My computer can make me look younger. It can also make me look more
honest. Once someone has done an adequate job of deciphering the
language by which we communicate thoughts and emotions from facial
expressions and body postures--for all I know someone already has,
probably someone in the business of training salesmen--we can create
computerized con men. Even if I have no talent for lying, my computer
may.
The flip side is that on the internet nobody knows you are a dog. Or
a woman. Or a twelve year old. Or crippled. In virtual reality, once
we have the real time editing software worked out properly, you can
be anything you can imagine. Homely women can leave their faces
behind, precocious children can be judged by the mental age reflected
in what they say and do, not the physical age reflected in their
faces.
Present-day interaction on the net, like ordinary correspondence, is
a form of virtual reality--a very low bandwidth form. When you argue
with people on Usenet News or in an email group you are projecting a
persona, giving them a mental picture of what sort of person you are.
Some years ago, someone suggested a game for the Newsgroup
rec.org.sca: Have participants write and post physical descriptions
of other participants they had never met. I gained almost nine
inches. In virtual reality I never have to be short again.
Unless, of course, I want to be.
In the modern world, we no longer have to worry much about escaping
predators or running down prey. We no longer have to scratch in the
ground with sharp sticks to grow food. For most of us, "work"
involves little physical exertion. But there is still
play--basketball, soccer, tennis. One objection to video games at
present and one that will be raised to the much better video games
that technology will make possible in the near future is that they
remove one of the few incentives modern people have to exercise.
Observe someone--perhaps yourself--playing an absorbing video game.
Just as with other games, involvement in winning dominates all other
sensations. Long ago I discovered the sign of a first rate game--that
when I finally left the computer to use the bathroom, it was because
I really had to. And lots of players of lots of games have noticed
just how tired their thumb is from pushing the button on the end of
the joystick only after the game is over.
If you are concerned with the problem of exercise, the obvious
solution is to use much bigger joysticks. Combine a video game with
an exercise machine. Working the exercise machine controls what is
happening in the game. Just as with real world athletics, you only
notice how tired you are after you have won--or lost. Primitive
implementations of this idea already exist.
[146]
In my Mark II version, virtual games become better exercise than real
games--because the environment that the computer creates is being
tailored, second by second, to your body's needs.
The game is set in the Pacific during the second world war. You are
controlling an anti-aircraft gun on the Yamato, the world's biggest
battleship, desperately trying to defend it against waves of American
bombers seeking to destroy, by sheer brute force, the glory of the
Japanese navy. You traverse the gun left and right with your arms,
lower or elevate the barrel with foot controls; when you release the
controls it swings back to center. Your strength is physically moving
the gun, so it isn't surprising that it's a lot of work.
After the third wave, the computer controlling the game notices that
you are having trouble swinging the gun rapidly to the left--your
left arm is tiring. The next attack comes from the right. As the
right arm becomes equally tired, more and more of the attacks require
you to adjust the elevation of the gun, shifting the work to your
legs. When your heartbeat reaches the upper boundary of your aerobic
target zone, there is a break in the attack, during which you hear
martial music. As your heartbeat slows, the next wave comes in. Judo
was a lot of fun, and good exercise as well, but art, well done art,
improves on nature.
A sophisticated exercise game is one not entirely obvious way in
which we can use virtual reality. Another is to do dangerous things
while only getting virtually killed. Consider the problem of
engineering in dangerous environments--the bottom of the Mariana
trench, say, almost seven miles below sea level, or the surface of the moon.
One solution is for the operator of the equipment to be only
virtually there. His body is sitting in a safe environment--wearing
goggles, manipulating controls. His point of view, just as in a first
person video game, is the point of view of the machine he is
operating.
In the lunar case, we have a small technical problem--the speed of
light. If the operator is on earth and the machine on the moon there
will be a noticeable lag between when the machine sends him
information and when his response, based on that information, gets
back to the machine. Some of us have been virtually killed by similar
lags in video games, due either to transmission delay or processing
time. In the case of lunar engineering, while the death would be only
virtual for the operator it might be real for the machine--and
putting machines on the moon is not cheap. Perhaps we had better have
the operator on the moon too, or in orbit around it--somewhere safer
than the tunnel he is digging, closer than the earth he came
from.
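For scale, the lag is easy to quantify from the average Earth-Moon distance (about 384,400 km) and the speed of light:

```python
def round_trip_delay_s(distance_km, c_km_per_s=299_792.458):
    """Minimum round-trip signal delay at the speed of light."""
    return 2 * distance_km / c_km_per_s

# Earth to Moon and back: roughly two and a half seconds between the
# machine sending information and the operator's response arriving.
print(f"{round_trip_delay_s(384_400):.2f} seconds")
```

A delay of that size is far worse than the lags that get players killed in networked video games, which is the argument for putting the operator on or near the moon.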
As these examples suggest, virtual reality, even implemented using
the crude technologies we now have, can have interesting uses in the
real world. When we go beyond those technologies, things get
stranger.
Deep VR--Beyond the Dreaming Problem
Suppose we have a fully developed virtual reality. Anyone who
wants it can have a socket at the back of his neck or the wireless
equivalent. By sending signals into that socket we can create a full
sense illusion--anything our senses could have experienced.
[147]
In imagining the world that technology would make possible, a useful
first step is to distinguish between information transactions and
material transactions. An example of an information transaction would
be reading this book or having a conversation. The book is a physical
object, but reading an illusion of a book--with the same words on the
virtual pages--would do just as well. When you hold a conversation,
you are using physical vocal cords to vibrate physical air in order
to transmit your communications and using a physical eardrum to pick
up those vibrations in order to receive the other person's
communications. But that apparatus is merely a means for transmitting
information--what each of you is saying. If what you were really
doing was sending electronic signals to each other that created the
illusion of your voice saying the same words, the same effect would
be achieved.
For a material transaction, consider growing wheat. You could, in
virtual reality, create the illusion of growing wheat--give yourself
the sensory experience of planting, weeding, harvesting. But if you
tried to live on the virtual wheat you grew you would eventually die
of starvation--since it isn't really there, it doesn't have any
calories with which to sustain your real body.
A sufficiently advanced form of virtual reality can provide for all
information transactions. It might assist with some material
transactions--the harvester could be run by an operator located
somewhere else, giving real instructions to a real machine while
using the illusion of being in the machine to help figure out what
instructions to give. The operator's physical presence would be an
illusion, the information he was using real--provided by cameras and
microphones on the harvester. But if you want real transactions to
occur and produce real results--food, houses, or whatever--you have
to really do them in the real world.
Beam Me Up,
Scotty
In Star Trek, people get beamed from one place to another. I
know of no reason to expect it to happen, and if it did I would be
reluctant to use it, since it is not clear whether what it is doing
is moving me or killing me and creating a copy somewhere else that
thinks it is me. But as long as all we are interested in are
information transactions, virtual reality can do the same job much
more easily, without generating any philosophical problems.
Why do I want to go somewhere to visit friends? To see them, to feel
them, to hear them, to do things with them. Unless one of the things
is building a house or planting a garden, and it really has to be
built or planted, the whole visit is an information transaction. With
good VR, my body stays home and my mind does the walking.
If you find this an odd idea, consider a phone call. It too is a
substitute for a visit. VR increases the bandwidth to the point where
the person appears to be present in all the ways he would appear to
be present if he really were. Travel by virtual reality will not be
limited to social calls, any more than the telephone is. It provides a
way for any group of two or more people to get together for any sort
of information transaction they wish to engage in--a meeting, a peace
conference, a trial, a love affair. And since all that is happening
is information moving back and forth over networks--information that
can readily be encrypted--we are back in a world of strong privacy.
Surveillance technology may make everything in realspace public, but
if we are not doing anything in realspace that matters any more, that
isn't a problem.
Future
Fiction
The potential for the entertainment industry is equally
striking. Works of fiction can be experienced fully, just as the
author intended--no imagination required. Whether that is an
improvement can be left to individual judgment. Role playing
games will become a great deal more vivid when you actually get to
see, hear, feel, smell and touch the monsters. Just how vividly you
get to feel the monster's claws tearing you to bits will be one of
the options to decide when you are selecting your preferences--I may
go for the low end of that one.
One form of virtual entertainment will be a work of fiction, a
synthesized experience. Another may be a tape recording. You too can
climb Everest, plumb the depths of the sea. If there are real wars
going on, a few of the soldiers may moonlight in cinema
verité, everything that happens to them recorded complete.
Pornography will finally become serious competition for sex.
These are only the crudest and most obvious applications of the
technology, creating as an illusion the sort of experiences that
already exist in reality. Consider in contrast a symphony. It
corresponds to nothing in nature. The composer has taken one sense,
hearing, and used it to create an aesthetic experience out of his own
head. It will be interesting to see what happens when he can use all
the senses.
One issue in designing fiction to be experienced in deep virtual
reality is how strong we want the suspension of disbelief to be.
While the story is happening, do you know it is a story? Is there a
little red light glowing at the edge of your peripheral vision to
tell you that none of it is real? Might the experience be more
moving, more profound, better art, if while it is happening you think
it is real? Just like a dream.
What
matters
I hope you find this disturbing as well as interesting. If
not, consider a fully implemented version.
Virtual reality cannot make things, so somewhere in the world food is
being grown, coal mined--perhaps by people, perhaps by robots,
perhaps by machines operated by people somewhere else. But human
beings do not need very much stuff to stay alive. If you don't
believe that, do some quick calculations for yourself. Price the
cheapest bulk flour, oil, lentils you can find. For each, calculate
how much 2000 calories a day for a year would cost. You now have a
rough estimate of the lowest cost diet high in carbohydrate, fat or
protein, as you prefer. Just to be safe, throw in a big jar of
vitamins. It won't taste very good--but that's not a problem. Eating
is a material transaction, tasting an informational transaction. Tape
record ten thousand meals from the world's best restaurants and your
lentils are filet mignon, sushi, ice cream sundaes.
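The arithmetic is easy to sketch. The prices and calorie densities below are illustrative assumptions, not quotes from any actual store; plug in whatever your local bulk prices are:

```python
# Rough annual cost of living on a single cheap staple.
# Prices and calorie densities are illustrative assumptions.
staples = {
    # name: (assumed dollars per pound, approximate calories per pound)
    "flour":   (0.25, 1650),   # carbohydrate
    "oil":     (0.80, 4000),   # fat
    "lentils": (0.60, 1600),   # protein
}

DAILY_CALORIES = 2000
DAYS_PER_YEAR = 365

for name, (price_per_lb, cal_per_lb) in staples.items():
    pounds = DAILY_CALORIES * DAYS_PER_YEAR / cal_per_lb
    cost = pounds * price_per_lb
    print(f"{name:8s} {pounds:5.0f} lb/year  ${cost:7.2f}/year")
```

On these assumed prices, each staple comes out at a few hundred dollars a year or less, which is the point of the exercise.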
Much the same holds for other material requirements. My body only
occupies five or ten cubic feet of space. With a mind free to rove
the virtual world, who needs a living room--or even a double bed?
Viewed in realspace, it does not look like much of a world. Everyone
is consuming the cheapest food that will keep his body in good
condition, living in the human equivalent of coin operated airport
storage, exercising by moving against resistance machines--perhaps as
part of virtual reality games, perhaps under automatic control while
his mind is somewhere else.
To the people living in it, this world is a paradise. All women are
beautiful--and enough are willing. All men are handsome. Everyone
lives in a mansion that he can redecorate at will--gold plated if he
wishes.
[148]
Anyone, anywhere, any experience in storage, any life that can be
created as an illusion, is an instant away. Eat all you want of
whatever you like and never put on a pound.
Which version is true--slum or paradise? Part of the answer depends
on what matters. If all that matters is sensation, what humans
perceive, it really is paradise, even if the fact is not obvious to a
superficial inspection.
If what matters is what really happens, the situation is more
complicated. Having someone read a book I wrote, enjoy it, and be
persuaded by my ideas, pleases me just as much if he reads it in
virtual reality. But what about only thinking someone read my book?
What if I wake up from a long lifetime as a successful author--or
basketball star, or opera singer, or Casanova--to discover it was all
a dream? Is that just as good as the real thing? Is it all right as
long as I die before I wake up?
The philosopher Robert Nozick raised the question in the form of an
imaginary experience machine. Plug someone in and he will have
experiences just as vivid and just as convincing as in real life--a
lifetime of them.
Imagine, as Nozick does, that the owner of the experience machine
somehow knows the life you are going to live. He offers you a
slightly improved version. Plug into his machine for an imaginary
life in which your babies cry a little less, your salary is a little
higher, your career in a firm with a little more status than in the
real life you would have led. Assuming you believe him, do you take
the deal? Do you trade a real life for a fictional life? Is what
ultimately matters rearing children, making a career, planting fruit
trees, writing books--or is what ultimately matters the feelings you
would have as a result of doing all of those things?
You will have to decide for yourself. I wouldn't touch the thing with
a ten foot pole.
Chapter
XX: The Final Frontier
In some respects, the future has been a great disappointment.
Back when I was first reading science fiction, space travel was
almost a defining characteristic of the genre--interplanetary at the
least, with luck interstellar. In many other respects we are well
ahead of schedule--computers for example are a great deal smaller
than most authors expected and are used for a much wider variety of
everyday purposes, such as word processing and video games. But
activity in space has been limited mostly to near earth orbit--our
back yard. Even scientific activity has not gotten humans past a very
brief visit to the moon. We have sent a few small machines a little
farther, and that is about it.
The reason for our failure is not entirely clear. One possibility is
that it is due to the control of space exploration by
governments--itself in part a result of the obvious military
applications. Another is that the problem was harder than the earlier
writers thought. What is puzzling about that explanation is that we
have already done the hardest part--getting into space. From there
the rest, at least within our solar system, should be relatively
easy. Perhaps, after a brief pause for rest and refreshment, it will
be. Perhaps not. The first step in thinking about the question is to
consider our starting point.
Speaking
From the Bottom of a Well
In one of Poul Anderson's more improbable science fiction
stories,
[149] a
man and a crow successfully transport themselves from one asteroid to
another in a spaceship powered by several kegs of beer. From what I
know of the author, he probably did the arithmetic first to make sure
the thing would fly. It wouldn't have gotten far on earth, but
getting around the asteroid belt is in some ways a much easier
problem.
Mankind's present home, considered from the standpoint of anyone
wanting to get to space, is inconveniently located at the bottom of a
very deep well, the gravity field of a fair sized planet. Getting out
of that well, lifting something from the surface of earth into space,
takes a lot of work. The price, the charge for satellite launches and
similar services, is measured in thousands of dollars a pound.
The science fiction writers of the fifties and sixties took it for
granted that the point of getting off earth was to get to Mars, or
Venus, or some other planet. At some point between then and now it
occurred to someone that that made about as much sense as climbing,
with enormous effort, out of one well only to jump down into another.
If what you want is space, planets are traps. Having gotten out of
one, why go back into another?
There are at least two obvious places to live that are not at the
bottom of wells. One is an orbital habitat, a giant spaceship without
engines, permanently located in orbit around the earth (or perhaps
the sun) with people living on the inside. The ecology of such a
miniature world, like the ecology of the world we live in, would
consist of closed cycles powered by the sun. Recycling on an almost
total scale.
A solar orbit puts you a long way from home--unless it is very close
to the earth's, in which case it is potentially made unstable by the
earth's gravity.
[150]
Orbiting around the earth, perhaps far out to avoid the clutter of
communication satellites and orbital trash, looks more attractive.
Unfortunately, although such an orbit does not decay quite as fast in
the real world as on Star Trek, it is, in the long run, unstable.
The solutions are the Lagrangian points 4 and 5, L4 and L5 for
short. They are locations in orbit around the earth 60° ahead
of and behind the moon. As Joseph-Louis Lagrange proved in 1772, both L5
and L4 are stable equilibria. A satellite, or a space habitat, placed
at one of them stays there. Like a ball bearing at the bottom of a
bowl, if something pushes it a little away from the center, it moves
back.
One difficulty in building a space habitat, at L5 or elsewhere, is
finding something to build it out of--at five thousand dollars a
pound, building materials from earth are a bit pricy for casual
living. That problem suggests an alternative location--the asteroid
belt, which consists of a large number of chunks of rock located
between the orbits of Mars and Jupiter. There are additional
asteroids outside of the belt, some on orbits that come quite close
to that of earth--raising problems that we will discuss later in the
chapter.
Asteroids are small enough so that their gravity is negligible. Many
are large enough to provide very large quantities of building
material. One way of using that matter would be to colonize an
asteroid, perhaps drilling tunnels in its interior. An alternative,
for those who prefer a shorter commute to the neighborhood of the
home planet, is to mine an asteroid and ship what you get back to
somewhere near earth--L5, say. It is a long way from the asteroid
belt to the earth, but transportation is a lot easier if you aren't
starting at the bottom of a well. Delivering material from the
asteroid belt might take months, even years, but the forces required
are much smaller than those needed to lift the same amount from
earth. If you are not in too much of a hurry you could even try
beer.
Consider a future in which some significant number of people are
permanent residents of space--habitats, asteroids, perhaps fleets of
mining ships. That future raises a variety of interesting issues.
They include the obvious political question--are such habitats the
equivalent of ships under a national flag, independent states, or
something else? They include some interesting legal and economic
questions--how, for example, does one define property rights to an
orbit (already a problem for communication satellites), chunks of
matter floating through space, or the like?
Why would people choose to live in space? So far the only answer I
have offered is that it is much easier to get to space from there.
Some readers may be reminded of the story of the man who explained to
his friends that he played golf to stay fit, and when asked what he
was staying fit for answered "golf." There are better answers.
One possibility in the near future is space manufacturing. An
environment with zero gravity and an unlimited supply of almost
perfect vacuum has real advantages for some forms of production.
Going a little farther forward, asteroids provide a very large and
potentially very inexpensive source of raw materials. While their
most obvious use is to build things in space, that does not have to
be their only use. Getting things down a well is a lot less work than
getting them up.
A second answer is that space provides an enormous source of power,
almost all of which is presently going to waste. Until quite
recently, everything our species has done has been fueled, directly
or indirectly, by the sunlight that falls on earth; almost all of it
still is.
[151]
The sunlight that falls on earth is less than one billionth of the
total put out by the sun. The rest is potentially available to a
sufficiently developed spacefaring civilization.
A third answer is that, if earth gets crowded, there is a lot of room
off earth. By mining the asteroid belt, we could build structures
that would provide living space for enormously more people than
presently exist.
The final answer, to which we will return in the next chapter, is
that we may not want to put all our eggs in one basket. It is
possible, perhaps even probable, that life on earth will get better
and better over the next few decades. But it is far from certain. One
can imagine a range of possible catastrophes, from grey goo to global
government, that would make somewhere else to be an attractive
option. There is a lot of space in space.
At present, the biggest barrier to the future I have been sketching
is the cost of getting off earth. While a space civilization, once
started, might be self sustaining, it requires a big start. And even
if it can sustain itself, if the cost of getting there is on the
order of five thousand dollars a pound, not many of us are likely to
go. Which raises the question of whether there might be a better way
of going up than perched on top of a giant firework.
Take
A Very Long Rope ...
"Artsutanov proposed to use the initial cable to multiply
itself, in a sort of boot-strap operation, until it was strengthened
a thousand fold. Then, he calculated, it would be able to handle 500
tons an hour or 12000 tons a day. When you consider that this is
roughly equivalent to one Shuttle flight every minute, you will
appreciate that Comrade Artsutanov is not thinking on quite the same
scale as NASA. Yet if one extrapolates from Lindbergh to the state of
transatlantic air traffic 50 yr later, dare we say that he is
over-optimistic? It is doubtless a pure coincidence, but the system
Artsutanov envisages could just about cope with the current daily
increase in the world population, allowing the usual 22 kg of baggage
per emigrant...." Arthur C. Clarke
For a really efficient form of transport, consider the humble
elevator. With a suitable design, lifting the elevator itself takes
almost no energy, since as the box goes up the counterweight goes
down. Energy consumption is reduced to something close to its
absolute minimum--the energy required to lift the passengers from one
point to a higher point. And if your design is good enough, you can
get some of that energy back when they come down again. The
application of this approach to space transport--like the less
efficient approach we currently use--is due to a Russian.
[152]
A multistage rocket was first proposed by Tsiolkovsky in 1895. The
space elevator was first proposed by
Yuri
Artsutanov, a Leningrad engineer, in 1960--and has been independently
invented about half a dozen times since.
You start with a satellite in geosynchronous orbit, where it goes
around the earth once a day. Locate the orbit over the equator with
the satellite moving in the direction of the earth's rotation. From
the viewpoint of someone on the ground, the satellite is standing
still--going around the earth at exactly the same rate that the earth
rotates.
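The altitude of that orbit follows from Kepler's third law: the radius of a circular orbit with period T is r = (GM·T²/4π²)^(1/3). A quick check, using standard values for the earth's gravitational parameter and the sidereal day:

```python
import math

# Radius of a circular orbit with a given period: r = (GM * T^2 / 4pi^2)^(1/3)
GM_EARTH = 3.986004e14      # m^3/s^2, earth's standard gravitational parameter
SIDEREAL_DAY = 86164.1      # s, one full rotation of the earth
EARTH_RADIUS = 6.378e6      # m, equatorial radius

r = (GM_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r - EARTH_RADIUS) / 1000
print(f"geosynchronous altitude: {altitude_km:,.0f} km")
```

The result, about thirty-six thousand kilometers, is the length of cable the elevator has to span.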
Let out two cables from this satellite, one going up, one down. For
the one going up, centrifugal force more than balances gravity, so it
tries to pull the satellite up. For the one going down, gravity more
than balances centrifugal force, so it pulls the other way. Let out
your cables at the right speed and the two effects exactly balance.
Continue letting out the cables until the lower one touches the
ground. Attach it to a convenient island. Run an elevator up it. You
now have a way of getting into space at dollars a pound instead of
thousands of dollars a pound.
A space elevator has a number of odd and interesting characteristics,
some of which we will get to shortly. Unfortunately, its
construction raises one very serious technical problem--finding
something strong enough and light enough to make a very long
rope.
Consider a steel cable hanging vertically. If it is longer than about
fifty kilometers, its weight exceeds its strength and it breaks.
Making the cable thicker does not help, since each time you double
its strength you also double its weight. What you need is a material
that is stronger for its weight. Kevlar, used for purposes that
include bullet proof garments, is considerably stronger for its
weight than steel. A Kevlar cable can get to about two hundred
kilometers before it breaks under its own weight.
Geosynchronous orbit is thirty-five thousand kilometers up. Kevlar
isn't going to do it.
At first glance, it looks as though we need a material almost two
hundred times stronger for its weight than Kevlar, but the situation
is not quite that bad. As you go up the cable, you are getting
farther from the earth--gravity is getting weaker and, since the
cable is going around with the satellite (and the earth), centrifugal
force is getting stronger. When you actually get to the satellite,
the two balance--long before that, their difference has gotten small.
So it is only the bottom end of the cable that will be really heavy.
Furthermore, the lower you go on the cable the less weight is below
it to be held up, so you can make a cable longer before it breaks by
tapering it. Building a space elevator requires something quite a lot
stronger for its weight than Kevlar, but not two hundred times
stronger.
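The "breaking length" of an untapered cable in uniform gravity is simply its tensile strength divided by its weight density: L = strength/(density × g). The material figures below are rough assumed values (real numbers vary considerably by grade), chosen to reproduce roughly the figures quoted above:

```python
# Breaking length of an untapered cable hanging in uniform gravity:
# L = tensile_strength / (density * g). Material values are rough assumptions.
g = 9.81  # m/s^2

materials = {
    # name: (tensile strength in Pa, density in kg/m^3)
    "steel wire": (3.9e9, 7850),   # high-tensile drawn wire
    "Kevlar":     (3.6e9, 1440),
    "buckytube":  (60e9,  1300),   # theoretical estimate
}

for name, (strength, density) in materials.items():
    breaking_km = strength / (density * g) / 1000
    print(f"{name:10s} ~{breaking_km:,.0f} km")
```

This gives roughly fifty kilometers for steel and two hundred and fifty for Kevlar. Even the buckytube figure, thousands of kilometers, falls short of geosynchronous altitude for an untapered cable; tapering, plus the weakening of effective gravity with height, is what closes the gap.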
Such materials exist. Microscopic carbon fibers appear to have the
necessary properties. So, according to theoretical calculations,
would buckytubes--long fibers of carbon atoms bonded to each other.
Neither is in industrial production in the necessary sizes just now,
but that may change. One nice feature of carbon--aside from its
ability to make very strong materials--is that some asteroids are in
large part made out of it. Move one of them into orbit, equip it with
a factory capable of turning carbon into superstrong cable, ... .
When you are done, use what is left of the asteroid for a
counterweight, attached to the cable that goes from the satellite
away from the earth--that lets you hold the lower cable up with a
much shorter upper cable. Nobody is taking bids on the project just
at the moment, but in principle it should be doable.
[153]
Consider a cargo container moving up such an elevator. At the bottom,
its motors have to lift its full weight. As it gets higher, gravity
gets weaker, centrifugal force gets stronger, so it becomes easier
and easier to move it up. When it reaches the satellite at
geosynchronous orbit, the two exactly balance--inside the container,
you float. Suppose you keep going, following the upper cable into
space. Now centrifugal force more than balances gravity, so with no
motor and no brakes you go faster and faster.
One possibility is to let the process work--with careful timing--and
use the accumulated velocity to launch you into space in the
direction you want to go. In principle, it would be possible to build
space elevators on a number of different planets and use them instead
of rockets for interplanetary transport. Think of it as a giant game
of catch. You get launched from earth by letting go of its space
elevator at just the right time and place. As you approach Mars you
adjust your trajectory a little--we probably still need rockets for
fine tuning the system--so that you match velocities with the space
elevator that is whipping around Mars. Let go of that at the right
time, after moving a suitable distance in or out, and you are on your
way to the asteroid belt, or perhaps Jupiter--although building a
space elevator on Jupiter might raise problems even for the best
cable nanotechnology could spin. It's a dizzying thought.
An alternative is to equip your cargo capsule with brakes. Ideally,
they are regenerative brakes, an idea already implemented in electric
and hybrid cars. A regenerative brake is an electric generator that
converts the kinetic energy of a car into electricity--slowing the
car down and recharging its batteries. On the space elevator, the
electricity generated by the brakes keeping one cargo capsule from
taking off for Mars could be used to lift the next one from earth to
the satellite.
It may occur to some readers that there is a problem with this--where
is all this energy, used to fling spaceships around the solar system
or lift capsules from earth, coming from? The answer is that it is
coming from the rotation of the earth. If you work out the physics of
a space elevator carefully, it turns out that every time you lift a
load up the elevator it is being accelerated in the direction of the
earth's rotation, since the higher it is the faster it has to move in
order to circle the earth once a day, which is what the elevator and
the piece of earth it is attached to are doing. For every action
there is a reaction--conservation of angular momentum implies that
the earth is slowing down. Fortunately, the earth has a lot of
rotational energy, being very much larger than either us or the
things we are likely to send up the elevator, so it would be a very
long time before the effect became significant.
"I have not attempted to calculate how much mass one could shoot off
into space before the astronomers complained that their atomic clocks
were running fast." Arthur C. Clarke
The space elevator I have described cannot be built with presently
available materials, although it could be built with materials we can
describe and may some day be able to produce in suitable quantity.
That raises an obvious question--are there modifications of the
design that reduce the problem enough to make it soluble, either now
or soon? At least two have been suggested.
One of them is called a skyhook. It was proposed in the U.S. by Hans
Moravec in 1977--but Artsutanov had published the idea back in 1969.
Here is how it works:
Start, this time, with a satellite much closer to the earth. Again
release two cables, one up, one down. Since this satellite is not in
geosynchronous orbit, it is moving relative to the surface of the
earth. That makes it difficult to attach the bottom end of the
cable--so we don't. Instead we rotate the cable--one end below the
satellite, one above--like two spokes of an enormous wheel rolling
around the earth.
The satellite is moving around the globe, but when one end of the
cable is at its lowest point it is standing still, since the cable's
motion relative to the satellite just cancels the satellite's motion
relative to the earth. If that sounds odd, consider that when you are
driving, the bottom of your tire--the part touching the road--is
standing still relative to the pavement, since the rotation of the
wheel moves it backwards relative to the car just as fast as the car
is moving forwards. If that isn't true the car is skidding. The
skyhook applies the same principle scaled up a bit.
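"Scaled up a bit" can be quantified. For a hypothetical skyhook whose center rides in a 600 km orbit (an assumed altitude for illustration), the tip must rotate fast enough to cancel the satellite's velocity relative to the ground:

```python
import math

# Tip speed needed for a rotating skyhook whose lower end touches down
# at zero ground-relative velocity. The orbit altitude is an assumed example.
GM_EARTH = 3.986004e14    # m^3/s^2, earth's gravitational parameter
EARTH_RADIUS = 6.378e6    # m
OMEGA = 7.292e-5          # rad/s, earth's rotation rate

altitude = 600e3          # m, assumed altitude of the skyhook's center
r = EARTH_RADIUS + altitude

v_orbit = math.sqrt(GM_EARTH / r)     # circular orbital speed at that radius
v_ground = OMEGA * EARTH_RADIUS       # equatorial rotation speed of the ground
v_tip = v_orbit - v_ground            # speed the rotating tip must cancel
print(f"required tip speed: {v_tip/1000:.1f} km/s")
```

The tip has to swing at about seven kilometers a second, which is why the cable's strength, not the concept, is the binding constraint.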
Seen from earth, the end of the cable comes vertically down from
space, hesitates a moment at the bottom of its trajectory, then goes
back up. To use it for space transport, you put your cargo capsule on
an airplane, fly up to where the cable is going to be, hook on just
as the bottom of the cable stops going down and starts going up. Its
big advantage over the space elevator is that a much lower orbit
means a much shorter cable, so you can come a lot closer to building
it with presently available materials. The physics works, but I'm not
sure the Civil Aeronautics Board is going to approve it for carrying
passengers anytime soon.
A version that might be workable with current technology has been
proposed by researchers at Lockheed Martin's Skunk Works, source of
quite a lot of past aeronautical innovation. It starts with a simple
observation: getting something to orbit is much more than twice as
hard as getting it half way to orbit. If you have two entirely
different technologies for putting something in orbit, why not let
each of them do half the job?
The Skunk Works proposal uses a short skyhook, reaching from a low
orbit satellite to a point above the atmosphere. It combines it with
a spaceplane--a cross between an airplane and the space shuttle,
capable of taking off from an ordinary airport and lifting its cargo
a good deal of the way, but not all the way, to orbit. The spaceplane
takes the cargo capsule to the skyhook, the skyhook takes it the rest
of the way. The engineers who came up with the design believe that
it could be built today and would bring the cost of lifting material
into space down to about five hundred and fifty dollars a pound.
That's quite a lot more than the cost estimated for a space elevator
that goes all the way down, but about a tenth the cost of using a
rocket.
Concentrating
the Mind: The Problem of Near Earth Objects
"Nothing concentrates a man's mind as the
prospect of being hanged in the morning."
Samuel Johnson
A little less than a century ago--in 1908--Russia was hit
with a fifteen megaton airburst. Fortunately, the target was not
Moscow but a Siberian swamp. The explosion leveled trees over an area
about half the size of the state of Rhode Island. While there is
still considerable uncertainty as to precisely what the Tunguska
event was, most researchers agree that it was something from
space--perhaps a small asteroid or part of a comet--that hit the
earth. A rough estimate of its diameter is about 60 meters. While it
was, so far as we know, the largest such event in recorded history,
there is geological evidence of much larger strikes. One, occurring
about 65 million years ago, left a crater 180 km across and a
possible explanation for the brief period of mass extinctions that
eliminated the dinosaurs.
2002 CU11 is an asteroid in an orbit that will bring it near the
earth. Its estimated diameter is 730 meters--more than ten times
that of the (conjectural) Tunguska meteor. Since volume goes as the
cube of diameter, that means that it would have more than a thousand
times the mass and do a comparably greater amount of damage--quite a
lot more than the largest H-bomb ever tested. NASA's current estimate
is that there is about a one in a hundred thousand chance that it
will strike the earth in 2049.
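The cube-law comparison is easy to check, using the rough Tunguska figures quoted above:

```python
# Mass (and, at equal impact speed, energy) scales as the cube of diameter.
TUNGUSKA_DIAMETER_M = 60.0    # rough estimate of the Tunguska object
TUNGUSKA_YIELD_MT = 15.0      # megatons, rough estimate of its airburst

cu11_diameter_m = 730.0       # estimated diameter of 2002 CU11
ratio = (cu11_diameter_m / TUNGUSKA_DIAMETER_M) ** 3
print(f"mass ratio: ~{ratio:,.0f}x Tunguska")
print(f"implied yield: ~{ratio * TUNGUSKA_YIELD_MT:,.0f} megatons")
```

The implied yield, tens of thousands of megatons, dwarfs the roughly fifty megatons of the largest bomb ever tested.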
2000 SG344 is a much smaller rock--about forty meters. NASA estimates
that it has about a one in five hundred chance of hitting the earth
sometime between 2068 and 2101. Even a rock that small would produce
an explosion very much more powerful than the bomb dropped on
Hiroshima.
The current estimate is that there are about a thousand near earth
objects of 1 km diameter or above, and a much larger number of
smaller ones. We think we have spotted more than half of the big
ones--none of which appear to be on a collision course for the earth.
I started with 2002 CU11 because it is the largest known asteroid to
have an impact probability estimated at above one in a million.
The best guess at the moment is that really big asteroids--2 km and
over--hit the earth at a rate of about one or two every million
years. That makes the odds that such a strike will occur during one
person's lifespan about one in ten thousand. Smaller strikes are much
more common--one that we know of in the megaton range, making it
comparable to a hydrogen bomb, in the last century.
The odds of a big strike are low, but given how much damage it could
do it is still worth worrying about. The odds of a small strike,
which could do significant damage if it happened to hit a populated
area or the sea near a populated coast, are larger. What can we
do?
The first step is to see if anything is coming our way. NASA has been
working at it--which is why I can quote sizes and probabilities for
known near earth objects. Since the objects are moving in orbits
determined by the laws of physics, once we have spotted one of them
several times we can make a fairly accurate projection of where it
will be for many years into the future. One particularly well
observed asteroid,
[154]
a little more than a kilometer in diameter, is expected to make a
close approach to earth on March 16th of the year 2880.
Suppose we spot an asteroid coming our way. If it is about to hit,
there is not much we can do, other than trying to get well above sea
level on the theory that the odds are in favor of it hitting the sea
and making a very large splash. If, on the other hand, we spot it now
but the estimated impact is ten, fifty, or a hundred years from now
we may be able to prevent it.
The obvious way is to nudge it out of our way. Moving a large
asteroid is hard, but with a decade or more to push you can disturb
its orbit at least a little--and even a small change in the orbit,
acting over a long time, can turn a hit into a near miss.
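A first-order sketch of why a small change is enough: a velocity change delta-v, acting for time t, shifts the asteroid's position by roughly delta-v times t. (In reality orbital mechanics amplifies an along-track nudge, so this understates the effect.) The nudge size and lead time below are illustrative assumptions:

```python
# A tiny velocity change, given enough lead time, moves the predicted
# impact point by more than the radius of the earth.
SECONDS_PER_YEAR = 3.156e7
EARTH_RADIUS_KM = 6371.0

delta_v = 0.02            # m/s: a 2 cm/s nudge (illustrative)
lead_time_years = 20      # time between the nudge and the predicted hit

offset_km = delta_v * lead_time_years * SECONDS_PER_YEAR / 1000.0
print(f"{offset_km:,.0f} km, or {offset_km / EARTH_RADIUS_KM:.1f} earth radii")
```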
One way of doing it would be to land a craft on the asteroid,
equipped, perhaps, with a small nuclear reactor for power and suitable
machinery. Set up the reactor so that its power is vaporizing rock on
the surface of the asteroid and blowing it off into space--gently pushing
the asteroid in the other direction. If you have done your
calculations correctly, you are altering its orbit to miss earth.
A less elegant solution, but one that uses off-the-shelf hardware
currently available in excess supply, is to nuke it. Explode a
nuclear--better a thermonuclear--bomb on, slightly under, or slightly
above the surface of the asteroid. Exploded under the surface it
blows chunks of the asteroid in one direction and moves the rest in
the other. On or near the surface it vaporizes some of the surface
and drives that in one direction, giving the asteroid a brief but
very hard shove the other way. Calculations suggest that, for an
asteroid with a diameter of a kilometer or so spotted a decade or
more before it hits us, such an approach might do the job. One would,
of course, want to worry a bit about the risk of breaking up the
asteroid and sending some of the pieces in the wrong direction--in a
path that eventually intersects ours. That's one argument against
setting off the bomb inside the asteroid.
Ad Astra
We are currently active in earth's back yard, putting up
communication satellites, spies in the sky, and the like. I have just
suggested some ways of taking the next step--lowering the cost of
getting off earth by enough to make it possible to establish
substantial human populations in space, perhaps living in space
habitats or suitably modified asteroids. And I have offered one
potentially large reason for doing so. If we are in space in
reasonable numbers, it should be a lot easier to spot large objects
that might hit earth and do something about them.
The step after that is much harder, because the stars are much
farther away than the planets. Current physics holds that nothing can
move faster than the speed of light. If that remains true, trips to
other stars are likely to take a long time--at least years, probably
decades, possibly centuries. So we might as well start thinking about
them now.
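Some illustrative numbers: 4.37 light-years is the distance to Alpha Centauri, the nearest star system, and trip time (ignoring the time spent accelerating and decelerating) is just distance over speed.

```python
# Trip time to the nearest star system at various fractions of
# lightspeed, ignoring acceleration and deceleration.
distance_ly = 4.37        # light-years to Alpha Centauri

for fraction_of_c in (0.01, 0.1, 0.5):
    years = distance_ly / fraction_of_c
    print(f"At {fraction_of_c:.0%} of lightspeed: {years:.0f} years")
```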
Earlier chapters have provided three solutions to the problem of
keeping the crew of an interstellar expedition alive long enough to
get somewhere. One is life extension. Another is cryonic suspension.
A third is to have the ship crewed by programmed
computers--artificial intelligences. If someone gets bored, he can
always save himself to hard disk--or whatever the equivalent then
is--and shut down. After, of course, reminding another A.I. to reboot
him when they arrive.
What about propulsion? Getting a starship moving at a significant
fraction of the speed of light requires something considerably better
than current rockets. A number of proposals have been made and
analyzed. One of my favorites is a light sail.
[155]
We start with a form of propulsion proposed some decades back for
interplanetary flight and currently being experimented with--sails.
There is no wind in the vacuum of space,
[156]
but there is quite a lot of light--and light has pressure. A solar
sail is a very thin film of reflective material, probably with an
area of many square kilometers, perhaps many thousands of square
kilometers. The ship attached to the sail controls its angle to the
sun, rather as an ordinary sail is controlled on earth.
The sunlight is all going one way--away from the sun. Its pressure
can be used to sail away from the sun or, to some degree, sideways by
angling the sail. But how do you get back home? The answer is
gravity--the sun's gravity. Just as an ordinary sailboat combines the
pressure of its keel against the water and the pressure of the wind
against its sail, a solar sailboat combines the pressure of light
with the pull of solar gravity.
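How hard does light push? A perfectly reflecting sail facing the sun feels a force of 2SA/c, where S is the solar flux (about 1361 W/m² at earth's distance), A the sail area, and c the speed of light. The sail area and craft mass below are my illustrative assumptions:

```python
# Force of sunlight on a perfectly reflecting sail: F = 2 * S * A / c.
SOLAR_FLUX = 1361.0       # W/m^2 at earth's distance from the sun
C = 2.998e8               # speed of light, m/s

area = 1.0e6              # m^2: a sail one kilometer on a side (assumed)
mass = 1000.0             # kg: sail plus ship (assumed)

force = 2 * SOLAR_FLUX * area / C        # newtons
accel = force / mass                     # m/s^2
print(f"Force {force:.1f} N; speed gained per day {accel * 86400:.0f} m/s")
```

The push is tiny at any instant, but it costs no fuel and never stops, which is what makes it interesting.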
A solar sail provides a form of propulsion that requires no fuel. But
the farther you are from the sun, the less there is to push you--and
an interstellar voyage will take you far enough so that the sun
becomes just one more star. The solution is to provide your own
sunlight. Build a very powerful laser somewhere in the solar system,
aim it at the sail of your interstellar spaceship, and blow it across
space. The ship goes; the power source remains behind.
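The catch is the size of the laser. A reflected beam of power P delivers force 2P/c, so sustaining even a gentle acceleration on a sizable ship takes gigawatts; the ship mass and acceleration below are illustrative assumptions, not figures from any proposal:

```python
# Laser power needed to push a sail-ship: F = 2 * P / c, so P = F * c / 2.
C = 2.998e8               # speed of light, m/s

ship_mass = 10_000.0      # kg (assumed)
accel = 0.01              # m/s^2, about a thousandth of a g (assumed)

force = ship_mass * accel            # newtons of thrust required
power = force * C / 2.0              # watts of perfectly aimed laser
print(f"{power / 1e9:.0f} gigawatts, sustained for years")
```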
A solar sail backed by a very large laser cannon is an elegant
solution to the problem of getting to the stars, but there is one
problem--stopping. Unless the star you are going to is also equipped
with a laser cannon, you are flying a ship without brakes.
My favorite solution was offered by Robert Forward.
[157]
His ship has two solar sails, a circle inside a larger circle. When
you approach the target system you cut loose the outer ring sail and
angle everything so that the laser beam misses the sail still
attached to the ship, hits the other, bounces off it, and is
reflected back into the first sail. The detached sail accelerates
into space, driven by the beam, while the spaceship is slowed by the
reflected beam hitting the sail still attached.
Before the second ship arrives, the first builds a second laser
cannon to provide brakes. Nobody could expect a maneuver that
complicated to work twice. Once you have a laser at each end of
things, traveling back and forth gets a lot easier.
Fifty Years After They Stop Laughing
How likely are any of these things to happen? Nanotechnology
would make a space elevator technically possible, but there could
still be a lot of political problems getting a project on that scale
built. Whether those problems will prove insurmountable depends on
the climate of opinion thirty or forty years from now, which is hard
to predict. Even without a space elevator, nanotechnology should make
possible much stronger and lighter materials, which would strikingly
lower launch costs--perhaps by enough to establish a real human
presence in space. Defense against near earth objects is one obvious
reason to do it--and a reason that could become urgent if we spot
something big on a collision course.
Interstellar travel is a harder project. It may happen, and it is
interesting to think about, but it is very unlikely that any human
will arrive at another star any time in the next fifty years, which
is about as far forward as we have any reasonable hope of predicting
future technology. If it does happen, Forward's magnificent kludge,
the sail within a sail, is as likely as anything else.
"I'll adapt the reply that Arthur Kantrowitz gave, when someone asked
a similar question about his laser propulsion system. The Space
Elevator will be built about 50 years after everyone stops laughing."
Arthur C. Clarke
[1]
CALEA–add link.
[2]
Square brackets, as here, indicate notes to myself for future
revision.
[3]
Unless two of them are being used on the same network; some versions
of Office refuse to run if they can see another copy with the same
serial number.
[4]
Link to info on expansion to criminalize not-for-profit
infringement.
[5]
Link to cases.
[6]
W.F.B. Let Us Talk of Many Things, Forum, 2000. pp.
xxii-xxiii.
[7]
Link to info on News reading software, Google, other Usenet
stuff.
[8]
You can get that information already, by using Google to search for
pages that link to a particular page. That is possible because Google
has already indexed the entire web and so has a complete list of
links–readable from either end. A back link browser would use
such an index to locate back links to display.
[9]
Link to Baby M case.
[10]
District court of California, S.F. 1998. 72 Cal. Rptr. 2d 280, 293
(Ct. App. 1998).
link to some of the other cases.
http://www.surrogacy.com/legals/jaycee/jaycee.html
Johnson v. Calvert (1993) 5 Cal.4th
84, 94-95, 19 Cal.Rptr.2d 494, 851 P.2d 776 (in support of intent
based parentage)
Marjorie Maguire Shultz, Reproductive
Technology and Intent-Based Parenthood: An Opportunity for
Gender-Neutrality (1990) Wis. L. Rev. 297, 343
[11]
http://wg524.human.waseda.ac.jp/project2000/theme1-2/CaseStudy2.html
[12]
Donaldson v. Van de Kamp, 4 Cal.
Rptr. 2d 59 (Cal. Ct. App. 1992); see Miles Corwin, “Tumor
Victim Loses Bid to Freeze Head Before Death,” L.A. Times,
Sept. 15, 1990, at A28; Cynthia Gorney, “Cryonics and Suicide:
Avoiding 'the Slippery Slope,' “ Washington Post, May 1, 1990,
at D6.
[13]
link to coma case reference, precise time of death inheritance
case
[14]
Link to evidence on adultery rates in humans and birds
[15]
Link to longer discussion of this in Price Theory.
[16]
Matthew Prior, "An English Padlock," link to a webbed text if it
exists.
[17]
Lee Silver, Reinventing Eden, pp. ???
[18]
For a fictional portrayal of the problem, see Bruce Sterling, Holy
Fire, Bantam 1997.
[19]
http://www1.ics.uci.edu/~ics54/doc/security/pkhistory.html. Explain
the distinction between Diffie-Hellman key exchange and RSA public key
encryption.
[20]
Link to RSA
[21]
Exceptions: one-time pad encryption. Possibly quantum
encryption.
[22]
Link to PGP
[23]
Add a discussion of ways of spoofing the system–one fake phone
book, man in the middle intercepting and reencrypting.
[24]
Link to discussions of the meaning of the 2nd
amendment.
[25]
Link to the discussion of a militia in Adam Smith's Wealth of
Nations. Link to the webbed transcript of my debate with Ed Meese
on encryption regulation.
[26]
link to reference to Capitalism and Freedom's discussion of
certifying vs. licensing.
[27]
Describe the facts of the court order in the Steve Jackson case,
apparently procured by perjury–at least, the person who was
supposed to be the source of the evidence on which it was based
denied it.
[28]
Link to longer version of this in Price theory.
[29]
Cite to data in Atlas of World Population History.
[30]
www.w3.org/P3P/
http://www.businessweek.com/bwdaily/dnflash/dec2001/nf20011214_1114.htm
[31]
*explain in more detail, with double level of
intermediaries
[32]
Link to cases, stories.
[33]
Cite to Ward Elliot's published work on this, newer stuff in
the new technology book?
[34]
Cite to forthcoming technology book–Klein et al.
[35]
Cite to SC case.
[36]
Cite sources. Margaret Mead and Samoa: The Making and Unmaking of
an Anthropological Myth by Derek Freeman?
[37]
The novel was Golden Lotus, regarded as a classic of Chinese
literature. Presumably the theory was that readers well enough
educated to translate the Latin were beyond carnal temptation. There
is now a more modern translation, entirely in English (give the
cite).
[38]
Explain the Appeal of Felony. Quote Blackstone.
[39]
Link to Law's Order, Icelandic piece, 18th c. English
piece.
[40]
For a more extensive discussion of the issues of this chapter, see D.
Friedman, “Privacy and Technology,” in which I use the
term “privacy rights” for what I here call privacy. For a
more general discussion of rights from a related perspective see
Friedman, David “A Positive Account of Property Rights,”
Social Philosophy and Policy 11 No. 2 (Summer 1994):.
1-16.
[41]
One example occurs in the context of a takeover bid. In order for the
market for corporate control to discipline corporate managers, it
must be in the interest of someone to identify badly managed
corporations and take them over. That depends on a takeover bid
remaining secret long enough for the person responsible to accumulate
substantial ownership at the pre-takeover price. In a very public
world, that is hard to do. Currently it is also hard to do in the
U.S. because of legal rules deliberately designed to limit the
secrecy of takeover bids. The result is not, of course, to eliminate
all takeover bids, or all market discipline over corporate managers—merely
to reduce both below what they would be in a more private and less
regulated market.
[42]
An exception is the case where the relevant information is negative–the
fact that I have not declared bankruptcy, say—and third parties
have no way of knowing whether I have acted to suppress it. If
individuals have control over such information, then the absence of
evidence that I have declared bankruptcy provides no evidence that I
have not—so borrowers who have not declared bankruptcy in the
past will be better off in a world where privacy rights with regard
to such information are weak. The problem disappears if I can take an
observable action—such as signing a legally enforceable waiver
of the relevant legal privacy rights—that demonstrates that the
information is not being suppressed.
[43]
Many of the points made in this section of the article can be found,
in somewhat different form, in Posner, Richard “The Right of
Privacy,” Georgia Law Review 12 no. 3 (Spring 1978):
393-428 and “An Economic Theory of Privacy,”
Regulation, May/June 1978, at 19. He finds the case for the
general desirability of privacy to be weak.
[44]
Link to discussion of bilateral monopoly, possibly in Price
Theory.
[45]
This might not be the case if we are frequently faced with situations
where my gains from the bargain provide the incentive for me to
generate information that is of value to other people as well. There
is little point to spending time and money predicting a rise in wheat
prices if everything you discover is revealed to potential sellers
before you have a chance to buy from them. This is a somewhat odd
case because, while successful speculation is both socially useful
and profitable, there is no particular connection between the two
facts, as pointed out in Hirshleifer, J., “The Private
and Social Value of Information and the Reward to Inventive Activity,”
American Economic Review 61, no. 3 (1971): 562-574. Hence the
opportunity for speculative gain may produce either too much or too
little incentive to obtain the necessary information.
A second qualification is that it
might not be the case if there were some completely verifiable way of
dropping my privacy—of letting you know the highest price I was
willing to pay in exchange for your letting me know the lowest price
you were willing to accept. This brings us back to a point made in an
earlier note—the difficulty of conveying believable information
in a context where I have the power to select what information is
conveyed.
[46]
Real law along those lines, due to incident during Bork
hearings.
[47]
There may be very costly ways of doing so. At one point during
litigation involving conflicts between the Church of Scientology and
discontented ex-members, information which the CoS wished to keep
private became part of the court record. The CoS responded by having
members continually checking out the relevant records, thus keeping
anyone else from getting access to them. And I might preserve my
privacy in a world where court records were public by changing my
name.
[48]
For a discussion of why it makes sense to treat some things as
property and some as commons, see Friedman, David, "Standards As
Intellectual Property: An Economic Approach," University of Dayton
Law Review 19, No. 3 (Spring 1994): 1109-1129 and chapter 10 of
Friedman, David, Law’s Order: An Economic Account, forthcoming
from Princeton University Press c. 3/2000.
[49]
This assumes that A’s possession of information does not impose
a cost on B. But the argument generalizes to the case where the cost
to B of A possessing information is typically lower than the benefit
to A, which brings us back to the earlier discussion of reasons why
privacy is likely to result in net costs.
[50]
Or, of course, by political activity–lobbying Congress or
making contributions to the police benevolent fund. For most
individuals such tactics are rarely worth their cost.
[51]
Link to Paypal page.
[52]
Note that there may be a conflict between informational privacy and
attentional privacy. Letting out a lot of information about me makes
it easier for sellers to figure out whether I want what they are
selling without asking me–but also makes it easier for people
to learn things about me I might not want them to know.
[53]
Link to explanations of public key encryption, blind
signatures.
[54]
Link to webbed explanation of how the double spending trick
works.
[55]
Cite to Laurence White’s book on free banking.
[56]
Link to Price Theory discussion of some of this.
[57]
Digicash link.
[58]
Links to Psion pages.
[59]
Link to Google
[60]
Cite Lisa Bernstein, link to her pieces if webbed.
[61]
As Bruce Benson has pointed out, this development is closely
analogous to the development of the Lex Mercatoria in the early
Middle Ages. That too was a system of private law enforced by
reputational penalties, in an environment where state law was
inadequate for contract enforcement, due in part to legal diversity
across jurisdictions. See Benson (1998b,c)
[62]
The first discussion of privacy through anonymity online which I am
aware of was in a work of fiction by a Computer Science Professor,
Vernor Vinge's novelette "True Names," included in True Names and
Other Dangers. A good recent description of the combination of
anonymity with online reputation occurs early in Marc Stiegler's novel
Earthweb.
[63]
Earthweb contains an entertaining illustration of this point.
A central character has maintained two online personae, one for legal
transactions, with a good reputation, and one for quasi-legal
transactions, such as purchases of stolen property, with a
deliberately shady reputation. At one point in the plot, his good
persona is most of the way through a profitable honest transaction
when it occurs to him that it would be even more profitable if,
having collected payment for his work, he failed, at the last minute,
to deliver. He rejects that option on the grounds that having a
persona with a good reputation has just given him the opportunity for
a profitable transaction, and if he destroys that reputation it will
be quite a while before he is able to get other such
opportunities.
[64]
See, for example, Nelson (1974), Williamson (1983), Klein and Leffler
(1981).
[65]
A real world version of this solution to the problem is the use of
escrow agents by parties buying valuable goods on ebay.
[66]
Link to stuff on Napster and less centralized
alternatives.
[67]
Name such a system, link to info.
[68]
I like to argue that since a K, a binary thousand, is actually 1024,
the Digital Millennium Copyright Act doesn't go into effect until
2048. I doubt I could persuade a court.
[69]
Link to Intertrust, stuff on the IBM cryptolope.
[70]
Discuss the case of the Church of Scientology's secret scriptures
chained to the desk–which still got out.
[71]
Explain Patri's open source approach to making the information in a
protected database publicly available.
[72]
Explain how trusted intermediaries could be used to make this
work.
[73]
For a counterexample, consider Smith, who had clearly read an
enormous amount before writing The Wealth of Nations.
[74]
Link to webbed stuff on information futures.
[75]
For a much more extensive discussion of some of these issues, see
Oliver Williamson, Markets and Hierarchies.
[76]
For more information on this particular controversy, see:
[77]
WSJ 12/24/01, p. B4. The web site is www.innocentive.com
[78]
And evolutionary psychologists–link to the summary of the
Adapted Mind ideas and cite the book.
[79]
Note on "hacker" vs "cracker," with elephant example. "It picks up
things how? What a hack."
[80]
link to examples.
[81]
Found unprotectable in White-Smith.
[82]
Link to Law's Order chapter on I.P.
[83]
Cite and link to U.S. v Seidlitz.
[84]
Lund v. Virginia (1977)
[85]
Or perhaps it should. A Dean of the University of Chicago Law School,
noting that three of the Federal Appeals Court judges in the seventh
circuit had previously been members of his faculty, suggested that it
might raise constitutional issues if the government had delegated to
him the power to select judges.
[86]
Link to http://www.lightlink.com/spacenka/fors/ for general stuff,
http://www.lightlink.com/spacenka/fors/police/intelrep.txt for the
initial report]
[87]
More generally, by creating some form of one way encryption or
hashing--a way of scrambling information that does not require you to
have the information necessary to unscramble it.
[88]
Cites and links to some cases of this sort.
[89]
Give more details of case, cite.
[90]
Information on that case.
[91]
Link to stories on cases, including FBI lab scandal
[92]
Link to online info on CALEA
[93]
http://www.eff.org/Privacy/Surveillance/CALEA/fbi_fedreg_101695.rfc--explain
complications of capacity, 3 different zones, etc.
[94]
http://www.eff.org/Privacy/Surveillance/CALEA/leahy_freeh_110395.letter
[95]
Add details, links.
[96]
Robin Hanson, Communications of the ACM, December 1994, “Can
Wiretaps Remain Cost Effective,” (also available on the Web at
http://www.hss.caltech.edu/~hanson/wiretap-cacm.html).
[97]
For some possible explanations, see (link to Law's Order
discussion and notes)
[98]
Give figures for percentages of mammalian and avian species. About
90% of avian species are monogamous, but a smaller fraction are long
term monogamous. The explanation is that both human and avian infants
require biparental child care.
[99]
Baker, R. R. and Bellis, M.A. 1992. Human sperm competition:
infidelity, the female orgasm and "kamikaze" sperm. Paper delivered
to the fourth annual meeting of the Human Behavior and Evolution
Society, Albuquerque, N.M., 22-26 July 1992, cited in Matt Ridley,
The Red Queen, "In a block of flats in Liverpool, they found by
genetic tests that fewer than four in every five people were the sons
of their ostensible fathers. .... They did the same tests in southern
England and got the same results."
Numerous studies making estimates of
cuckoldry rates among humans are summarized in: Baker, R. and M. A.
Bellis 1995. Human Sperm Competition. Copulation, Masturbation, and
Infidelity. Chapman and Hall. (from www.meangenes.org--interesting
site). See specifically: http://www.meangenes.org/notes/notes.html#c8
One Swiss study, in contrast, finds rates of "misidentified
paternity" slightly below one percent. [need more work
checking]
[100]
Attributed to Henry Kissinger.
[101]
Conjectures on reasons; check literature.
[102]
Cite evidence from Pinker.
[103]
"The Man Who Mistook His Wife for a Chattel," Margot Wilson and
Martin Daly, in The Adapted Mind, pp. 292-297 discuss behavior
in birds related to male sexual jealousy; p. 294 gives the specific
case of varying parental investment. The human evidence is pp.
306-308.
[104]
Brief reference to Tibetan polyandry and the system of partial
fathers as exceptions.
[105]
Link to information on the controversy over whether Mead's
description of Samoa was fictional.
[106]
Mark Flinn (1988a) cited in Wilson and Daly, p. 302.
[107]
A still cruder form, exposing sickly infants, predates Heinlein by
several thousand years.
[108]
Strictly speaking, clones created from an adult cell are identical
only in the nuclear DNA; the mitochondrial DNA, which comes from the
egg, is different unless the egg the nucleus is inserted into is from
either the same individual as the cell or that individual's maternal
ancestor (mother, mother's mother, ...). In the rest of the
discussion I will ignore this complication for purposes of
simplicity.
[109]
Lee Silver discusses this possibility in Reinventing
Eden.
[110]
Link to webbed info.
[111]
In the case of two women, if the child is entirely theirs it must be
a daughter, since neither has a Y chromosome to contribute. In the
case of two men, it might be either a son or a daughter, since a male
carries one X and one Y chromosome.
Lee Silver describes in his book two
other technologies that could be used to produce children for same
sex couples. Each, however, gives a child who is only genetically 25%
the product of each parent. In one case the child is a chimera--an
organism produced by fusing two fertilized eggs, giving an individual
half of whose cells come from one egg and half from the other. In the
other case the child is, genetically speaking, the grandchild of the
parental couple--the intervening generation having been aborted and
the cells necessary to produce a fertilized egg harvested.
[112]
Link to info on selective abortion in India, China? In India illegal
but happens anyway.
[113]
Info on centrifuging sperm, timing intercourse, etc.
[114]
For a discussion of the relevant economics, see (price theory
chapter)
[115]
Two interesting fictional discussions of these issues are Robert
Heinlein, The Moon is a Harsh Mistress, pp. And J. Neil
Schulman, The Rainbow Cadenza. The latter describes a world
with a very high male female ratio where women are drafted into
temporary prostitution.
[116]
Check how broadly this is true.
[117]
Reference to Casanova passage implying deliberate surrogate
fatherhood to produce an heir for a man who was impotent.
[118]
Give cite.
[119]
The obvious exceptions were hermaphrodites and eunuchs.
[120]
Reference Dawkins--probably The Blind Watchmaker.
[121]
Reference Plagues and Peoples and something more recent on the
subject.
[122]
For a delightful fictional picture of a species where the old are no
longer fertile but utterly devoted to the welfare of their
descendants, see Larry Niven Protector and other books set in
the same universe.
[123]
Reference a discussion of this contentious issue
[124]
Medawar, P.B. 1946. Old age and natural death. Mod. Quart. 1:
30-56. For simplicity, I am ignoring here the distinction between
lethal dominants and lethal recessives.
[125]
Williams, G.C. 1957. Pleiotropy, natural selection, and the evolution
of senescence. Evolution 11: 398-411.
[126]
Add fruitfly reference
[127]
In 1998, for example, of 401 incumbent congressional representatives
who sought reelection, 395 won--a success rate of better than 98%.
That somewhat exaggerates the advantage, since an incumbent who is
confident of losing may decide not to run--but in that year there
were a total of only 435 incumbents and presumably some of them
decided to retire for other reasons.
[128]
Reference some famous version of this argument. Popper?
[129]
Perhaps most famously stated by Sherlock Holmes (lumber room
quote)
[130]
Reference Posner's book on aging.
[131]
Link to "To His Coy
Mistress"--http://www.luminarium.org/sevenlit/marvell/coy.htm among
other places.
[132]
And I pointed out that, given how controversial his ideas were in his
own field, there should be no trouble finding people willing to pay
him to switch to something, indeed anything, else.
[133]
Elizabeth Moon was a military officer before she retired and started
writing moderately original science fiction and fantasy. For a more
famous example, consider Conrad, first a sailor and later a
novelist.
[134]
Give data.
[135]
Give numbers allowing for accumulated interest during the 50
years.
[136]
This discussion ignores a variety of complications, such as the
effect of such a pattern of saving and consumption on the market
interest rate, which would carry us well beyond the limits of this
book.
[137]
"Believing cryonics could reanimate somebody who has been frozen is
like believing you can turn hamburger back into a cow." --cryobiologist
Arthur Rowe.
A webbed faq supporting cryonics
points out in response that there are some vertebrates that can
survive freezing but none that can survive grinding.
http://www.faqs.org/faqs/cryonics-faq/part4/
[138]
For an intelligent discussion by a proponent of cryonics,
see:
http://www.merkle.com/cryo/
[139]
Readers interested in the subject should probably start with Eric
Drexler's classic Engines of Creation. It is webbed at:
http://www.foresight.org/EOC/index.html
[140]
Feynman, in his 1959 speech, discusses building small tools, using
them to build smaller tools, and so on all the way down.
Interestingly enough, the same idea appears in Robert Heinlein's
story Waldo (in Waldo and Magic Incorporated) published
in 1950.
[141]
For a webbed discussion of an attack on nanotech published in
Scientific American, see:
http://www.foresight.org/SciAmDebate/SciAmOverview.html#TOC
[142]
Richard Dawkins, The Blind Watchmaker.
[143]
The idea that new kinds of
nanomachinery will bring new, useful abilities may seem startling: in
all its billions of years of evolution, life has never abandoned its
basic reliance on protein machines. Does this suggest that
improvements are impossible, though? Evolution progresses through
small changes, and evolution of DNA cannot easily
replace DNA. Since the DNA/RNA/ribosome system is specialized
to make proteins, life has had no real opportunity to evolve an
alternative. Any production manager can well appreciate the reasons;
even more than a factory, life cannot afford to shut down to replace
its old systems. (Drexler, Engines)
[144]
Give detailed reference.
[145]
Reference Jomviking saga
[146]
Describe dance game etc. Links if I can find them. Rowing "game."
Cycling.
[147]
This idea sounds like pure science fiction, but has gotten
considerably closer in recent years. Link to monkey operating
joystick with his mind story.
[148]
Some readers may be reminded of the world described in C.S. Lewis's
The Great Divorce. For some reason he called it
"Hell."
[149]
Poul Anderson, The Makeshift Rocket, Ace 1962.
[150]
The unstable Lagrangian points on the line between the earth and the
sun, one inside and one outside the earth's orbit, are possible
locations for a habitat with enough maneuverability to avoid drifting
out of them.
[151]
The only exceptions I can think of are nuclear and geothermal
power.
[152]
Thus the multi-stage rocket, the way man actually got into space, was
first described by Tsiolkovsky in 1895.
[153]
While it sounds like wild eyed speculation, the idea of using
buckytubes to support a space elevator was seriously proposed by
Richard Smalley of Rice University, who was awarded the 1996 Nobel
Prize in chemistry for his discovery of fullerenes, the family of
carbon molecules to which buckytubes belong. Images of a variety of
fullerenes can be found at http://cnst.rice.edu/pics.html
[154]
1950 DA
[155]
The forthcoming launch of the first solar sail vehicle is described
at:
http://www.planetary.org/solarsail/who_where_when.htm
[156]
There is the "solar wind," which isn't really a wind at all, but it
doesn't provide enough pressure to be useful.
[157]
I am simplifying his solution by leaving out the early stage, when
the ship is using sunlight and the earth's gravity to pick up
speed.