\documentclass{article}
\usepackage{amssymb}
\usepackage{amsmath}
%\usepackage{slide-article}
\usepackage{slide-article-tom}
\ifx\pdfoutput\undefined
\usepackage[dvips]{graphicx}
\else
\usepackage[pdftex]{graphicx}
%% \usepackage{type1cm}
%% \usepackage{color}
\pdfcompresslevel=9
\fi
% \usepackage{epsfig}
%\usepackage{graphics}
\usepackage{hyperref}
%\definecolor{Emerald}{cmyk}{1,0,0.50,0}
\hypersetup{colorlinks,
linkcolor=blue,
%pdfpagemode=FullScreen
pdfpagemode=None
}
%\usepackage{hyper}
%\usepackage{hthtml}
%\def\hyperref#1#2#3#4{\hturl{#1}}
\def\pagedone{\newpage}
\def\tthdump#1{#1}
\tthdump{\def\sectionhead#1{\begin{center}{\LARGE\hypertarget{#1}
{#1}\hyperlink{Our general topics:}{\hfil$\leftarrow$}}\end{center}}}
%%tth:\def\sectionhead#1{{\LARGE#1\hypertarget{#1}{#1}}
%%tth: \special{html: Top}}
\tthdump{\def\quotesection#1{\begin{center}{\LARGE\hypertarget{#1}
{#1}\hyperlink{The quotes}{\hfil$\twoheadleftarrow$}}\end{center}}}
%%tth:\def\quotesection#1{{\LARGE#1\hypertarget{#1}{#1}}
%%tth: \special{html: <-}}
%%tth:\def\makehyperlink#1{\special{html: }{\large#1}\special{html: }}
%%tth:\def\binom#1#2{\left(\begin{array}{c}#1\\#2\end{array}\right)}
%\def\sectionhead#1{\begin{center}{\LARGE #1}\end{center}}
%\def\sectionhead#1{\section{#1}}
% defines a 2 element column vector.
\def\col#1#2{\left(\begin{array}{c}#1\\#2\end{array}\right)}
\def\tcol#1#2{(#1, #2)^T}
\begin{document}
\raggedright
%%tth:\special{html: }
\pagestyle{myfooters}
%\pagestyle{plain}
\thispagestyle{empty}
%%tth:\special{html: Assessing Risks}
%Slide 1
\title{\ \newline \newline{\LARGE\bf Assessing Risks}\newline \newline \newline}
\author{Tom Carter
\newline
\newline
\newline
\tthdump{\href{http://cogs.csustan.edu/\~tom/SFI-CSSS}{http://cogs.csustan.edu/\~{}tom/SFI-CSSS}}
%%tth:\href{http://cogs.csustan.edu/~tom/SFI-CSSS}{http://cogs.csustan.edu/\~tom/SFI-CSSS}
\vfill
Complex Systems
\newline
}
\date{July, 2002}
\maketitle
%Slide 2
\sectionhead{Our general topics:}
%%tth:\begin{itemize}
%%tth:\item
\tthdump{\hyperlink{Assessing risks}
{\ $\circledcirc$ Assessing risks\newline}}
%%tth:\makehyperlink{Assessing risks}
%%tth:\item
\tthdump{\hyperlink{Using Bayes' Theorem}
{$\circledcirc$ Using Bayes' Theorem\newline}}
%%tth:\makehyperlink{Using Bayes' Theorem}
%%tth:\item
\tthdump{\hyperlink{The `doomsday argument'}
{$\circledcirc$ The `doomsday argument'\newline}}
%%tth:\makehyperlink{The `doomsday argument'}
%%tth:\item
\tthdump{\hyperlink{References}
{$\circledcirc$ References\newline}}
%%tth:\makehyperlink{References}
%%tth:\end{itemize}
\pagedone
\quotesection{The quotes}
%%tth:\begin{itemize}
%%tth:\item
\tthdump{\hyperlink{Science and wisdom}
{\ $\circledcirc$ Science and wisdom\newline}}
%%tth:\makehyperlink{Science and wisdom}
%%tth:\item
\tthdump{\hyperlink{Miracles}
{$\circledcirc$ Miracles\newline}}
%%tth:\makehyperlink{Miracles}
%%tth:\end{itemize}
%\thepage
\tthdump{\hyperlink{Our general topics:}{\hfil To topics $\leftarrow$}}
%%tth:{\special{html: Back to top of file}}
\pagedone
%Slide 3
\quotesection{Science and wisdom}
%%tth:\begin{quote}
``Science is organized knowledge. Wisdom is organized life.''\newline
- Immanuel Kant

``My own suspicion is that the universe is not only stranger than we suppose,
but stranger than we can suppose.''\newline
- John Haldane

``Not everything that can be counted counts, and not everything that counts can be counted.''\newline
- Albert Einstein (1879-1955)

``The laws of probability, so true in general, so fallacious in particular.''\newline
- Edward Gibbon
%%tth:\end{quote}
\pagedone
\sectionhead{Assessing risks}
\begin{itemize}
\item The task of assessing risks in our lives is notoriously difficult.
One thing we can try to do is calculate the probabilities of various
events happening.
Unfortunately, humans (even well trained scientists, mathematicians,
and probabilists) often do a poor job of estimating probabilities.
There is an apocryphal story of a statistician who always packed a bomb
in his luggage when flying in a plane. When asked why, he explained that
he knew that if the probability of there being one bomb on a plane was
$\frac{1}{1000}$, then the probability of there being two
bombs would be $$\frac{1}{1000} * \frac{1}{1000} = \frac{1}{1,000,000},$$
and he felt much safer with the $\frac{1}{1,000,000}$ chance \ldots
\pagedone
\item Obviously there is something wrong with this statistician's calculation
of probabilities.
I sometimes wonder whether we in the US, holding on to our nuclear
weapons, have fallen into a similar sort of confusion about calculating
risks and ``feeling safer.''
\item I won't go through much, but here are some probability basics, where $a$ and $b$ are events: \newline
$ P(\mathrm{not}\ a) = 1 - P(a).$\newline
$ P(a\ \mathrm{or}\ b) = P(a) + P(b) - P(a\ \mathrm{and}\ b).$
The probability of two events happening, $P(a\ \mathrm{and}\ b)$ (often denoted
by $P(a, b)$), can be quite difficult to calculate, since we often do not
know how the events $a$ and $b$ are related to each other.
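These identities can be checked by brute-force enumeration over a small sample space. A minimal sketch in Python (the two-dice events are my own illustrative choice, not part of the text; exact rational arithmetic via the standard-library `fractions` module):

```python
from fractions import Fraction

# Enumerate all 36 equally likely outcomes of rolling two fair dice.
outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]

def prob(event):
    """Probability of an event, given as a predicate on outcomes."""
    hits = sum(1 for o in outcomes if event(o))
    return Fraction(hits, len(outcomes))

a = lambda o: o[0] == 6   # first die shows 6
b = lambda o: o[1] == 6   # second die shows 6

# P(not a) = 1 - P(a)
assert prob(lambda o: not a(o)) == 1 - prob(a)

# P(a or b) = P(a) + P(b) - P(a and b)
assert prob(lambda o: a(o) or b(o)) == \
    prob(a) + prob(b) - prob(lambda o: a(o) and b(o))
```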
\pagedone
\item Conditional probability: \newline\newline
$ P(a \vert b) $ is the probability of $a$, given that we know $b$.
The joint probability of both $a$ and $b$ is given by:
$$P(a, b) = P(a \vert b) P(b).$$
Since $P(a, b) = P(b, a)$, we have Bayes' Theorem:
$$P(a \vert b)P(b) = P(b \vert a) P(a),$$
or
$$P(a \vert b) = \frac{P(b \vert a) P(a)}{P(b)}.$$
\item If two events $a$ and $b$ are such that
$$P(a \vert b) = P(a),$$
we say that the events $a$ and $b$ are {\em independent}.
\pagedone
\item Note that if $a$ and $b$ are {\em independent},
$$P(a \vert b) = P(a),$$
then from
Bayes' Theorem, we will also have that
$$P(b \vert a) = P(b),$$
and therefore,
$$P(a, b) = P(a \vert b)P(b) = P(a)P(b).$$
This last equation is often taken as the definition of {\em independence}.
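A similar enumeration checks Bayes' theorem and the equivalent characterizations of independence, again using two fair dice as an illustrative sample space of my own choosing:

```python
from fractions import Fraction

outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]  # two fair dice

def prob(event):
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

def cond(a, b):
    """Conditional probability P(a | b) = P(a and b) / P(b)."""
    return prob(lambda o: a(o) and b(o)) / prob(b)

a = lambda o: o[0] == 6   # first die shows 6
b = lambda o: o[1] == 6   # second die shows 6

# Bayes' theorem: P(a|b) P(b) = P(b|a) P(a)
assert cond(a, b) * prob(b) == cond(b, a) * prob(a)

# The two dice are independent, so both characterizations agree:
assert cond(a, b) == prob(a)
assert prob(lambda o: a(o) and b(o)) == prob(a) * prob(b)
```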
\item We have in essence begun here the development of a mathematized methodology for drawing
inferences about the world from uncertain knowledge.
\end{itemize}
\pagedone
\quotesection{Miracles}
%%tth:\begin{quote}
``The opposite of a correct statement is a false statement. The opposite of a profound truth
may well be another profound truth.''\newline
- Niels Bohr (1885-1962)

``Groundless hope, like unconditional love, is the only kind worth having.''\newline
- John Perry Barlow

``There are only two ways to live your life. One is as though nothing is a miracle.
The other is as though everything is a miracle.''\newline
- Albert Einstein (1879-1955)

``The Universe is full of magical things patiently waiting for our wits to
grow sharper.''\newline
- Eden Phillpotts
\pagedone
\
\newline \newline \newline
``Nature uses only the longest threads to weave her patterns, so that each
small piece of her fabric reveals the organization of the entire tapestry.''\newline
- Richard Feynman
%%tth:\end{quote}
\pagedone
\sectionhead{Using Bayes' Theorem}
\begin{itemize}
\item A quick example: \newline
Suppose that you are asked by a friend to help them understand the results of
a genetic screening test they have taken. They have been told that they
have tested positive, and that the test is 99\% accurate. What is the
probability that they actually have the anomaly?
%% (Hint: We don't yet have enough information \ldots)
%% \pagedone
You do some research, and find out that the test screens for a genetic anomaly
that is believed to occur in one person out of 100,000 on average. The lab that
does the tests guarantees that the test is 99\% accurate. You push the question,
and find that the lab says that one percent of the time, the test falsely reports
the absence of the anomaly when it is there, and one percent of the time the test
falsely reports the presence of the anomaly when it is not there.
The test has come
back positive for your friend. How worried should they be? Given this much
information, what can you calculate as the probability they actually have the anomaly?
In general, there are four possible situations for an individual being tested:
\begin{enumerate}
\item Test positive (Tp), and have the anomaly (Ha).
\item Test negative (Tn), and don't have the anomaly (Na).
\item Test positive (Tp), and don't have the anomaly (Na).
\item Test negative (Tn), and have the anomaly (Ha).
\end{enumerate}
\pagedone
We would like to calculate for our friend the probability they actually have
the anomaly (Ha), given that they have tested positive (Tp):
$$P(Ha \vert Tp).$$
We can do this using Bayes' Theorem.
We can calculate:
\begin{eqnarray*}
P(Ha \vert Tp) & = & \frac{P(Tp \vert Ha) * P(Ha)}{P(Tp)}.
\end{eqnarray*}
We need to figure out the three items on the right side of the equation. We can
do this by using the information given.
\pagedone
Suppose the screening test was done on
10,000,000 people. Out of these $10^7$ people, we expect there to be
$10^7/10^5 = 100$ people with the anomaly, and 9,999,900 people without the
anomaly. According to the lab, we would expect the test results to be:
\begin{itemize}
\item Test positive (Tp), and have the anomaly (Ha):
$$ 0.99 * 100 = 99\ \mathrm{people}.$$
\item Test negative (Tn), and don't have the anomaly (Na):
$$ 0.99 * 9,999,900 = 9,899,901\ \mathrm{people}.$$
\item Test positive (Tp), and don't have the anomaly (Na):
$$ 0.01 * 9,999,900 = 99,999\ \mathrm{people}.$$
\item Test negative (Tn), and have the anomaly (Ha):
$$ 0.01 * 100 = 1\ \mathrm{person}.$$
\end{itemize}
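The four expected counts can be reproduced with exact rational arithmetic. A minimal sketch (the variable names are mine; the prevalence and accuracy figures are from the example above):

```python
from fractions import Fraction

N = 10_000_000                 # people screened
rate = Fraction(1, 100_000)    # anomaly prevalence: 1 in 100,000
acc = Fraction(99, 100)        # test accuracy: 1% false positive, 1% false negative

have = N * rate                # people with the anomaly: 100
lack = N - have                # people without it: 9,999,900

tp = acc * have                # test positive, have the anomaly
tn = acc * lack                # test negative, don't have it
fp = (1 - acc) * lack          # test positive, don't have it (false positives)
fn = (1 - acc) * have          # test negative, have it (false negatives)

assert (tp, tn, fp, fn) == (99, 9_899_901, 99_999, 1)
```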
\pagedone
Now let's put the pieces together:
\begin{eqnarray*}
P(Ha) & = & \frac{1}{100,000}\\
\\
& = & 10^{-5}\\
\\
P(Tp) & = & \frac{99 + 99,999}{10^7}\\
\\
& = & \frac{100,098}{10^7}\\
\\
& = & 0.0100098\\
\\
P(Tp \vert Ha) & = & 0.99
\end{eqnarray*}
\pagedone
Thus, our calculated probability that our friend actually has the anomaly is:
\begin{eqnarray*}
P(Ha \vert Tp) & = & \frac{P(Tp \vert Ha) * P(Ha)}{P(Tp)}\\
\\
& = & \frac{0.99 * 10^{-5}}{0.0100098}\\
\\
& = & \frac{9.9 * 10^{-6}}{1.00098 * 10^{-2}}\\
\\
& = & 9.890307 * 10^{-4}\\
\\
& < & 10^{-3}
\end{eqnarray*}
In other words, our friend, who has tested {\em positive}, with a test that
is 99\% correct, has less than one chance in 1000 of actually having the anomaly!
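The whole calculation can be checked in a few lines, using the total-probability expansion of $P(Tp)$ over the "have" and "don't have" cases (a sketch; variable names are mine):

```python
from fractions import Fraction

p_ha = Fraction(1, 100_000)          # P(Ha): prior prevalence of the anomaly
p_tp_given_ha = Fraction(99, 100)    # P(Tp | Ha): true-positive rate
p_tp_given_na = Fraction(1, 100)     # P(Tp | Na): false-positive rate

# Total probability of testing positive:
p_tp = p_tp_given_ha * p_ha + p_tp_given_na * (1 - p_ha)

# Bayes' theorem:
p_ha_given_tp = p_tp_given_ha * p_ha / p_tp

assert p_ha_given_tp == Fraction(99, 100_098)   # ~0.000989
assert p_ha_given_tp < Fraction(1, 1000)
```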
%% \pagedone
%% \item There are a variety of questions we could ask now, such as, ``For this anomaly,
%% how accurate would the test have to be for there to be a greater than 50\%
%% probability that someone who tests positive actually has the anomaly?''
%%
%% For this, we need fewer false positives than true positives. Thus, in
%% the example, we would need fewer than 100 false positives out of the
%% 9,999,900 people who do not have the anomaly. In other words, the proportion
%% of those without the anomaly for whom the test would have to be correct
%% would need to be greater than:
%% $$\frac{9,999,800}{9,999,900} = 99.999\%$$
%% \pagedone
%% \item Another question we could ask is, ``How prevalent would an anomaly have to be
%% in order for a 99\% accurate test (1\% false positive and 1\% false negative)
%% to give a greater than 50\% probability of actually having the anomaly when
%% testing positive?''
%%
%% Again, we need fewer false positives than true positives. We would therefore
%% need the actual occurrence to be greater than 1 in 100 (each false positive
%% would be matched by at least one true positive, on average).
%% \pagedone
%% \item Note that the current population of the US is about 280,000,000 and the
%% current population of the world is about 6,200,000,000. Thus, we could
%% expect an anomaly that affects 1 person in 100,000 to affect about 2,800
%% people in the US, and about 62,000 people worldwide, and one affecting
%% one person in 100 would affect 2,800,000 people in the US, and 62,000,000
%% people worldwide \ldots
%% %%\pagedone
%% \item Another example: suppose the test were not so accurate? Suppose the test
%% were 80\% accurate (20\% false positive and 20\% false negative). Suppose
%% that we are testing for a condition expected to affect 1 person in 100.
%% What would be the probability that a person testing positive actually has
%% the condition?
%% \pagedone
%% We can do the same sort of calculations.
%%
%% Let's use 1000
%% people this time. Out of this sample, we would expect 10 to have the
%% condition.
%% \begin{itemize}
%% \item Test positive (Tp), and have the condition (Ha):
%% $$ 0.80 * 10 = 8\ \mathrm{people}.$$
%% \item Test negative (Tn), and don't have the condition (Na):
%% $$ 0.80 * 990 = 792\ \mathrm{people}.$$
%% \item Test positive (Tp), and don't have the condition (Na):
%% $$ 0.20 * 990 = 198\ \mathrm{people}.$$
%% \item Test negative (Tn), and have the condition (Ha):
%% $$ 0.20 * 10 = 2\ \mathrm{people}.$$
%% \end{itemize}
%% \pagedone
%% Now let's put the the pieces together:
%% \begin{eqnarray*}
%% P(Ha) & = & \frac{1}{100}\\
%% \\
%% & = & 10^{-2}\\
%% \\
%% P(Tp) & = & \frac{8 + 198}{10^3}\\
%% \\
%% & = & \frac{206}{10^3}\\
%% \\
%% & = & 0.206\\
%% \\
%% P(Tp \vert Ha) & = & 0.80
%% \end{eqnarray*}
%% \pagedone
%% Thus, our calculated probability that our friend actually has the anomaly is:
%% \begin{eqnarray*}
%% P(Ha \vert Tp) & = & \frac{P(Tp \vert Ha) * P(Ha)}{P(Tp)}\\
%% \\
%% & = & \frac{0.80 * 10^{-2}}{0.206}\\
%% \\
%% & = & \frac{8 * 10^{-3}}{2.06 * 10^{-1}}\\
%% \\
%% & = & 3.883495 * 10^{-2}\\
%% \\
%% & < & .04
%% \end{eqnarray*}
%%
%% In other words, one who has tested {\em positive}, with a test that
%% is 80\% correct, has less that one chance in 25 of actually having this
%% condition. (Imagine for a moment, for example, that this is a drug test
%% being used on employees of some corporation \ldots)
%% \pagedone
%% \item We could ask the same kinds of questions we asked before:
%% \begin{enumerate}
%% \item How accurate would the test have to be to get a better than 50\%
%% chance of actually having the condition when testing positive?
%%
%% (99\%)
%% \item For an 80\% accurate test, how frequent would the condition
%% have to be to get a better than 50\% chance?
%%
%% (1 in 5)
%% \end{enumerate}
\pagedone
\item Some questions:
\begin{enumerate}
\item Are examples like this realistic? If not, why not?
\item What sorts of things could we do to improve our results?
\item Would it help to repeat the test? For example, if the
probability of a false positive is 1 in 100, would that mean
that the probability of two false positives on the same
person would be 1 in 10,000 ($\frac{1}{100} * \frac{1}{100}$)?
If not, why not?
\item In the case of a medical condition such as a genetic anomaly,
it is likely that the test would not be applied randomly, but would
only be ordered if there were other symptoms suggesting the anomaly.
How would this affect the results?
\end{enumerate}
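On question 3: whether repeating the test helps depends entirely on whether the test's errors are independent across repetitions. A sketch of the optimistic case, under the (loud) assumption of fully independent repeats, with the 99\%/1\% figures from the example above:

```python
from fractions import Fraction

def update(prior,
           p_pos_given_ha=Fraction(99, 100),   # true-positive rate
           p_pos_given_na=Fraction(1, 100)):   # false-positive rate
    """One Bayesian update on a positive test result."""
    p_pos = p_pos_given_ha * prior + p_pos_given_na * (1 - prior)
    return p_pos_given_ha * prior / p_pos

prior = Fraction(1, 100_000)
after_one = update(prior)        # ~0.000989, as computed above
after_two = update(after_one)    # ~0.089, IF the second test errs independently
```

Even two independent positives leave the probability below 10\%. And for a genetic anomaly, errors on the same person are unlikely to be independent (whatever confused the test the first time is probably still there), which is exactly what question 3 is driving at.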
\end{itemize}
\pagedone
\sectionhead{The `doomsday argument'}
\begin{itemize}
\item There is a line of reasoning called the ``doomsday argument'' (attributed
to Brandon Carter in the 1980s) suggesting that we consistently
underestimate the likelihood that the human race will end soon.
What follows is a brief summary of the general theme of the argument.
\item Suppose you have before you two urns, and are told that one contains
ten balls labeled 1 through 10, and the other contains one thousand
balls labeled 1 through 1000. You choose one of the urns at random.
A ball is drawn at random from the urn you chose, and the ball drawn
has on it the label `7'. What is the probability that the urn you chose
is the one with ten balls in it?
In the beginning, before drawing the ball labeled `7', we have
$$P(ten) = P(thousand) = \frac{1}{2}.$$
After drawing the ball, however, we can use Bayes' theorem to calculate:
\begin{eqnarray*}
P(ten\ \vert\ \textrm{draw `7'}) & = &
\frac{P(\textrm{draw `7'}\ \vert\ ten) * P(ten)}{P(\textrm{draw `7'})}\\
& = & \frac{\frac{1}{10} * \frac{1}{2}}
{\frac{1}{2}*\frac{1}{10} + \frac{1}{2}*\frac{1}{1000}}\\
& = & \frac{\frac{1}{20}}
{\frac{1}{20} + \frac{1}{2000}}\\
& = & \frac{\frac{1}{20}}
{\frac{101}{2000}}\\
& & \\
& = & \frac{2000}{2020}\\
& & \\
& = & 0.990099
\end{eqnarray*}
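The urn posterior can be checked the same way (a sketch with names of my own choosing):

```python
from fractions import Fraction

# Prior: each urn chosen with probability 1/2.
p_ten = p_thousand = Fraction(1, 2)

# Likelihood of drawing the ball labeled 7 from each urn:
p_draw7_given_ten = Fraction(1, 10)
p_draw7_given_thousand = Fraction(1, 1000)

# Total probability of drawing a 7, then Bayes' theorem:
p_draw7 = p_draw7_given_ten * p_ten + p_draw7_given_thousand * p_thousand
posterior = p_draw7_given_ten * p_ten / p_draw7

assert posterior == Fraction(100, 101)   # = 2000/2020 ~ 0.990099
```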
\pagedone
\item Now the `doomsday argument' \ldots Consider the two possibilities:

D: We humans destroy ourselves before we leave earth and colonize the
universe.

U: We colonize the universe.

We now estimate: in the case `D', no more than $100,000,000,000 = 10^{11}$
humans will ever live, and I am one of those $10^{11}$.
In the case `U', many more humans will live, say $10^{15}$.
I know that I am among the first $10,000,000,000$ people to live, so in
terms of human birth order, we can say that my `label' is, say, $8,000,000,000$
(call this `L8B').
\pagedone
We now use Bayes' theorem:
\begin{eqnarray*}
P(D\ \vert\ \mathrm{L8B}) & = &
\frac{P(\mathrm{L8B}\ \vert\ D) * P(D)}{P(\mathrm{L8B})}\\
& = & \frac{\frac{1}{10^{11}} * P(D)}
{P(D)*10^{-11} + (1 - P(D))*10^{-15}}\\
& = & \frac{\frac{1}{10^{11}} * P(D)}
{\frac{10^4*P(D)+(1 - P(D))}{10^{15}}}\\
& = & \frac{10^4*P(D)}
{10^4*P(D)+(1-P(D))}
\end{eqnarray*}
Now put in a value for $P(D)$, and see what happens:
If we start with an estimate $P(D) = \frac{1}{100}$, then
\begin{eqnarray*}
P(D\ \vert\ \mathrm{L8B}) & = & \frac{10^4*10^{-2}}
{10^4*10^{-2}+(1-10^{-2})}\\
& = & \frac{10^2}
{10^2 + 0.99}\\
& = & 0.990197
\end{eqnarray*}
\pagedone
\item In other words, if our original estimate for the likelihood of `doomsday'
was $P(D) = \frac{1}{100}$, we should revise that estimate upward to
0.990197!
You can try other values for the various pieces on your own \ldots
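As an invitation to experiment, here is the update as a small function. The parameter defaults are the estimates used above; they are assumptions to play with, not data:

```python
def doomsday_update(p_d, n_doom=1e11, n_colonize=1e15):
    """Posterior P(D | L8B) from the birth-rank evidence.

    p_d        -- prior probability of the doomsday scenario D
    n_doom     -- total humans ever, under D (assumed 10^11 above)
    n_colonize -- total humans ever, under U (assumed 10^15 above)
    """
    like_d = 1 / n_doom        # P(L8B | D)
    like_u = 1 / n_colonize    # P(L8B | U)
    return like_d * p_d / (like_d * p_d + like_u * (1 - p_d))

print(doomsday_update(0.01))   # ~0.9902, matching the slide's 0.990197
```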
\end{itemize}
\pagedone
\pagedone
\footnotesize
\bibliographystyle{plain}
\tthdump{\hypertarget{References}{}\hyperlink{Our general topics:}{\hfil \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ To top $\leftarrow$}}
%%tth:{\special{html: Top}}
\vspace{-1.0 in}
\begin{thebibliography}{12}
%%tth:{\special{html: }}
\bibitem{bostrom1}
Bostrom, Nick,
{\em Existential Risks --
Analyzing Human Extinction Scenarios and Related Hazards},
\url{http://www.nickbostrom.com/existential/risks.html}.
\bibitem{bostrom2}
Bostrom, Nick,
{\em The Doomsday Argument: a Literature Review},
\url{http://anthropic-principle.com/preprints/lit/index.html}.
\bibitem{feller}
Feller, W.,
{\em An Introduction to Probability Theory and Its Applications},
Wiley, New York, 1957.
\bibitem{hamming2}
Hamming, R. W.,
{\em Coding and Information Theory}, 2nd ed.,
Prentice-Hall, Englewood Cliffs, 1986.
\bibitem{neumann}
von Neumann, John,
Probabilistic logic and the synthesis of reliable organisms
from unreliable components,
in {\em Automata Studies} (Shannon, McCarthy, eds.), 1956.
\bibitem{pierce}
Pierce, John R.,
{\em An Introduction to Information Theory -- Symbols, Signals and Noise},
(second revised edition),
Dover Publications, New York, 1980.
\end{thebibliography}
\tthdump{\hyperlink{Our general topics:}{\hfil To top $\leftarrow$}}
%%tth:{\special{html: Back to top of file}}
\end{document}