\documentclass{llncs}
\usepackage{graphicx}
\usepackage{float}
\usepackage{amssymb,amsmath}
\usepackage{enumerate}
\usepackage{todonotes}
\begin{document}
\title{Ultrametric Turing Machines with Limited Reversal Complexity
\thanks{%
The research was supported by Grant No. 09.1570 from the
Latvian Council of Science and by Project 2009/0216/1DP/1.1.1.2.0/09/IPIA/VIA/044
from the
European Social Fund.
}
}
\author{
Rihards Kri\v slauks,
Ieva Ruk\v s\= ane,
Kaspars Balodis,
Ilja Kucevalovs,\\
R\= usi\c n\v s Freivalds,
Ieva N\= agele}
\institute{Institute of Mathematics and Computer Science,
University of Latvia,\\ Rai\c na bulv\= aris 29, Riga, LV-1459, Latvia
}
\maketitle
\begin{abstract}
String theory in physics and molecular biology use $p$-adic numbers as a tool to
describe properties of the microworld in a more adequate way. We consider a notion of
automata and Turing machines using $p$-adic numbers to describe random branching of
the process of computation.
We prove that Turing machines of this type can have advantages in reversal complexity over
deterministic and probabilistic Turing machines.
\end{abstract}
\section{Introduction}
Pascal and Fermat believed that every event of indeterminism can be described by a real number between 0 and 1 called
{\em probability}. Quantum physics introduced a description in terms of complex numbers called {\em amplitude of
probabilities} and later in terms of probabilistic combinations of amplitudes most conveniently described by {\em density
matrices}.
String theory \cite{VVZ95}, chemistry \cite{K06} and molecular biology \cite{DD09,Kh97} have introduced $p$-adic numbers to describe
measures of indeterminism.
Probabilistic automata and algorithms posed no difficulties for practical implementation. Quantum computation \cite{H01} has made considerable
theoretical progress, but its practical implementation has met considerable difficulties. Nevertheless, prototypes of quantum computers exist, some
quantum algorithms are implemented on these prototypes, and quantum cryptography is already used in practice. Some people are skeptical about the
practicality of the initial spectacular promises of quantum computation, but nobody can deny the existence of quantum computation.
We consider a new type of indeterministic algorithms called {\em ultrametric} algorithms. They are very similar to probabilistic algorithms but while probabilistic algorithms use real numbers $r$ with $0 \leq r \leq 1$ as parameters, ultrametric algorithms use {\em $p$-adic} numbers as the parameters.
Slightly simplifying the description of the definitions, one can say that ultrametric algorithms are the same as probabilistic algorithms; only the interpretation of the probabilities is different.
Our choice of $p$-adic numbers instead of real numbers is not quite arbitrary. In 1916 Alexander Ostrowski
proved that any non-trivial absolute value on the rational numbers $Q$ is equivalent to either the usual real absolute value or a $p$-adic absolute value.
This result shows that using $p$-adic numbers is not merely one of many possibilities to generalize the definition of deterministic algorithms but rather the only remaining possibility not yet explored.
There are many distinct $p$-adic absolute values corresponding to many prime numbers $p$. These absolute values are traditionally called {\em ultrametric}.
Absolute values are needed to consider {\em distances} among objects. An important feature that differentiates $p$-adic numbers from real numbers: real numbers (both rational and irrational) are linearly ordered whereas $p$-adic numbers cannot be linearly ordered. This is why {\em valuations} of $p$-adic numbers are considered. The situation is similar in quantum computation. Quantum amplitudes are complex numbers which also cannot be linearly ordered. The counterpart of valuation for quantum algorithms is {\em measurement} translating a complex number $a + bi$ into a real number $a^2 + b^2$. Valuations of $p$-adic numbers are rational numbers.
Ultrametric finite automata and ultrametric Turing machines are reasonably similar to probabilistic finite automata and Turing machines. Hence for the first papers on ultrametric algorithms it is important to show the features that distinguish ultrametric algorithms from probabilistic ones.
Below we consider ultrametric versus deterministic and probabilistic Turing machines with one input tape, which can be read only 1-way, and a work tape, which is empty at the beginning of the computation.
The complexity measure is the number of reversals on the work tape (no reversals are allowed on the input tape). If the head on the work tape stays on the same cell for some time and then continues its
walk in the same direction, this is not counted as a reversal. In our examples we demonstrate advantages of ultrametric Turing machines with a constant number of reversals over deterministic and probabilistic Turing machines with a constant number of reversals.
\section{$p$-adic numbers}
In this section we give a brief introduction to $p$-adic numbers. The main notion used throughout the paper is the $p$-adic norm; therefore the introduction is kept short, and the theory of $p$-adic numbers is presented in full detail in \cite{M00,G83,K84}.
\subsection{Introduction to $p$-adic numbers}
A $p$-adic digit is a natural number between $0$ and $p-1$ (inclusive), where $p$ is an arbitrary prime number. A $p$-adic integer is a sequence $(a_i)_{i \in N}$ where each $a_i$ is a $p$-adic digit. This can also be written as $\cdots a_i \cdots a_2a_1a_0$.
This sequence corresponds to the formal sum $\sum\limits_{i=0}^{+\infty}a_ip^i$, where $p$ is the chosen prime number and the base of the number system. The sequence is infinite to the left. For natural numbers, only a finite number of digits are non-zero. Given a natural number $n$, the non-zero part of its $p$-adic representation is exactly the same as the number's representation in base $p$. For example, the decimal number 42 is written in base 5 as 132, and its representation in 5-adic numbers is $\cdots 0 \cdots 0132$.
For negative numbers and non-integers, however, the situation is considerably different. Essentially, the negative $p$-adic number $-x$ is the one that gives the result $0$ when added to $x$. For example, the $5$-adic representation of $-2$ is $\cdots 4443$. Similarly we can find the fraction $\frac{1}{2}$: adding $\frac{1}{2}$ to itself must return $1$, whose 5-adic representation is $\cdots 0001$, and therefore $\frac{1}{2}$ can be expressed in 5-adic notation as $\cdots 2223$.
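As a small illustration (ours, not part of the formal development), the following sketch computes the first $k$ digits of the $p$-adic expansion of a rational number whose denominator is not divisible by $p$; the function name and interface are chosen for this example only, and Python~3.8 or later is assumed for the modular inverse \texttt{pow(den, -1, p)}.
\begin{verbatim}
from fractions import Fraction

def p_adic_digits(r, p, k):
    # first k p-adic digits (least significant first) of a rational r
    # whose denominator is not divisible by the prime p
    r = Fraction(r)
    digits = []
    for _ in range(k):
        # the unique digit d in {0,...,p-1} congruent to r modulo p
        d = (r.numerator * pow(r.denominator, -1, p)) % p
        digits.append(d)
        r = (r - d) / p
    return digits

print(p_adic_digits(42, 5, 6))              # [2, 3, 1, 0, 0, 0]  i.e. ...000132
print(p_adic_digits(-2, 5, 6))              # [3, 4, 4, 4, 4, 4]  i.e. ...444443
print(p_adic_digits(Fraction(1, 2), 5, 6))  # [3, 2, 2, 2, 2, 2]  i.e. ...222223
\end{verbatim}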
\subsection{$p$-adic absolute values}
Let $X$ be a nonempty set. A function $d: X \times X \rightarrow R_{\geq 0}$ is called a metric in this set if it satisfies the following properties:
\begin{enumerate}
\item $d(x,y) = 0$ if and only if $x = y$,\\
\item $d(x,y) = d(y,x)$,\\
\item $d(x,y) \leq d(x,z) + d(z,y)$ for all $z \in X$.
\end{enumerate}
A metric function is used to determine the distance between two elements of a set. An element's distance from zero $d(x,0)$ is called its norm or absolute value and is denoted by $||x||$.
The norm of an element satisfies the following properties:
\begin{enumerate}
\item $||x||=0$ if and only if $x=0$,\\
\item $||x\cdot y|| = ||x||\cdot||y||$,\\
\item $||x+y|| \leq ||x||+||y||$ (the triangle inequality).
\end{enumerate}
If the third property can be replaced by its stronger variant, the strong triangle inequality $||x+y|| \leq \max(||x||,||y||)$, the norm is called ultrametric; otherwise it is called Archimedean \cite{Rus12}.
\begin{definition}
For every rational non-zero number $\alpha$ there is a unique prime factorization $\alpha = \pm 2^{\alpha_2}3^{\alpha_3}5^{\alpha_5}7^{\alpha_7} \cdots$, where all $\alpha_i$ are integers, only a finite number of which are non-zero. The \emph{$p$-adic absolute value} (also called the \emph{$p$-norm}) of a rational number $\alpha$ is
$$||\alpha||_p = \begin{cases}
p^{-\alpha_p}, & \text{if $\alpha \neq 0$,} \\
0, & \text{if $\alpha = 0$.}
\end{cases} $$
\end{definition}
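For example, $12 = 2^2\cdot 3$, so $||12||_2 = 2^{-2} = \frac{1}{4}$, $||12||_3 = 3^{-1} = \frac{1}{3}$ and $||12||_5 = 5^{0} = 1$, while $||\frac{1}{12}||_2 = 2^{2} = 4$; an integer divisible by a high power of $p$ is therefore $p$-adically small. The strong triangle inequality can be checked directly on such values, e.g. $||4+8||_2 = ||12||_2 = \frac{1}{4} \leq \max(||4||_2,||8||_2) = \frac{1}{4}$.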
\section{Ultrametric machines}
Ultrametric machines are defined in the same way as probabilistic machines, only the parameters called {\em probabilities of transition} from one state
to another one are real numbers between 0 and 1 in probabilistic machines, and they
are $p$-adic numbers called {\em amplitudes} in the ultrametric machines \cite{Rus12}. Formulas to calculate the amplitudes after one, two, three, $\cdots $ steps of computation are exactly the same as the formulas to calculate the probabilities in the probabilistic machines.
The definition of ultrametric machines requires that the input word on the input tape is followed by a special
marker denoting the end of the input data. While reading the end-marker the ultrametric machine makes the final change of the amplitudes of the involved states and additionally performs a {\em measurement} transforming all the amplitudes (which are $p$-adic numbers) into probabilities (the norms of the measured amplitudes).
An ultrametric machine has a threshold $\lambda$, and the input word $x$ is accepted if the measured probability of the accepting state exceeds the threshold.
It should be noted that if there is only one accepting state then the possible probabilities of acceptance are the discrete values $0, 1, p, p^{-1}, p^2, p^{-2}, p^3,
p^{-3}, \cdots$. Hence there is no natural counterpart of an {\em isolated cut-point} or {\em bounded error} for ultrametric machines. On the other hand, Paavo Turakainen proved in 1969 that in the case of probabilistic branchings there is no need to demand that the probabilities of all possible continuations total 1. He considered generalized probabilistic finite automata in which the ``probabilities'' can be arbitrary real numbers, and proved that the languages recognizable by these generalized probabilistic finite automata are the same as those recognizable by ordinary probabilistic finite automata. Hence we also allow the usage of all possible $p$-adic numbers in $p$-ultrametric machines.
On the other hand, $p_1$-ultrametric and $p_2$-ultrametric machines for recognition of the same language can differ very much.
\section{Results}
At first we consider the language $Q \subseteq \{0,1,2,3\}^*$. By $|x|_a$ we denote the number of occurrences of the symbol $a$ in the word $x$.
The language $Q$ is defined as
$$
Q = \{ x3y \mid x \in \{0,1,2\}^* \wedge y \in \{0,1,2\}^* \wedge |x|_0 = |y|_0 \wedge |x|_1 = |y|_1 \wedge |x|_2 = |y|_2 \}.
$$
\begin{theorem}
\label{Th7a}
(1) For arbitrary $\epsilon > 0$ there is a probabilistic 1-way Turing machine recognizing the language $Q$ with no more than $1$ reversal such that any word in $Q$ is accepted with probability $1$,
and any word not in $Q$ is rejected with probability no less than $1 - \epsilon$.\\
(2) For an arbitrary prime number $p \geq 5$ there is a $p$-ultrametric 1-way Turing machine with no more than $1$ reversal recognizing $Q$.\\
(3) For an arbitrary deterministic 1-way Turing machine recognizing the language $Q$ there is a constant $c > 0$ such that for infinitely many input words $x$ the number of reversals of the machine
exceeds $c \log |x|$.\\
(4) For arbitrary $c > 0$ there is a deterministic 1-way Turing machine recognizing the language $Q$ with no more than $c \log |x|$ reversals.
\end{theorem}
\begin{proof}
(1) At first the probabilistic machine picks a random number $i$ from the set $\{1, 2, \ldots, \lceil \frac{3}{\epsilon}\rceil \}$.
The work tape of the machine is used as a counter by writing natural numbers on it in unary notation. While the word $x$ is being read from the input tape, the number $1\cdot|x|_0 + i^1\cdot|x|_1 + i^2\cdot|x|_2$ is written on the work tape. Afterwards, while the word $y$ is being read, the number $1\cdot|y|_0 + i^1\cdot|y|_1 + i^2\cdot|y|_2$ is subtracted from the counter. A given word is accepted if and only if $1\cdot|x|_0 + i^1\cdot|x|_1 + i^2\cdot|x|_2 = 1\cdot|y|_0 + i^1\cdot|y|_1 + i^2\cdot|y|_2$.
The essence of this part of the proof is to observe that a system of linear equations
$$
\left\{
\begin{array}{lcl}
a_0 + ia_1 + i^2a_2 & = & 0\\
a_0 + ja_1 + j^2a_2 & = & 0\\
a_0 + ka_1 + k^2a_2 & = & 0
\end{array}
\right.
$$
is a system with a Vandermonde determinant, which is not equal to $0$ if $i, j, k$ are distinct. Hence, setting $a_s = |x|_s - |y|_s$ for $s \in \{0,1,2\}$, in our algorithm
$|x|_0 = |y|_0 \wedge |x|_1 = |y|_1 \wedge |x|_2 = |y|_2$ iff $1\cdot|x|_0 + i^1\cdot|x|_1 + i^2\cdot|x|_2 = 1\cdot|y|_0 + i^1\cdot|y|_1 + i^2\cdot|y|_2$ for
at least $3$ values of $i$.
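The counting idea behind part (1) can be illustrated by the following sketch (ours; it ignores the Turing-machine details such as the unary counter and the single reversal, and the function names are chosen for this illustration only):
\begin{verbatim}
import math, random

def weight(w, i):
    # the fingerprint 1*|w|_0 + i*|w|_1 + i^2*|w|_2 kept on the counter
    return w.count('0') + i * w.count('1') + i * i * w.count('2')

def probabilistic_check(x, y, eps):
    # pick i at random from {1, ..., ceil(3/eps)}; pairs with equal counts
    # of 0s, 1s and 2s always pass, other pairs pass for at most 2 values of i
    i = random.randint(1, math.ceil(3 / eps))
    return weight(x, i) == weight(y, i)

print(probabilistic_check("0012", "2100", 0.1))  # True: all three counts agree
print(probabilistic_check("0012", "2111", 0.1))  # usually False (only i = 1 passes)
\end{verbatim}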
\bigskip
(2) The $p$-ultrametric Turing machine picks a number $i$ from the set $\{1, 2, \ldots, 3+p\}$ and works (in parallel) with each of these $i$.
The work tape of the machine is used as a counter by writing natural numbers on it in unary notation. While the word $x$ is being read from the input tape, the number $1\cdot|x|_0 + i^1\cdot|x|_1 + i^2\cdot|x|_2$ is written on the work tape. Afterwards, while the word $y$ is being read, the number $1\cdot|y|_0 + i^1\cdot|y|_1 + i^2\cdot|y|_2$ is subtracted from the counter. There is a special state of the machine, and the machine starts with amplitude $(-3)$ for this state. In each branch the machine
adds $1$ to the amplitude of this state
if and only if $1\cdot|x|_0 + i^1\cdot|x|_1 + i^2\cdot|x|_2 = 1\cdot|y|_0 + i^1\cdot|y|_1 + i^2\cdot|y|_2$ for the chosen $i$. The properties of the Vandermonde determinant imply that either the equality holds for all $3+p$ values of $i$ and hence the norm of the amplitude of the special state does not exceed $p^{-1}$
(when the input word is in the language), or the amplitude equals $-3$, $-2$ or $-1$ and its norm equals $1$, since no more than $2$ distinct values of $i$ can give the equality
(when the input word is not in the language); here $p \geq 5$ guarantees that $p$ does not divide $1$, $2$ or $3$. This is a recognition of the language.
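For instance, for $p = 5$ the machine branches over the $3 + 5 = 8$ values $i = 1, \ldots, 8$. If the input word is in $Q$, every branch adds $1$ and the amplitude of the special state becomes $-3 + 8 = 5$ with norm $||5||_5 = 5^{-1}$; if it is not, at most $2$ branches add $1$, the amplitude is $-3$, $-2$ or $-1$, and its norm is $1$.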
\bigskip
(3) Let us denote by $\mu$ a deterministic 1-way Turing machine recognizing the language $Q$.
Let us at first consider some simple properties of $\mu$.
1) While the letter $3$ has not yet been read from the input tape, the machine must remember how many zeros, ones and twos it has read so far. More precisely: if two distinct words $x_1$ and $x_2$ not containing the letter $3$, with $|x_1|_0 \neq |x_2|_0$ or $|x_1|_1 \neq |x_2|_1$ or $|x_1|_2 \neq |x_2|_2$, have been read, then the configurations of the machine at the moments of reading the last letters of these words are different.
2) Let us consider a situation when not a single letter $3$ has been read yet, and let $l$ denote the length of some zone on the work tape. Suppose that at some moment the work-tape head is positioned over this zone while only letters different from $3$ are being read. Then after at most $a\cdot b\cdot l$ steps (where $a$ is the number of internal states and $b$ is the number of work-tape letters) the head will either make a reversal or leave the designated zone. (If this were not the case, two distinct words would lead to the same configuration, contradicting property 1).)
Let us suppose, for the sake of contradiction, that the machine $\mu$ makes $o(\log |w|)$ reversals on every input word $w$.
Let $n$ be a sufficiently large natural number; the exact requirements for it are specified later. Let us consider the following set $M$ of words in the alphabet $\{0, 1, 2, 3\}$. Each word in $M$ will have a single letter $3$ and $O(n^2)$ ones. The number of zeros and the number of twos will be between $1$ and $n$. Let us now define the form of the word in $M$ containing $n_0$ zeros and $n_2$ twos (there is going to be exactly one such word). As we do this, we will consider how it is processed by $\mu$.
Let us denote by $c(w, i)$ the number of times the work-tape head of $\mu$ crosses the point $i$ of the work tape while processing the word $w$, and let $f(n) = \max c(w, i)$, where the maximum is taken over all words $w$ of length at most $(2a^2\cdot b+1)n^2$ and all points $i$ of the tape. Let us note that $f(n)=o(\log n)$, which follows from the supposition made for the sake of contradiction.
The word starts with $n_0$ zeros. The part of the work tape which is traversed while reading these zeros will be called the zone of zeros. The length of the zone of zeros does not exceed $2a\cdot (n_0+f(n))$.
After reading the first $n_0$ zeros our word is constructed as follows. Whenever the machine is about to read a letter from the input tape while its work-tape head is positioned over the zone of zeros, that letter is set to be $1$.
If, however, the head is not positioned over the zone of zeros, the letter is set to be $2$. This continues until the number of twos in our word reaches $n_2$. After that we set the next letter to be $3$. Subsequently, while the head is positioned over the zone of zeros, the corresponding input letters are set to be $2$; otherwise they are set to be $0$. This continues until there is a total of $n_0$ zeros or $n_2$ twos after the letter $3$. Hereupon the remaining zeros (or twos) and ones are written at the end of the word so that it belongs to the language $Q$.
Let us note the following peculiarity of how these words are processed by the machine $\mu$. If the $n_0$ zeros in the second half of a word appear before the $n_2$ twos, then all of the zeros there are processed by $\mu$ while the head is not in the zone of zeros. However, if the $n_2$ twos in the second half of the word appear before the $n_0$ zeros, then all of the twos there are processed by $\mu$ while the head is in the zone of zeros. (Note that all of the twos of the first part of the word are processed while the head is not in the zone of zeros.)
Since the machine $\mu$ has property 2), there can be no more than $a\cdot b\cdot (2a\cdot (n_0+f(n)))$ consecutive ones after each letter $2$ before the letter $3$, provided that a reversal is not made at that place. Furthermore, each of the reversals can add no more than $a\cdot b\cdot (2a\cdot (n_0+f(n)))$ ones. This implies that the total number of ones before the letter $3$ can be at most $n_2+f(n)\cdot a\cdot b\cdot (2a\cdot (n_0+f(n)))$. For a sufficiently large $n$ this value does not exceed $(2a^2\cdot b+1)n^2$. Let us require $n$ to be this large. (Later we impose another restriction on $n$.) As a result, the length of the word does not exceed $O(n^2)$.
Let us divide $M$ into two subsets. A word $w$ is put in the subset $M'$ if its last zero appears before its last two; otherwise $w$ is put in $M''$.
We consider these two cases:
\begin{enumerate}[a)]
\item $|M'| \geq |M''|$;
\item $|M'| < |M''|$.
\end{enumerate}
Let us denote by $M_1$ the subset $M'$ in case a) and the subset $M''$ in case b). Then $$|M_1|\geq \frac{n^2}{2}.$$
In case a) we split $M_1$ into subsets according to the number of twos in a word. Let us denote by $M_2$ the largest of these subsets. In case b) we split $M_1$ into subsets according to the number of zeros in a word and, similarly, denote by $M_2$ the largest of these subsets. In either case $$|M_2|\geq \frac{n}{2}.$$
Let us split $M_2$ into subsets according to the crossing sequences at the two points bounding the zone of zeros from the left and from the right. Let us denote by $M_3$ the largest of these subsets. Then $$|M_3| \geq \frac{n}{2\cdot (2a+1)^{2\cdot f(n)}}.$$
Now we can make precise how large $n$ must be: it must be such that this estimate for $|M_3|$ is at least $2$. We fix two distinct words $w_1$ and $w_2$ in $M_3$.
We construct a word $w_3$ in the following way. The initial fragment of $w_3$ (up to the symbol $3$) copies $w_1$. In order to define the subsequent symbols of $w_3$ we simulate the processing of the words $w_1$ and $w_2$. We cut these words into pieces according to the position of the head, namely, whether it is over the zone of zeros or not. The number of these pieces is the same for $w_1$ and $w_2$ because the lengths of the crossing sequences are the same. In case a) we construct $w_3$ from the pieces of $w_1$ read in the zone of zeros (these pieces do not contain zeros) and the pieces of $w_2$ read outside the zone of zeros (all the zeros of the second half of $w_2$ are in these pieces). In case b) we construct $w_3$ from the pieces of $w_2$ read over the zone of zeros (all symbols $2$ of the second half of $w_2$ are in these pieces) and the pieces of $w_1$ read outside the zone of zeros (these pieces do not contain zeros).
Since the crossing sequences of $w_1$ and $w_2$ at the two points bounding the zone of zeros coincide, these pieces combine into a valid accepting computation of $\mu$ on $w_3$. However, the two halves of $w_3$ contain different numbers of zeros (in case a)) or different numbers of twos (in case b)), so $w_3 \notin Q$. Thus the machine accepts $w_3$ although the word is not in $Q$, a contradiction.
\qed
\end{proof}
\bigskip
Let $A = \{a,b,c,d,e,f,g,h,k,l,m,p,q,r,s,t,u,v\}$.
Now we consider a language $T$ in the alphabet $A \cup \{\#\}$ for which both the deterministic and the probabilistic reversal complexity are linear, but an ultrametric Turing machine can recognize this language with a constant number of reversals.
The language $T$ is defined as the set of all words $x$ in the input alphabet such that $x$ belongs either to all $9$ languages $T_i$ described below, or to exactly $6$ of them, or to exactly $3$ of them, or to none of them, where $proj_{ab}(w)$ denotes the word obtained from $w$ by erasing all letters except $a$ and $b$ (and similarly for the other pairs of letters), and
$$
T_1 = \{x\#y \mid x \in A^* \wedge y \in A^* \wedge proj_{ab}(x) = proj_{ab}(y)\},
$$
$$
T_2 = \{x\#y \mid x \in A^* \wedge y \in A^* \wedge proj_{cd}(x) = proj_{cd}(y)\},
$$
$$
T_3 = \{x\#y \mid x \in A^* \wedge y \in A^* \wedge proj_{ef}(x) = proj_{ef}(y)\},
$$
$$
T_4 = \{x\#y \mid x \in A^* \wedge y \in A^* \wedge proj_{gh}(x) = proj_{gh}(y)\},
$$
$$
T_5 = \{x\#y \mid x \in A^* \wedge y \in A^* \wedge proj_{kl}(x) = proj_{kl}(y)\},
$$
$$
T_6 = \{x\#y \mid x \in A^* \wedge y \in A^* \wedge proj_{mp}(x) = proj_{mp}(y)\},
$$
$$
T_7 = \{x\#y \mid x \in A^* \wedge y \in A^* \wedge proj_{qr}(x) = proj_{qr}(y)\},
$$
$$
T_8 = \{x\#y \mid x \in A^* \wedge y \in A^* \wedge proj_{st}(x) = proj_{st}(y)\},
$$
$$
T_9 = \{x\#y \mid x \in A^* \wedge y \in A^* \wedge proj_{uv}(x) = proj_{uv}(y)\}.
$$
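For example, for $x = acdb$ and $y = cab$ we have $proj_{ab}(x) = ab = proj_{ab}(y)$, so $x\#y \in T_1$, while $proj_{cd}(x) = cd \neq c = proj_{cd}(y)$, so $x\#y \notin T_2$; the projections onto all the remaining pairs of letters are empty on both sides, so $x\#y$ belongs to exactly $8$ of the languages $T_1, \ldots, T_9$ and therefore $x\#y \notin T$.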
\begin{thebibliography}{99}
\bibitem{Rus12}
R\= usi\c n\v s Freivalds.
{\em Ultrametric Automata and Turing Machines.}
Turing-100. The Alan Turing Centenary, EasyChair Proceedings, vol. 10, pp. 98--112, 2012.
\bibitem{AF86} %TODO this entry needs to be cited in the text
Farid M. Ablayev, R\= usi\c n\v s Freivalds.
Why Sometimes Probabilistic Algorithms Can Be More Effective.
{\em Lecture Notes in Computer Science,} 233 :1--14, 1986.
\bibitem{DD09}
Branko Dragovich and Alexandra Dragovich.
A $p$-adic Model
of DNA Sequence and Genetic Code.
{\em $p$-adic Numbers, Ultrametric Analysis, and Applications,}
vol. 1, no. 1, pp. 34--41, 2009.
\iffalse
\bibitem{F99} %TODO this entry needs to be cited in the text
R\= usi\c n\v s Freivalds.
How to Simulate Free Will in a Computational Device.
{\em ACM Computing Surveys,} vol. 31, No. 3es, p. 15, 1999.
\bibitem{F91} %TODO this entry needs to be cited in the text
R\= usi\c n\v s Freivalds.
Complexity of Probabilistic Versus Deterministic Automata.
{\em Lecture Notes in Computer Science,} 502:565-613, 1991.
\bibitem{FK94} %TODO this entry needs to be cited in the text
R\= usi\c n\v s Freivalds, Marek Karpinski.
Lower Space Bounds for Randomized Computation.
{\em Lecture Notes in Computer Science,} 820:580-592, 1994.
\fi
\bibitem{G83}
Fernando Quadros Gouv\^ea.
{\em $p$-adic Numbers: An Introduction} (Universitext),
Springer, 2nd edition, 1997.
\bibitem{H01}
Mika Hirvensalo.
{\em Quantum Computing.} Springer-Verlag, Berlin-Heidelberg, 2001.
\bibitem{Kh97}
Andrei Yu. Khrennikov.
{\em Non-Archimedean Analysis: Quantum Paradoxes, Dynamical Systems and
Biological Models,} Kluwer Academic Publishers, 1997.
\bibitem{K84}
Neal Koblitz.
{\em $p$-adic Numbers, $p$-adic Analysis, and Zeta-Functions} (Graduate Texts in Mathematics, vol. 58),
Springer, 2nd edition, 1984.
\bibitem{K06}
Sergey V. Kozyrev.
Ultrametric Analysis and Interbasin Kinetics.
{\em $p$-adic Mathematical Physics,} Proc. of the 2nd International Conference on $p$-adic
Mathematical Physics, American Institute Conference Proceedings,
vol. 826, pp. 121--128, 2006.
\bibitem{M00}
David A. Madore.
A first introduction to $p$-adic numbers.
http://www.madore.org/\~{}david/math/padics.pdf
\bibitem{VVZ95}
V. S. Vladimirov, I. V. Volovich and E. I. Zelenov.
{\em $p$-adic Analysis and Mathematical Physics.}
World Scientific, Singapore, 1995.
\end{thebibliography}
\end{document}