Naming.

Selecting names for sections; adding corrections from the next-to-last weekly
meeting with Emanuele.
Michele Orrù 11 years ago
commit 5346711082

+ 2 - 2
book/conclusions.tex

@@ -1,7 +1,7 @@
-\chapter{Epilogue}
+\chapter{Conclusions}
 \noindent
 Every time we see a certificate, we get the idea that somebody is telling us the
-connection is safe. There is some outhority out there telling what to do.
+connection is safe. There is some authority out there telling us what to do.
 We should be thinking more about what these authorities are and what they are
 doing.
 

+ 29 - 26
book/dixon.tex

@@ -19,7 +19,7 @@ led to an entire family of algorithms, like \emph{Quadratic Sieve},
 The core idea is still to find a pair of perfect squares whose difference can
 factorize $N$, but maybe Fermat's hypothesis can be made weaker.
 
-\paragraph{Kraitchick} was the first one popularizing the idea the instead of
+\paragraph{Kraitchick} was the first to popularize the idea that instead of
 looking for integers $\angular{x, y}$ such that $x^2 -y^2 = N$ it is sufficient
 to look for \emph{multiples} of $N$:
 \begin{align}
@@ -59,7 +59,8 @@ This way the complexity of generating a new $x$ is dominated by
 \bigO{|\factorBase|}. Now that the right side of \ref{eq:dixon:fermat_revisited}
 has been satisfied, we have to select a subset of those $x$ so that their
 product can be seen as a square. Consider an \emph{exponent vector}
-$v_i = (\alpha_0, \alpha_1, \ldots, \alpha_r)$ associated with each $x_i$, where
+$v_i = (\alpha_0, \alpha_1, \ldots, \alpha_r)$ with $r = |\factorBase|$
+associated with each $x_i$, where
 \begin{align}
   \label{eq:dixon:alphas}
   \alpha_j = \begin{cases}
@@ -72,12 +73,15 @@ values of $x^2 -N$, so we are going to use $\alpha_0$ to indicate the sign. This
 benefit has a negligible cost: we have to add the non-prime $-1$ to our factor
 base $\factorBase$.
 
-Let now $\mathcal{M}$ be the rectangular matrix having per each $i$-th row the
-$v_i$ associated to $x_i$: this way each element $m_{ij}$ will be $v_i$'s
-$\alpha_j$. We are interested in finding set(s) of the subsequences of $x_i$
+Let now $M \in \mathbb{F}_2^{(f \times r)}$,
+for some $f \geq r$,
+be the rectangular matrix whose $i$-th row is the
+$v_i$ associated with $x_i$: this way each matrix element $m_{ij}$ will be the
+$j$-th component of $v_i$.
+We are interested in finding the subsets of the $x_i$
 whose product always has even powers (\ref{eq:dixon:fermat_revisited}).
 It turns out that this is equivalent to looking for the set of vectors
-$\{ w \mid wM = 0 \} = \ker(\mathcal{M})$ by definition of matrix multiplication
+$\{ w \mid wM = 0 \} = \ker(M)$ by definition of matrix multiplication
 in $\mathbb{F}_2$.
 
 
@@ -85,11 +89,11 @@ in $\mathbb{F}_2$.
 were actually used for a slightly different factorization method, employing
 continued fractions instead of the square difference polynomial. Dixon simply
 ported these to the square problem, achieving a probabilistic factorization
-method working at a computational cost asymptotically  best than all other ones
-previously described: \bigO{\beta(\log N \log \log N)^{\rfrac{1}{2}}} for some
-constant $\beta > 0$ \cite{dixon}.
+method working at a computational cost asymptotically better than all those
+previously described: \bigO{\exp \{\beta(\log N \log \log N )^{\rfrac{1}{2}}\}}
+for some constant $\beta > 0$ \cite{dixon}.
 
-\section{Reduction Procedure}
+\section{Breaching the kernel}
 
 The following reduction procedure, extracted from~\cite{morrison-brillhart}, is
 a forward part of the Gauss-Jordan elimination algorithm (carried out from right
@@ -109,7 +113,6 @@ At this point, we have all data structures needed:
 \\
 \\
 
-
 \begin{center}
   \emph{Reduction Procedure}
 \end{center}
@@ -130,8 +133,8 @@ At this point, we have all data structures needed:
 
 
 Algorithm \ref{alg:dixon:kernel} formalizes the concepts discussed so far, by
 presenting a function \texttt{ker}, discovering linear dependencies in any
-rectangular matrix $\mathcal{M} \in (\mathbb{F}_2)^{(f \times r)}$
-and storing dependencies into a \emph{history matrix} $\mathcal{H}$.
+rectangular matrix $M \in \mathbb{F}_2^{(f \times r)}$
+and storing dependencies into a \emph{history matrix} $H$.
 
 
 \begin{remark}
   We are proceeding from right to left in order to conform with
@@ -143,18 +146,18 @@ and storing dependencies into a \emph{history matrix} $\mathcal{H}$.
 \begin{algorithm}
   \caption{Reduction Procedure  \label{alg:dixon:kernel}}
   \begin{algorithmic}[1]
-    \Function{Ker}{$\mathcal{M}$}
-    \State $\mathcal{H} \gets \texttt{Id}(f \times f)$
-    \Comment the initial $\mathcal{H}$ is the identity matrix
+    \Function{Ker}{$M$}
+    \State $H \gets \texttt{Id}(f \times f)$
+    \Comment the initial $H$ is the identity matrix
 
 
     \For{$j = r \strong{ downto } 0$}
     \Comment reduce
       \For{$i=0 \strong{ to } f$}
-        \If{$\mathcal{M}_{i, j} = 1$}
+        \If{$M_{i, j} = 1$}
           \For{$i' = i + 1 \strong{ to } f$}
-            \If{$\mathcal{M}_{i', k} = 1$}
-              \State $\mathcal{M}_{i'} = \mathcal{M}_i \xor \mathcal{M}_{i'}$
-              \State $\mathcal{H}_{i'} = \mathcal{H}_i \xor \mathcal{H}_{i'}$
+            \If{$M_{i', j} = 1$}
+              \State $M_{i'} = M_i \xor M_{i'}$
+              \State $H_{i'} = H_i \xor H_{i'}$
             \EndIf
           \EndFor
           \State \strong{break}
@@ -164,8 +167,8 @@ and storing dependencies into a \emph{history matrix} $\mathcal{H}$.
 
 
     \For{$i = 0 \strong{ to } f$}
     \Comment yield linear dependencies
-      \If{$\mathcal{M}_i = (0, \ldots, 0)$}
-        \strong{yield} $\{\mu  \mid \mathcal{H}_{i,\mu} = 1\}$
+      \If{$M_i = (0, \ldots, 0)$}
+        \strong{yield} $\{\mu  \mid H_{i,\mu} = 1\}$
       \EndIf
     \EndFor
     \EndFunction
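As a hypothetical illustration (not the book's code), the reduction procedure of this hunk can be sketched in Python: $M$ is given as a list of $f$ bit-rows of width $r$ over $\mathbb{F}_2$, and $H$ is the history matrix recording which original rows were combined.

```python
def ker(M):
    """Sketch of the reduction procedure: yield the index sets of
    rows of M (over F_2) whose XOR is the zero vector."""
    f, r = len(M), len(M[0])
    # H starts as the f x f identity and tracks row combinations
    H = [[int(i == j) for j in range(f)] for i in range(f)]
    for j in reversed(range(r)):              # columns, right to left
        for i in range(f):                    # first row with a 1 in column j
            if M[i][j] == 1:
                for k in range(i + 1, f):     # reduce the rows below it
                    if M[k][j] == 1:
                        M[k] = [a ^ b for a, b in zip(M[i], M[k])]
                        H[k] = [a ^ b for a, b in zip(H[i], H[k])]
                break
    for i in range(f):                        # zero rows mark dependencies
        if not any(M[i]):
            yield {mu for mu in range(f) if H[i][mu] == 1}
```

For instance, the rows $(1,0,1)$, $(0,1,1)$, $(1,1,0)$ XOR to zero, and the sketch reports the single dependency $\{0, 1, 2\}$.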
@@ -226,12 +229,12 @@ $e^{\sqrt{\ln N \ln \ln N}}$.
     \Comment search for suitable pairs
     \State $x_i \getsRandom \naturalN_{< N}$
     \State $y_i \gets x_i^2 - N$
-    \State $v_i \gets \texttt{smooth}(y_i)$
+    \State $v_i \gets \textsc{smooth}(y_i)$
     \If{$v_i$} $i \gets i+1$ \EndIf
   \EndWhile
-  \State $\mathcal{M} \gets \texttt{matrix}(v_0, \ldots, v_f)$
+  \State $M \gets \texttt{matrix}(v_0, \ldots, v_f)$
   \For{$\lambda = \{\mu_0, \ldots, \mu_k\}
-    \strong{ in } \texttt{ker}(\mathcal{M})$}
+    \strong{ in } \textsc{ker}(M)$}
   \Comment get relations
     \State $x \gets \prod\limits_{\mu \in \lambda} x_\mu \pmod{N}$
     \State $y, r \gets \dsqrt{\prod\limits_{\mu \in \lambda} y_\mu \pmod{N}}$
@@ -245,7 +248,7 @@ $e^{\sqrt{\ln N \ln \ln N}}$.
   \end{algorithmic}
 \end{algorithm}
 
-\paragraph{Parallelization}
+\paragraph{Parallelism}
 
 
 Dixon's factorization is ideally suited to parallel implementation. Similarly to
 other methods like ECM and MPQS, treated in \cite{brent:parallel} \S 6.1,

+ 4 - 3
book/fermat.tex

@@ -155,9 +155,10 @@ the class \bigO{\log^2 N}, as we saw in section ~\ref{sec:preq:sqrt}.
 Computing $x^2$ separately would add an overhead of the same order of magnitude,
 \bigO{\log^2 N}, and thus result in a complete waste of resources.
 
-As a result of this, we advice the use of a strictly limited number of
-processors - like two or three - performing in parallel fermat's factorization
-method over different intervals.
+%%As a result of this, we advice the use of a strictly limited number of
+%%processors - like two or three - performing in parallel fermat's factorization
+%%method over different intervals.
+
 %%% Local Variables:
 %%% TeX-master: "question_authority.tex"
 %%% End:

+ 1 - 1
book/library.bib

@@ -226,7 +226,7 @@
 
 
 
 
 @article{morrison-brillhart,
-  title={A method of factoring and the factorization of $mathcal{F}_7$},
+  title={A method of factoring and the factorization of $\mathcal{F}_7$},
   author={Morrison, Michael A and Brillhart, John},
   journal={Mathematics of Computation},
   volume=29,

+ 1 - 1
book/math_prequisites.tex

@@ -45,7 +45,7 @@ i.e. $a \xor b$.
   An integer $a$ is said to be a \emph{quadratic residue} $\mod n$ if it is
   congruent to a perfect square $\!\mod n$:
   \begin{equation*}
-    x^2 = a \pmod{n}
+    x^2 \equiv a \pmod{n}
   \end{equation*}
 \end{definition*}
 

+ 17 - 14
book/pollard+1.tex

@@ -34,14 +34,15 @@ specific names:
 \end{tabular}
 \\
 \\
-For our purposes, $U_n$ is not necessary, and $\upsilon=1$\footnote{
+For our purposes, $U_n$ is not necessary, and $\upsilon=1$.\footnote{
   Williams justifies this choice stating that computing a $U_n$ sequence
   is far more computationally expensive than involving $V_n$; as for
   $\upsilon$, it simplifies Lehmer's theorem with no loss of
   generality. For further references,
-  see \cite{Williams:p+1} \S 3.}.
-In order to simplify any later theorem, we just omit it. Therefore, the latter
-expression becomes:
+  see \cite{Williams:p+1} \S 3.}
+In order to simplify any later theorem, we just omit $U_n$, and assume $\upsilon
+= 1$.
+Therefore, the latter expression becomes:
 \begin{equation}
   \label{eq:williams:ls}
   \begin{cases}
@@ -60,7 +61,7 @@ Three foundamental properties interpolate terms of Lucas Sequences:
 
 
 All these identities can be verified by direct substitution with
 \ref{eq:williams:ls}. What is interesting about the identities above is that we can
-exploit those to efficiently compute the product $V_{hk}$ if we are provided with
+exploit them to efficiently compute the product $V_{hk}$ if we are provided with
 $\angular{V_k, V_{k-1}}$ by considering the binary representation of the number
 $h$. In other words, we can consider each bit of $h$, starting from the least
 significant one: if it is zero, we use the multiplication formula
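As a hypothetical sketch of this binary strategy (assuming $Q = 1$, so $V_0 = 2$, $V_1 = \tau$, $V_{n+1} = \tau V_n - V_{n-1}$, and equivalently tracking the pair $\angular{V_k, V_{k+1}}$), the doubling identities $V_{2k} = V_k^2 - 2$ and $V_{2k+1} = V_k V_{k+1} - V_1$ advance the pair; this variant scans the bits of $h$ from the most significant instead:

```python
def lucas_v(tau, h, n):
    """Compute V_h mod n for the Lucas sequence with V_0 = 2, V_1 = tau
    and Q = 1, advancing the pair (V_k, V_{k+1}) over the bits of h."""
    v, w = 2, tau % n                  # (V_0, V_1)
    for bit in bin(h)[2:]:             # most significant bit first
        if bit == '1':
            v, w = (v * w - tau) % n, (w * w - 2) % n   # k -> 2k + 1
        else:
            v, w = (v * v - 2) % n, (v * w - tau) % n   # k -> 2k
    return v
```

Each step costs two modular multiplications, so $V_h$ is computed in \bigO{\log h} multiplications overall.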
@@ -95,7 +96,7 @@ significant one: if it is zero, we use the multiplication formula
 Finally, we need the following (\cite{Williams:p+1} \S 2):
 \begin{theorem*}[Lehmer]
   If $p$ is an odd prime and the Legendre symbol
-  $\legendre{\Delta}{p} = \varepsilon$, then:
+  $\varepsilon = \legendre{\Delta}{p}$, then:
   \begin{align*}
 %%  &  U_{(p - \varepsilon)m} \equiv 0 \pmod{p} \\
   &  V_{(p - \varepsilon)m} \equiv 2 \pmod{p}
@@ -106,18 +107,20 @@ Finally, we need the following (\cite{Williams:p+1} \S 2):
 
 
 \begin{remark}
   From number theory we know that
-  $\mathbb{P}\{\epsilon = -1\} = \rfrac{1}{2}$.
-  But, there is reason to restrict ourselves for $\legendre{\Delta}{p} = -1$.
-  What's woth noring, though, is that a $p-1$ factorization attempt would be
-  quite slow with respect to Pollard's $p-1$ method. As a consequence of this,
-  we and \cite{Williams:p+1} proceeded running pollard first????
+  $\mathbb{P}\{\varepsilon = -1\} = \rfrac{1}{2}$.
+  There is no reason to restrict ourselves to
+  $\legendre{\Delta}{p} = -1$.
+  In the alternative case $\varepsilon = 1$, the factorization yields the
+  same factors as Pollard's $p-1$ method, but more slowly.
+  For this reason, when looking for a $p-1$ factorization, it is advisable
+  first to attempt the attack presented in the previous chapter \cite{Williams:p+1}.
 \end{remark}
 
 
 \section{Dressing up}
 
 At this point the factorization proceeds just by substituting the
-exponentiation and Fermat's theorem with lucas sequences and Lehmer's theorem
+exponentiation and Fermat's theorem with Lucas sequences and Lehmer's theorem
 introduced in the preceding section. If we find a $Q$ satisfying $p+1 \mid Q
 \text{ or } p-1 \mid Q$ then, due to Lehmer's theorem, $p \mid V_Q -2$ and thus
 $\gcd(V_Q -2, N)$ is a non-trivial divisor of $N$.
@@ -125,7 +128,7 @@ $\gcd(V_Q -2, N)$ is a non-trial divisor of $N$.
 \begin{enumerate}[(i)]
 \item take a random, initial $\tau = V_1$; now let the \emph{base} be
   $\angular{V_0, V_1}$.
-\item take the $i$-th prime in $\mathcal{P}$, starting from $0$, and call it be
+\item take the $i$-th prime in $\mathcal{P}$, starting from $0$, and call it
   $p_i$;
 \item assuming the current state is $\angular{V_k, V_{k-1}}$, compute the
   successive terms of the sequence using the addition and multiplication formulas,
@@ -140,7 +143,7 @@ $\gcd(V_Q -2, N)$ is a non-trial divisor of $N$.
 If so, then we have finished, since $g$ itself and $\frac{N}{g}$
 are the two primes factorizing the public modulus.
 Otherwise, if $g = 1$ we go back to (ii), since $p-1 \nmid Q$ yet;
-if $g = N$ start back from scratch, as $pq \mid g_i$.
+if $g = N$ start back from scratch, as $pq \mid g$.
 %% riesel actually does not examine this case, strangely. However, it seems to
 %% be fairly probable that.
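Putting steps (i)-(iv) together, a hypothetical sketch of the $p+1$ method (not the book's engine) might look like the following; $\tau$, the smoothness bound, and the helper `v_mult` are illustrative, and the composition identity $V_{mn}(\tau) = V_m(V_n(\tau))$, valid for $Q = 1$, replaces carrying the explicit state $\angular{V_k, V_{k-1}}$:

```python
from math import gcd

def williams_p1(N, tau=9, bound=1000):
    """Sketch of Williams' p+1 method: raise the Lucas index by each
    prime power below the bound and test g = gcd(V - 2, N)."""
    def v_mult(v1, h, n):
        # V_h mod n for the Lucas sequence with V_1 = v1 and Q = 1
        a, b = 2, v1 % n                       # (V_0, V_1)
        for bit in bin(h)[2:]:
            if bit == '1':
                a, b = (a * b - v1) % n, (b * b - 2) % n
            else:
                a, b = (a * a - 2) % n, (a * b - v1) % n
        return a
    primes = [p for p in range(2, bound)
              if all(p % d for d in range(2, int(p ** 0.5) + 1))]
    v = tau % N                                # V_1 of the outer sequence
    for p in primes:
        e = 1
        while p ** (e + 1) <= bound:           # prime powers up to the bound
            e += 1
        for _ in range(e):
            v = v_mult(v, p, N)                # index k -> k * p
            g = gcd(v - 2, N)
            if 1 < g < N:
                return g
    return None                    # no factor found: retry with another tau
```

For instance, with $\tau = 9$ (so $\Delta = 77$ and $\legendre{77}{10079} = -1$) the sketch recovers $p = 10079$ from $N = 10079 \cdot 10103$, since $p + 1 = 2^5 \cdot 3^2 \cdot 5 \cdot 7$ is smooth with respect to the bound.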
 
 

+ 11 - 1
book/pollardrho.tex

@@ -169,7 +169,7 @@ Since any of the two primes factoring $N$ is bounded above by $\sqrt{N}$, we
 will find a periodic sequence, and thus a factor, in time \bigO{\sqrt[4]{N}}.
 
 
-\section{A Computer program for Pollard's $\rho$ method}
+\section{An Implementation Perspective}
 
 
 The initial algorithm described by Pollard \cite{pollardMC}, reported
 immediately below, looks for the pair $\angular{x_i, x_{2i}}$ such that
@@ -205,6 +205,16 @@ algorithm over the accumulated product to save some computation cycles, just as
 we saw in section~\ref{sec:pollard-1:implementing}. The next code fragment
 adopts this trick together with Brent's cycle-finding variant:
 
+\paragraph{Parallelism}
+Unfortunately, a parallel implementation of the $\rho$ algorithm would not
+provide a linear speedup.
+
+The computation of the $x_i$ sequence is intrinsically serial; the only
+plausible approach to parallelism would be to try several different pseudorandom
+sequences, in which case $m$ different machines processing $m$ different
+sequences in parallel would yield no more than a \bigO{\sqrt{m}}
+speedup (\cite{brent:parallel} \S 3).
+
 \begin{algorithm}
   \caption{Pollard-Brent's factorization \label{alg:pollardrho}}
   \begin{algorithmic}[1]

+ 1 - 1
book/question_authority.tex

@@ -58,7 +58,7 @@
 \newcommand{\getsRandom}{\xleftarrow{r}}
 \newcommand{\xor}{\oplus}
 \newcommand{\legendre}[2]{({#1}/{#2})}
-\newcommand{\PKArg}{\textit{PubKey:}$\angular{N, e}$}
+\newcommand{\PKArg}{${N, e}$}
 \theoremstyle{plain}
 \newtheorem*{theorem*}{Theorem}
 \newtheorem*{definition*}{Definition}

+ 2 - 2
book/ssl_prequisites.tex

@@ -160,7 +160,7 @@ such as HTTP have a common set of standard messages.
 The {Padding} section contains information about the padding algorithm
 adopted, and the padding size.
 
-\section{What's inside a certificate \label{sec:ssl:x509}}
+\section{What is inside a certificate \label{sec:ssl:x509}}
 SSL certificates employ the X.509 PKI standard, which specifies, among other
 things, the format for revocation lists, and certificate path validation
 algorithms.
@@ -220,7 +220,7 @@ browser now mitigates its spectrum of action.
 Even if TLS 1.1 and TLS 1.2 are considered safe as of today, attacks such as
 CRIME, and lately BREACH, constitute a new and valid instance of threat for HTTP
 compression mechanisms. However, as their premises go beyond the scope of this
-document, those attacks have not been analyzed. For forther informations, see
+document, these attacks have not been analyzed. For further information, see
 \url{http://breachattack.com/}.
 
 
 %%% Local Variables:

+ 21 - 12
book/wiener.tex

@@ -31,9 +31,17 @@ Consider now any \emph{finite continued fraction}, conveniently represented with
 the sequence
 $\angular{a_0, a_1, a_2, a_3,  \ \ldots, a_n}$.
 Any number $x \in \mathbb{Q}$ can be represented as a finite continued fraction,
-and for each $i < n$ there exists a fraction $\rfrac{h_i}{k_i}$ approximating
+and for each $i < n$ there exists a fraction $\rfrac{h}{k}$ approximating
 $x$.
-By definition, each new approximation is recursively defined as:
+By definition, each new approximation
+$$
+\begin{bmatrix}
+  h_i \\ k_i
+\end{bmatrix}
+=
+\angular{a_0, a_1, \ \ldots, a_i}
+$$
+is recursively defined as:
 
 
 \begin{align}
   \label{eq:wiener:cf}
@@ -61,8 +69,9 @@ an underestimate of another one $f = \frac{\theta}{\kappa}$, i.e.
 \begin{align}
   \abs{f - f'} = \delta
 \end{align}
-then for a $\delta$ sufficiently small, $f$ is \emph{equal} to the $n$-th
-continued fraction expansion of $f'$ (\cite{smeets} \S 2). Formally,
+then for a $\delta$ sufficiently small, $f'$ is \emph{equal} to the $n$-th
+continued fraction expansion of $f$, for some $n \geq 0$ (\cite{smeets} \S 2).
+Formally,
 
 
 \begin{theorem*}[Legendre]
   If $f = \frac{\theta}{\kappa}$,  $f' = \frac{\theta'}{\kappa'}$ and
@@ -74,11 +83,11 @@ continued fraction expansion of $f'$ (\cite{smeets} \S 2). Formally,
     \text{ implies that }
     \quad
     \begin{bmatrix}
-      \theta \\ \kappa
+      \theta' \\ \kappa'
     \end{bmatrix}
     =
     \begin{bmatrix}
-      \theta'_n \\ \kappa'_n
+      \theta_n \\ \kappa_n
     \end{bmatrix},
     \quad
     \text{ for some } n \geq 0
@@ -99,7 +108,7 @@ The \emph{continued fraction algorithm}  is the following:
   \item check whether $\rfrac{h_i}{k_i}$ is equal to $f$
 \end{enumerate}
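The three steps above can be sketched in Python as an illustrative fragment (not the book's engine), computing the partial quotients $a_i$ and the convergents $\rfrac{h_i}{k_i}$ with the standard recurrence:

```python
def convergents(num, den):
    """Yield the convergents h_i/k_i of the continued fraction of
    num/den, via h_i = a_i h_{i-1} + h_{i-2} (and the same for k_i)."""
    h0, k0, h1, k1 = 0, 1, 1, 0          # seeds h_{-2}/k_{-2}, h_{-1}/k_{-1}
    while den:
        a, rem = divmod(num, den)        # next partial quotient a_i
        h0, k0, h1, k1 = h1, k1, a * h1 + h0, a * k1 + k0
        yield h1, k1                     # the i-th approximation of num/den
        num, den = den, rem
```

For example, $\rfrac{649}{200} = \angular{3, 4, 12, 4}$ yields the convergents $3/1$, $13/4$, $159/49$, $649/200$; each convergent is then tested against $f$.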
 
 
-\section{Constructing the attack}
+\section{Continued Fraction Algorithm applied to RSA}
 
 
 As we saw in~\ref{sec:preq:rsa}, by construction the two exponents are such that
 $ed \equiv 1 \pmod{\varphi(N)}$. This implies that there exists a
@@ -146,7 +155,7 @@ The above equation is constructed so that the $x$ coefficient is the sum of the
 two primes, while the constant term $N$ is the product of the two. Therefore, if
 $\eulerphi{N}$ has been correctly guessed, the two roots will be $p$ and $q$.
 
-\section{Again on the engine™}
+\section{An Implementation Perspective}
 
 
 The algorithm is pretty straightforward by itself: we just need to apply the
 definitions provided in~\ref{eq:wiener:cf} and test each convergent until
@@ -205,10 +214,10 @@ convergent, we provide an algorithm for attacking the RSA cipher via Wiener:
 \paragraph{Parallelism}
 Parallel implementation of this specific version of Wiener's Attack is
 difficult, because the inner loop is inherently serial. At best, parallelism
-could be employed to construct a constructor process, building the $f_n$
-convergents, and consumers receiving each of those and processing them
-seperatedly. The first one arriving to a solution, broadcasts a stop message to
-the others.
+could be employed to split the task into a \emph{constructor} process, building
+the $f_n$ convergents, and many \emph{consumers}, each receiving a convergent to
+be processed separately.
+The first one to arrive at a solution broadcasts a stop message to the others.
 
 
 %%% Local Variables:
 %%% mode: latex