\documentclass[a4paper,12pt]{report}
\usepackage{alltt, fancyvrb, url}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{subfigure}
\usepackage{wrapfig}
\usepackage[]{algorithm2e}
\usepackage{titling}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{amsmath, stmaryrd, mathtools}
\usepackage{float}
\usepackage{hyperref}
\usepackage{listings}
\usepackage[english]{babel}
\usepackage[english]{cleveref}
\newtheorem{definition}{Definition}[section]
\newtheorem{theorem}{Theorem}[section]
\newtheorem{property}{Property}[section]
\newcommand{\R}{\mathbb{R}}
\newcommand{\Z}{\mathbb{Z}}
\newcommand{\F}{\mathbb{F}}
\newcommand{\dd}{\cdot}
\newcommand{\subtitle}[1]{
\posttitle{
\par\end{center}
\begin{center}\large#1\end{center}
\vskip0.5em}
}
\title{Introduction to Lattice-based Attacks}
\subtitle{Version 2}
\author{Giovanni Di Santi}
\date{}
\begin{document}
\maketitle
\tableofcontents
\chapter{Introduction}
Lattices are potent mathematical structures widely utilized in computer science and mathematics to solve a broad spectrum of problems.
In recent years, numerous post-quantum cryptographic candidates have been built on hard lattice problems,
for example, the \textbf{Learning With Errors} problem \cite{LWE}.
Additionally, lattice constructions are leveraged in cryptanalysis to compromise cryptographic schemes, typically through the use of
reduction algorithms such as \textbf{LLL} \cite{LLL}.
To comprehend the mathematics of lattices, a foundational understanding of linear algebra is essential. Therefore, the second chapter provides a summary of the
basic properties necessary for this understanding. We primarily referenced ``An Introduction to Mathematical Cryptography''\cite{mathcrypto14} to
elucidate these concepts, while also providing some geometric intuition behind certain operations.
Cryptographic protocols such as \textbf{RSA} and \textbf{ECDSA} are considered secure; however, various attacks can exploit a
relaxed model of each, e.g., what if some bits of the secret key are known? What if a portion of the message is known prior to encryption?
We introduce lattice-based attacks on these protocols and provide code in our repository \cite{repo} to verify the effectiveness of these attacks. The main reference for this discussion is
``Recovering cryptographic keys from partial information, by example''\cite{cryptoeprint:2020:1506}.
\chapter{Linear Algebra Background}
This chapter introduces key definitions and properties essential for understanding lattices. It progresses to cover the mathematics of lattices and related complex problems. By the end of this chapter, the reader should be well-equipped to follow the mathematical discussions in subsequent chapters.
\section{Vector Spaces}
\begin{definition}
\textbf{Vector space}.
\end{definition}
A \textit{vector space} $V$ is a subset of $\R^{m}$ that is closed under finite vector addition and scalar multiplication:
\begin{center}
$a_1v_1 + a_2v_2 \in V$ for any $v_1,v_2 \in V$ and $a_1,a_2 \in \R$
\end{center}
\begin{definition}
\textbf{Linear Combinations}
\end{definition}
Consider any vectors $v_1, v_2, \ldots, v_k \in V$. A \textit{linear combination} of these vectors is any vector expressed as:
\begin{center}
$\alpha_1v_1 + \alpha_2v_2 + \cdots + \alpha_kv_k$ where $\alpha_1, \ldots, \alpha_k \in \R$
\end{center}
\begin{definition}
\textbf{Linear Independence}
\end{definition}
A set of vectors $v_1, v_2, \ldots, v_k \in V$ is \textit{linearly independent} if the only solution to the equation
\begin{center}
$a_1v_1 + a_2v_2 + \cdots + a_kv_k = 0$
\end{center}
is $a_1 = a_2 = \cdots = a_k = 0$.
\begin{definition}
\textbf{Bases}
\end{definition}
A set of linearly independent vectors $b = (v_1, \ldots, v_n) \in V$ constitutes a \textit{basis} of $V$ if every vector $w \in V$ can be represented as:
\begin{center}
$w = a_1v_1 + a_2v_2 + \cdots + a_nv_n$
\end{center}
\begin{definition}
\textbf{Vector's Length}
\end{definition}
The \textit{Euclidean norm} or length of a vector $v = (x_1, x_2, \ldots, x_m)$ is given by:
\begin{center}
$\lVert v \rVert = \sqrt{x_1^2 + x_2^2 + \cdots + x_m^2}$
\end{center}
\begin{definition}
\textbf{Dot Product}
\end{definition}
Given vectors $v, w \in V \subset \R^m$ where $v = (x_1, x_2, \ldots, x_m)$ and $w = (y_1, y_2, \ldots, y_m)$, the \textit{dot product} of $v$ and $w$ is defined as:
\begin{center}
$v \cdot w = x_1y_1 + x_2y_2 + \cdots + x_my_m$
or
$v \cdot w = \lVert v \rVert \lVert w \rVert \cos{\theta}$
\end{center}
where $\theta$ is the angle between $v$ and $w$ when their starting points are positioned at the origin $O$.
\begin{figure}[!b]
\centering
\includegraphics[scale=0.2]{./img/dot_product.png}
\caption{Dot Product By 3Blue1Brown \cite{3b1b}}
\label{fig:dot_product}
\end{figure}
Geometrically, $v \cdot w$ represents the length of the projection of $w$ onto $v$ multiplied by the length of $v$, as illustrated in Figure \ref{fig:dot_product}.
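Both expressions compute the same quantity; a quick numerical check (a Python sketch using numpy) illustrates this:
\begin{lstlisting}[language=Python]
import numpy as np

v, w = np.array([4.0, 1.0]), np.array([2.0, 3.0])
# angle between v and w, recovered from the coordinate formula
theta = np.arccos(v @ w / (np.linalg.norm(v) * np.linalg.norm(w)))
print(v @ w)                                                  # 11.0
print(np.linalg.norm(v) * np.linalg.norm(w) * np.cos(theta))  # 11.0 (up to rounding)
\end{lstlisting}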
\begin{definition}
\textbf{Orthogonal Basis}
\end{definition}
An \textit{orthogonal basis} for a vector space $V$ consists of vectors $v_1, \ldots, v_m$ such that:
\begin{center}
$v_i \cdot v_j = 0$ for all $i \neq j$
\end{center}
If $\lVert v_i \rVert = 1$ for each $i$, then the basis is termed \textit{orthonormal}.
\begin{algorithm}[H]
\vspace*{5px}
$v_1^* = v_1$\;
\For{$i \gets 2$ \KwTo $n$}{
$\mu_{ij} = (v_i \cdot v_j^*) / \lVert v_j^* \rVert ^ 2$ for $1 \le j < i$\;
$v_i^* = v_i - \sum_{j=1}^{i-1} \mu_{ij}v_j^*$\;
}
\caption{Gram-Schmidt Algorithm}
\label{alg:gram_schmidt}
\end{algorithm}
Given a basis $b = (v_1, \ldots, v_n)$ of a vector space $V \subset \R^m$, the Gram-Schmidt algorithm generates an orthogonal basis $b^* = (v_1^*,\ldots,v_n^*)$. These two bases satisfy the property that Span$\{v_1,\ldots,v_i\}$ = Span$\{v_1^*,\ldots,v_i^*\}$ for all $i = 1,2,\ldots,n$.
\begin{figure}[htpb]
\centering
\includegraphics[width=0.5\textwidth]{./img/gram_schmidt.png}
\caption{Gram Schmidt Orthogonalization}
\label{fig:gram_schmidt}
\end{figure}
Using vectors $v_1=(4, 1)$ and $v_2=(2, 3)$ as a basis and applying the Gram-Schmidt process yields $u_1=v_1=(4, 1)$ and $u_2=(-10/17, 40/17)$, as depicted in Figure \ref{fig:gram_schmidt}.
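This example can be reproduced with a short Python sketch of Algorithm \ref{alg:gram_schmidt}, using exact rational arithmetic:
\begin{lstlisting}[language=Python]
from fractions import Fraction

def gram_schmidt(basis):
    """Orthogonalize a basis with exact rational arithmetic."""
    ortho = []
    for v in basis:
        v = [Fraction(x) for x in v]
        for u in ortho:
            # mu = (v . u) / ||u||^2
            mu = sum(a * b for a, b in zip(v, u)) / sum(b * b for b in u)
            v = [a - mu * b for a, b in zip(v, u)]
        ortho.append(v)
    return ortho

print(gram_schmidt([(4, 1), (2, 3)]))
# [[Fraction(4, 1), Fraction(1, 1)], [Fraction(-10, 17), Fraction(40, 17)]]
\end{lstlisting}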
\begin{definition}
\textbf{Determinant}
\end{definition}
The \textit{determinant} of a square matrix is a scalar value that characterizes the matrix and is defined by the following properties:
\begin{enumerate}
\item The determinant is linear in each row.
\item The determinant changes sign when two rows are swapped.
\item The determinant of the identity matrix is 1.
\end{enumerate}
A matrix is termed \textbf{singular} if its determinant is $0$, indicating it lacks an inverse.\\
Animations created by Grant Sanderson provide a fantastic visual understanding of the geometric intuition behind the \textit{determinant}\cite{determinant}.
\section{Lattices}
\begin{definition}
\textbf{Lattice}
\end{definition}
Let $v_1, \ldots, v_n \in \R^m$ ($m \ge n$) be linearly independent vectors. The \textit{lattice} $L$ spanned by $\{v_1, \ldots, v_n\}$ is the set of all integer linear combinations of these vectors:
\begin{center}
$L = \bigg\{\sum_{i=1}^{n} a_iv_i : a_i \in \Z \bigg\}$
\end{center}
If every $v_i$ ($i = 1, \ldots, n$) has integer coordinates, the lattice is termed an \textit{integral lattice}.
Figures \ref{fig:lattice0} and \ref{fig:lattice1} depict the same lattice $L$, first with basis $v=(3, 1)$, $w=(-1, 1)$ and then with a different basis $v', w'$.
\begin{figure}[H]
\begin{minipage}[b]{0.50\textwidth}
\includegraphics[width=\textwidth]{./img/lattice_b0.png}
\caption{Lattice $L$ spanned by $v, w$}
\label{fig:lattice0}
\end{minipage}
\hspace{\fill}
\begin{minipage}[b]{0.50\textwidth}
\includegraphics[width=\textwidth]{./img/lattice_b1.png}
\caption{Lattice $L$ spanned by $v', w'$}
\label{fig:lattice1}
\end{minipage}
\end{figure}
\section{Lattice Problems}
\subsection{SVP}
\textbf{The Shortest Vector Problem} (\texttt{SVP}): \textit{Find a nonzero vector $v \in L$ that minimizes the Euclidean norm $\lVert v \rVert$.}
Gauss developed an algorithm to determine an optimal basis for a two-dimensional lattice from any given basis. This method outputs the shortest nonzero vector in $L$, thereby solving the \texttt{SVP}.
\begin{figure}[!b]
\centering
\includegraphics[width=0.6\textwidth]{./img/gauss_svp.png}
\caption{Gauss reduction}
\label{fig:gauss_svp}
\end{figure}
\begin{algorithm}[H]
\vspace*{5px}
\While{true}{
\If{$\lVert v_2 \rVert < \lVert v_1 \rVert$}{
swap $v_1, v_2$\;
}
m = $\left\lfloor{(v_1 \cdot v_2)\, /\, {\lVert v_1 \rVert}^{2}}\right\rceil$\;
\If{$m = 0$}{
return $v_1, v_2$\;
}
$v_2 = v_2 - mv_1$\;
}
\caption{Gauss Basis Reduction}
\end{algorithm}
\textbf{Example}: In a lattice $L$ spanned by $v_1 = (8, 4), v_2 = (7, 5)$, applying the Gauss reduction algorithm yields $w_1 = (1, -1), w_2 = (6, 6)$, as shown in Figure \ref{fig:gauss_svp}. $w_1$ is the shortest nonzero vector in the lattice $L$.
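The example can be verified with a direct Python translation of the reduction loop (a sketch; here $m$ is computed with nearest-integer rounding, the standard choice for Lagrange-Gauss reduction):
\begin{lstlisting}[language=Python]
def gauss_reduce(v1, v2):
    """Lagrange-Gauss reduction of a two-dimensional lattice basis."""
    while True:
        if v2[0]**2 + v2[1]**2 < v1[0]**2 + v1[1]**2:
            v1, v2 = v2, v1                     # keep v1 the shorter vector
        dot = v1[0]*v2[0] + v1[1]*v2[1]
        norm2 = v1[0]**2 + v1[1]**2
        m = round(dot / norm2)                  # m = round((v1 . v2) / ||v1||^2)
        if m == 0:
            return v1, v2
        v2 = (v2[0] - m*v1[0], v2[1] - m*v1[1])

print(gauss_reduce((8, 4), (7, 5)))   # ((1, -1), (6, 6))
\end{lstlisting}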
The complexity of solving \texttt{SVP} increases with the dimension of the lattice, making it a challenging problem without a known polynomial-time solution.
\subsection{CVP}
\textbf{The Closest Vector Problem} (\texttt{CVP}): \textit{Given a vector $t \in \R^m$ that is not in $L$, find the vector $v \in L$ closest to $t$, minimizing the Euclidean norm $\lVert t - v \rVert$.}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.6\textwidth]{./img/cvp.png}
\caption{CVP}
\label{fig:cvp}
\end{figure}
\textbf{Example}: For a lattice $L$ spanned by $v_1 = (0, 1), v_2 = (2, 0)$, with target vector $t = (2.3, 2.4)$, the closest vector in the lattice $L$ is $w = (2, 2)$, as depicted in Figure \ref{fig:cvp}.
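Because this basis is orthogonal, Babai's rounding technique \cite{babai} solves the instance exactly: write $t$ in the given basis, round each coordinate to the nearest integer, and map back to the lattice. A minimal numpy sketch:
\begin{lstlisting}[language=Python]
import numpy as np

def babai_round(B, t):
    """Approximate CVP: round the coordinates of t in the basis B (rows)."""
    coords = np.linalg.solve(B.T, t)  # t = B^T . coords (columns are basis vectors)
    return np.rint(coords) @ B        # map the rounded coordinates back to L

B = np.array([[0.0, 1.0], [2.0, 0.0]])       # rows v1, v2
print(babai_round(B, np.array([2.3, 2.4])))  # [2. 2.]
\end{lstlisting}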
Both \textbf{CVP} and \textbf{SVP} are considered $\mathcal{NP}$-hard problems, and their complexity increases with the dimension of the lattice. They serve as primitives for creating trapdoor functions in cryptographic schemes.
Several algorithms can approximate solutions for both \textbf{SVP} and \textbf{CVP}, such as \textbf{LLL}\cite{LLL} and \textbf{Babai's algorithm}\cite{babai} respectively.
\chapter{LLL}
\section{Introduction}
The \textbf{Lenstra-Lenstra-Lovász (LLL)} algorithm is a polynomial-time method for computing a ``short'' basis of a lattice.
\begin{theorem}
\textbf{LLL}
\end{theorem}
Given a lattice $L \in \Z^n$ spanned by $B = \{v_1,\ldots,v_n\}$, the LLL algorithm produces a reduced basis $\{w_1, \ldots, w_n\}$ such that:
\begin{center}
$\lVert w_i \rVert \le 2^{\frac{n(n-1)}{4(n-i+1)}} \cdot \det(L)^{\frac{1}{n-i+1}}$ for $i=1,\ldots,n$
\end{center}
This process is executed in time polynomial in $n$ and the bit-size of the entries in the basis matrix $B$.\\
In the reduced basis, the first vector is short (within the proven bound of the shortest vector) and the lengths of the subsequent vectors grow. Moreover, the vectors are nearly orthogonal to one another, meaning each dot product $w_i \cdot w_j$ is close to zero.
\subsection*{Example}
Consider the following basis, represented in matrix form, that spans a lattice $L$:
\[
L =
\begin{pmatrix}
4 & 9 & 10\\
2 & 1 & 30\\
3 & 7 & 9
\end{pmatrix}
\]
After applying the LLL algorithm, the basis is transformed to:
\[
LLL(L) =
\begin{pmatrix}
-1 & -2 & -1\\
3 & -2 & 1\\
-1 & -1 & 5
\end{pmatrix}
\]
Here, the first row is the shortest vector in the lattice $L$, thus addressing the \textbf{SVP} problem. For higher dimensions, however, the LLL algorithm provides only an approximation to the \textbf{SVP}.
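This reduction can be reproduced in a few lines, assuming the fpylll Python bindings are available (row order and signs may differ between implementations):
\begin{lstlisting}[language=Python]
from fpylll import IntegerMatrix, LLL

A = IntegerMatrix.from_matrix([[4, 9, 10], [2, 1, 30], [3, 7, 9]])
LLL.reduction(A)  # reduces A in place
print(A)          # rows form a reduced basis of the lattice
\end{lstlisting}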
\section{Algorithm}
\begin{algorithm}[H]
\vspace*{5px}
Input: basis $\{v_1, \ldots, v_n\}$\;
$k = 2$\;
$v_1^* = v_1$\;
\While{$k \le n$}{
\For{$j = k - 1$ \KwTo $1$ step $-1$}{
$v_k = v_k - \left\lfloor\mu_{k, j}\right\rceil v_{j}$ [size-reduction]\;
}
\eIf{$\lVert v_k^* \rVert^2 \ge (\frac{3}{4} - \mu_{k, k-1}^2) \lVert v_{k-1}^* \rVert^2$ [Lovász condition]}{
$k = k + 1$\;
}{
Swap $v_{k-1}$ and $v_k$ [swap-step]\;
$k = \max(k - 1, 2)$\;
}
}
Output: reduced basis $\{v_1, \ldots, v_n\}$\;
Note: at each step, $v_1^*, \ldots, v_k^*$ is the orthogonalized set of vectors obtained with Gram-Schmidt (Algorithm \ref{alg:gram_schmidt}), and $\mu_{i,j} = (v_i \cdot v_j^*) / \lVert v_j^* \rVert^2$.
\caption{LLL reduction algorithm}
\end{algorithm}
The algorithm runs in polynomial time, executing the main loop at most
$\mathcal{O}(n^2\log n + n^2\log M)$ times, where $M = \max_i \lVert v_i \rVert$.
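For reference, here is a compact, unoptimized Python sketch of the algorithm above, using floating-point Gram-Schmidt; for anything beyond toy examples, a library such as fpylll or Sage's \texttt{Matrix.LLL()} should be used instead:
\begin{lstlisting}[language=Python]
def lll(basis, delta=0.75):
    """Textbook LLL reduction; suitable only for small examples."""
    basis = [[float(x) for x in v] for v in basis]
    n = len(basis)
    norm2 = lambda v: sum(x * x for x in v)

    def gram_schmidt():
        # Orthogonalized vectors and mu coefficients of the current basis
        ortho, mu = [], [[0.0] * n for _ in range(n)]
        for i, v in enumerate(basis):
            w = v[:]
            for j in range(i):
                mu[i][j] = sum(a * b for a, b in zip(v, ortho[j])) / norm2(ortho[j])
                w = [a - mu[i][j] * b for a, b in zip(w, ortho[j])]
            ortho.append(w)
        return ortho, mu

    k = 1                                   # 0-based index of v_2
    while k < n:
        for j in range(k - 1, -1, -1):      # size-reduction
            _, mu = gram_schmidt()
            m = round(mu[k][j])
            if m:
                basis[k] = [a - m * b for a, b in zip(basis[k], basis[j])]
        ortho, mu = gram_schmidt()
        if norm2(ortho[k]) >= (delta - mu[k][k - 1] ** 2) * norm2(ortho[k - 1]):
            k += 1                          # Lovász condition holds
        else:
            basis[k - 1], basis[k] = basis[k], basis[k - 1]   # swap-step
            k = max(k - 1, 1)
    return [[round(x) for x in v] for v in basis]

print(lll([[4, 9, 10], [2, 1, 30], [3, 7, 9]]))  # a reduced basis of the example lattice
\end{lstlisting}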
Kelby Ludwig elucidates the intuition behind each step of the algorithm \cite{lllintuition}, and Oded Regev provides a detailed explanation of its properties \cite{lllexplanation}.\\
Applications of the LLL algorithm include:
\begin{enumerate}
\item Factoring polynomials over the integers, e.g., factorizing $x^2 - 1$ into $(x + 1)(x - 1)$.
\item Solving Integer Programming, a renowned $\mathcal{NP}$-complete problem, in polynomial time when the number of variables is fixed.
\item Approximating solutions to the \textbf{CVP} and \textbf{SVP}, among other lattice-based challenges.
\item Cryptanalysis, specifically breaking cryptographic protocols.
\end{enumerate}
\chapter{RSA Cryptanalysis}
\section{RSA Introduction}
\subsection{Algorithm}
\textbf{RSA} is one of the earliest and most widely used asymmetric cryptosystems.
The steps to generate a public/private key for \textbf{RSA} are:
\begin{enumerate}
\item Fix $e = 65537$ or $e = 3$ (public).
\item Find two primes $p, q$ such that $\gcd(e, p-1) = 1$ and $\gcd(e, q-1) = 1$.
\item Compute $N = p \cdot q$ and $\phi(N) = (p-1)(q-1)$.
\item Calculate $d$ (private) as the multiplicative inverse of $e$ modulo $\phi(N)$.
\item $(N, e)$ is the public key, $(N, d)$ is the private key.
\end{enumerate}
To encrypt a message $m$ with \textbf{textbook RSA}:
\begin{center}
$c = m^e \mod N$
\end{center}
To decrypt a ciphertext $c$:
\begin{center}
$m = c^d \mod N$
\end{center}
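A toy end-to-end run of these steps in Python (with artificially small, hard-coded primes chosen for illustration; real keys use large random primes and padded messages):
\begin{lstlisting}[language=Python]
from math import gcd

e = 3
p, q = 1019, 2027                 # toy primes; gcd(e, p-1) = gcd(e, q-1) = 1
assert gcd(e, p - 1) == 1 and gcd(e, q - 1) == 1
N = p * q
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)               # modular inverse (Python 3.8+)

m = 123456                        # message, m < N
c = pow(m, e, N)                  # encryption: c = m^e mod N
assert pow(c, d, N) == m          # decryption: m = c^d mod N
\end{lstlisting}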
\subsection{Security}
RSA relies on the hardness of factoring $N$: no polynomial-time factoring algorithm is known, so the scheme is considered secure.
However, several attacks work against textbook RSA or flawed implementations:
\begin{itemize}
\item If $m < N^{\frac{1}{e}}$, then $m^e < N$ (the modular reduction never happens), so recovering $m$ only requires taking the $e$th root of $c$ over the
integers. This is one of the reasons messages are padded before encryption (see the sketch after this list).
\item If $q = p + x$ for a sufficiently small $x$, the factors of $N$ are easy to recover: compute $A = \lceil\sqrt{N}\rceil$ and try $p = A - x$ for increasing values of $x$ until $N \bmod p = 0$, at which point $q = N / p$.
\item It is possible to recover $d$ when it is too small (in practice, a small $d$ might be chosen to accelerate decryption) using Wiener's attack \cite{wiener}.
\item Other attacks can be found in the excellent survey by Dan Boneh \cite{boneh99twentyyears}.
\end{itemize}
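The following sketch demonstrates the first item: when $m^e < N$, the ciphertext is just $m^e$ over the integers, and an integer $e$th root recovers the message (the modulus here is an arbitrary 97-bit-scale stand-in, not a real RSA key):
\begin{lstlisting}[language=Python]
e, N = 3, 2**97   # small public exponent; stand-in for a 97-bit modulus
m = 31337         # short, unpadded message: m**3 < N
c = pow(m, e, N)  # the reduction mod N never happens

def iroot(n, k):
    """Integer k-th root by binary search."""
    lo, hi = 0, 1 << (n.bit_length() // k + 1)
    while lo < hi:
        mid = (lo + hi) // 2
        if mid ** k < n:
            lo = mid + 1
        else:
            hi = mid
    return lo

assert iroot(c, e) == m   # the e-th root of c recovers m
\end{lstlisting}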
\section{Lattices against RSA}
We begin by discussing the main concepts behind Coppersmith's attack \cite{coppersmith96} and the underlying mathematics. This attack is applicable to \textbf{RSA} when a small exponent $e$ is used, and it can also be used to factorize the modulus if partial information about $p$ or $q$ is known. In 2017, this method was utilized to exploit \href{https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-15361}{CVE-2017-15361}, also known as \textbf{ROCA} (Return Of Coppersmith's Attack) \cite{roca}.
\subsection{Mathematical Introduction}
Finding roots of a univariate polynomial over the integers is straightforward. However, finding roots of a modular polynomial, such as:
\begin{center}
$f(x) \equiv 0 \mod N$
\end{center}
poses significant challenges. Assume $N$ is an \textbf{RSA} modulus whose factorization is unknown. Consider a univariate integer polynomial $f(x)$ of degree $n$:
\begin{center}
$f(x) = x^n + a_{n-1}x^{n-1} + a_{n-2}x^{n-2} + \cdots + a_1x + a_0$
\end{center}
Coppersmith demonstrated that it is possible to recover the value $x_0$ such that $f(x_0) \equiv 0 \mod N$, with $x_0 < N^{\frac{1}{n}}$, in polynomial time using the following theorem:
\clearpage
\begin{theorem}
\textbf{Howgrave-Graham}
\end{theorem}
\textit{Let $g(x)$ be a univariate polynomial with $n$ monomials and let $m$ be a positive integer. Given a bound $X$, if the following conditions are satisfied:}
\begin{center}
\begin{eqnarray}
g(x_0) \equiv 0 \mod N^m, |x_0| \le X \label{eq:bound} \\
\lVert g(xX) \rVert < \frac{N^m}{\sqrt{n}} \label{eq:pol}
\end{eqnarray}
\end{center}
\textit{Then $g(x_0) = 0$ is also valid over the integers.}
This theorem enables the computation of the root of $f(x) \mod N$ by finding a polynomial $g(x)$ that shares the same root, but modulo $N^m$. If the conditions \ref{eq:bound} and \ref{eq:pol} are met, the root of $g(x)$ over the integers corresponds to the same root $x_0$ such that $f(x_0) \equiv 0 \mod N$.
The idea behind Howgrave-Graham's method is to construct $g$ as an integer combination of polynomials $p_i$ that all have $x_0$ as a root modulo $N^m$.
The significance of the \textbf{LLL} algorithm in this context is two-fold:
\begin{itemize}
\item It conducts \textbf{integer linear operations} on basis vectors, ensuring that even if the basis changes, it remains a linear combination of vectors retaining $x_0$ as a root modulo $N^m$.
\item If the lattice is properly constructed, the norm of the shortest vector in the reduced basis satisfies \ref{eq:pol}; the LLL theorem tells us in advance a bound on the length of the shortest vector it can find.
\end{itemize}
Such polynomials $p_i$ (denoted $g_{i,j}$ and $h_i$ below), sharing the root $x_0$ of $f$ modulo $N^m$, can be crafted as follows, where $\delta$ is the degree of $f$:
\begin{center}
\begin{eqnarray}
g_{i,j}(x) &= x^j \cdot N^i \cdot f^{m-i}(x) \text{ for } i = 0,\hdots,m-1,\hspace{2mm} j=0,\hdots,\delta-1 \label{eq:create_pol} \\
h_i(x) &= x^i \cdot f^m(x) \text{ for } i = 0,\hdots,t-1
\end{eqnarray}
\end{center}
Applying \textbf{LLL} to a lattice built from the coefficients of \ref{eq:create_pol}, we find a short vector $v$ whose entries are the coefficients of a polynomial $g(xX)$ satisfying \ref{eq:pol}. Recovering $g(x)$ from $v$, we can then compute the root $x_0$ over the integers.
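A small sympy sketch of the construction in \ref{eq:create_pol}; the helper name and the tiny parameters ($f$, $N$, $m$, $t$) are purely illustrative:
\begin{lstlisting}[language=Python]
from sympy import symbols, expand, Poly

x = symbols('x')

def coppersmith_polynomials(f, N, m, t):
    """Build the g_{i,j} and h_i families sharing f's root x0 modulo N^m."""
    delta = Poly(f, x).degree()
    g = [expand(x**j * N**i * f**(m - i))
         for i in range(m) for j in range(delta)]
    h = [expand(x**i * f**m) for i in range(t)]
    return g + h

# Tiny illustration: f(x) = x^2 + 3x + 5, N = 77, m = 2, t = 1
for p in coppersmith_polynomials(x**2 + 3*x + 5, 77, 2, 1):
    print(p)
\end{lstlisting}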
\subsection{Recovering Plaintext with Known MSBs of $m$}
We will employ a simplified version of Coppersmith's attack, effective when the root is approximately less than $N^{\frac{1}{6}}$.
\textbf{Problem Setup}. Consider an \textbf{RSA} modulus $N$ of 97 bits:
\begin{center}
$N=\texttt{0x1a34992f64135aedaa0fe2bfd}$ and $e = 3$.
\end{center}
Before encryption, the message $m$ undergoes padding:
\[
z = pad || m = \texttt{0xffffffffffffffffffffff} || m
\]
where $||$ denotes concatenation.
The resulting ciphertext is:
\[
c = z^e \mod N = \texttt{0x11701d5479bcdef207cba6937}
\]
Assuming the factorization of $N$ is unknown, our goal is to recover the message $m$. We know the padding and that the message $m$ is 2 bytes long, consisting only of ASCII letters, so $m < \texttt{0x7b7b}$.
\vspace*{10px}
Let's define:
\[
a = \texttt{0xffffffffffffffffffffff0000}
\]
as the known padded string that was encrypted.
\vspace*{10px}
This leads to the equation $c = (a + m)^3 \mod N$, for an unknown small $m$. We can then define $f(x) = (a + x)^3 - c$ and set up the problem to find a small root $m$ such that $f(m) \equiv 0 \mod N$:
\begin{align*}
f(x) = x^3 + \texttt{0x17d7f55d0c9dc5851af54957c}x^2 + \texttt{0x3cf96fed2ea927527a2ed329}x\\
+ \texttt{0x32882e547964b3bcf75d315e}
\end{align*}
\vspace*{10px}
\textbf{Lattice Construction}. Let the coefficients of $f(x)$ be $f(x) = x^3 + f_2x^2 + f_1x + f_0$ and $X = \texttt{0x7b7b}$ be the upper bound for the root $m$. We can construct the matrix:
\[
B =
\label{mat:lattice_rsa}
\begin{pmatrix}
X^3 & f_2X^2 & f_1X & f_0 \\
0 & NX^2 & 0 & 0 \\
0 & 0 & NX & 0 \\
0 & 0 & 0 & N
\end{pmatrix}
\]
Each row of this matrix is the coefficient vector of one of the polynomials $f(x), Nx^2, Nx,$ and $N$ (evaluated at $xX$); each of these polynomials vanishes modulo $N$ at $x = m$.
We apply Howgrave-Graham with $m=1$ (the $N^m$ parameter, not the message).
In this lattice, every vector has the form $v = (v_3X^3, v_2X^2, v_1X, v_0)$, since integer linear combinations of the rows preserve the factor $X^i$ in each column.
\textbf{Apply LLL}. We apply LLL and take the shortest vector of the reduced basis:
\[
\begin{split}
v = (-\texttt{0xd55d67baf71d}X^3, \texttt{0x3cf3cca3200a58bf}X^2, \\
\texttt{0x3854d44211e80f248449}X, -\texttt{0xec93bf51cf766f7b9f1e5e6})
\end{split}
\]
Dividing the $i$th coefficient of $v$ by $X^i$, we construct the polynomial $g$:
\[
\begin{split}
g(x) = -\texttt{0xd55d67baf71d}x^3 + \texttt{0x3cf3cca3200a58bf}x^2 \\
+ \texttt{0x3854d44211e80f248449}x -\texttt{0xec93bf51cf766f7b9f1e5e6}
\end{split}
\]
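The full computation can be reproduced with a short script (a sketch assuming the fpylll bindings; Sage's \texttt{Matrix(ZZ, ...).LLL()} works equally well). It rebuilds $f$, reduces the lattice $B$, rescales the shortest vector into $g$, and searches for the root below the bound $X$:
\begin{lstlisting}[language=Python]
from fpylll import IntegerMatrix, LLL

N = 0x1a34992f64135aedaa0fe2bfd
c = 0x11701d5479bcdef207cba6937
a = 0xffffffffffffffffffffff0000
X = 0x7b7b

# f(x) = (a + x)^3 - c = x^3 + f2*x^2 + f1*x + f0  (mod N)
f2, f1, f0 = 3 * a % N, 3 * a * a % N, (a**3 - c) % N

B = IntegerMatrix.from_matrix([
    [X**3, f2 * X**2, f1 * X, f0],
    [0,    N * X**2,  0,      0 ],
    [0,    0,         N * X,  0 ],
    [0,    0,         0,      N ],
])
LLL.reduction(B)

# First row holds the coefficients of g(xX); divide out the powers of X.
g = [B[0, i] // X**(3 - i) for i in range(4)]

# The root is small (below X), so a direct search suffices.
for x0 in range(X + 1):
    if sum(gi * x0**(3 - i) for i, gi in enumerate(g)) == 0:
        print("m =", hex(x0))
\end{lstlisting}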