'''Ideal lattices''' are a special class of lattices and a generalization of [[cyclic lattice]]s.<ref name="Lyubattacks2008">
Vadim Lyubashevsky. [http://cseweb.ucsd.edu/users/vlyubash/papers/idlatticeconf.pdf Lattice-Based Identification Schemes Secure Under Active Attacks]. In ''Proceedings of the 11th International Conference on Practice and Theory in [[Public-key cryptography|Public Key Cryptography]]'', 2008.</ref> Ideal lattices naturally occur in many parts of [[number theory]], but also in other areas. In particular, they have a significant place in [[cryptography]]. Micciancio defined ideal lattices as a generalization of cyclic lattices. They can be used in cryptosystems to decrease by a square root the number of parameters necessary to describe a lattice, making them more efficient. Ideal lattices are a relatively new concept, but similar lattice classes have been used for a long time. For example, cyclic lattices, a special case of ideal lattices, are used in [[NTRUEncrypt]] and [[NTRUSign]].
 
==Introduction==
In general terms, ideal lattices are lattices corresponding to [[Ideal (ring theory)|ideals]] in [[Ring (mathematics)|rings]] of the form <math> \mathbb{Z}[x]/\langle f \rangle </math> for some [[irreducible polynomial]] <math> f </math> of degree <math> n </math>.<ref name="Lyubattacks2008"/> All of the definitions of ''ideal lattices'' from prior work are instances of the following general notion: let <math> R </math> be a [[Ring (mathematics)|ring]] whose [[Ring (mathematics)|additive group]] is [[Group isomorphism|isomorphic]] to <math> \mathbb{Z}^n </math> (i.e., it is a free <math> \mathbb{Z} </math>-module of rank <math> n </math>), and let <math> \sigma </math> be an additive [[isomorphism]] mapping <math> R </math> to some lattice <math> \sigma(R) </math> in an <math> n</math>-dimensional real [[vector space]] (e.g., <math> \mathbb{R}^n </math>). The family of ''ideal lattices'' for the ring <math> R </math> under the embedding <math> \sigma </math> is the set of all lattices <math> \sigma(I) </math>, where <math> I </math> is an [[Ideal (ring theory)|ideal]] in <math> R. </math><ref name="LyubPeiReg2010">Vadim Lyubashevsky, Chris Peikert and Oded Regev. [http://www.springerlink.com/content/p0k0124216567122/ On Ideal Lattices and Learning with Errors over Rings]. In Eurocrypt 2010, ''Lecture Notes in Computer Science'', 2010.</ref>
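As a small concrete illustration (not taken from the cited sources; the generator <math> g = 2 + x </math> is chosen only for this example), take <math> R = \mathbb{Z}[x]/\langle x^2+1 \rangle </math>, whose additive group is identified with <math> \mathbb{Z}^2 </math> via coefficient vectors. The ideal generated by <math> g </math> then maps to a sublattice whose index in <math> \mathbb{Z}^2 </math> equals the norm of the corresponding Gaussian integer <math> 2+i </math>:

```python
import numpy as np

# Ideal generated by g = 2 + x in Z[x]/(x^2 + 1), i.e. the Gaussian integer 2 + i.
# As a Z-module the ideal is spanned by g and x*g; reducing x*g = x^2 + 2x
# modulo x^2 + 1 gives -1 + 2x. Columns below are the coefficient vectors.
B = np.array([[2, -1],
              [1,  2]])

# The sublattice has index |det B| in Z^2, matching the ideal norm N(2+i) = 5.
assert round(abs(np.linalg.det(B))) == 5
```

The same recipe (span of <math> g, xg, \dots, x^{n-1}g </math> reduced modulo <math> f </math>) produces a lattice basis for any principal ideal.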
 
==Definition==
 
===Notation===
Let <math> f \in \mathbb{Z}[x]</math> be a [[monic polynomial]] of degree <math> n </math>, and consider the [[quotient ring]] <math> \mathbb{Z}[x]/\langle f \rangle </math>.
 
Using the standard set of representatives <math> \lbrace g \bmod f : g \in \mathbb{Z}[x] \rbrace </math>, and the identification of polynomials with their coefficient vectors, the [[quotient ring]] <math> \mathbb{Z}[x]/\langle f \rangle </math> is [[Group isomorphism|isomorphic]] (as an [[Ring (mathematics)|additive group]]) to the [[integer lattice]] <math> \mathbb{Z}^n</math>, and any [[Ideal (ring theory)|ideal]] <math> I \subseteq \mathbb{Z}[x]/\langle f \rangle </math> defines a corresponding integer sublattice <math> \mathcal{L}(I)\subseteq \mathbb{Z}^n</math>.
 
An '''ideal lattice''' is an [[integer lattice]] <math> \mathcal{L}(B)\subseteq \mathbb{Z}^n</math> such that <math>B = \lbrace g \ \bmod\ f : g \in I \rbrace </math> for some monic polynomial <math> f </math> of degree <math> n </math> and [[Ideal (ring theory)|ideal]] <math> I \subseteq \mathbb{Z}[x]/\langle f \rangle </math>.
 
===Related properties===
It turns out that the relevant properties of <math>f</math> for the resulting function to be collision resistant are:
* <math>f</math> should be [[Irreducible polynomial|irreducible]].
* the ring norm <math>\lVert g \rVert_f</math> is not much bigger than <math>\lVert g \rVert_\infty</math> for any polynomial <math>g</math>, in a quantitative sense.
 
The first property implies that every ideal of the [[Ring (mathematics)|ring]] <math> \mathbb{Z}[x]/\langle f \rangle </math> defines a full-rank lattice in <math> \mathbb{Z}^n </math> and plays a fundamental role in proofs.
 
'''Lemma:''' Every [[Ideal (ring theory)|ideal]] <math> I </math> of <math> \mathbb{Z}[x]/\langle f \rangle </math>, where <math> f </math> is a monic, [[Irreducible polynomial|irreducible]] integer polynomial of degree <math> n </math>, is isomorphic to a full-rank lattice in <math> \mathbb{Z}^n </math>.
 
Ding and Lindner<ref name="DinLin2007">Jintai Ding and Richard Lindner. [http://eprint.iacr.org/2007/322.pdf Identifying Ideal Lattices]. In ''Cryptology ePrint Archive, Report 2007/322'', 2007.</ref> gave evidence that distinguishing ''ideal lattices'' from general ones can be done in polynomial time, and showed that in practice randomly chosen lattices are never ideal. They only considered the case where the lattice has full rank, i.e. the basis consists of <math> n </math> [[Linear independence|linearly independent vectors]]. This is not a fundamental restriction, because Lyubashevsky and Micciancio have shown that if a lattice is ideal with respect to an irreducible monic polynomial, then it has full rank, as given in the above lemma.
 
'''Algorithm:''' Identifying ideal lattices with full rank bases
 
''Data:'' A full-rank basis <math> B \in \mathbb{Z}^{(n,n)}</math> <br />
''Result:'' '''true''' and <math> \textbf{q} </math>, if <math> B </math> spans an ideal lattice with respect to <math> \textbf{q} </math>, otherwise '''false'''.
 
# Transform <math> B </math> into [[Hermite normal form|HNF]]
# Calculate <math> A = {\rm adj}(B) </math>, <math> d = \det(B) </math>, and <math> z = B_{(n,n)} </math>
# Calculate the product <math> P = AMB \bmod \ d </math>
# '''if''' ''only the last column of P is non-zero'' '''then'''
# set <math> c = P_{(\centerdot,n)} </math> to equal this column
# '''else return false'''
# '''if''' <math> z \mid c_i </math> for <math> i = 1, \dots , n </math> '''then'''
# use [[Chinese remainder theorem|CRT]] to find <math> q^ \ast \equiv \ (c/z) \bmod \ (d/z) </math> and <math> q^ \ast \equiv 0 \bmod \ z </math>
# '''else return false'''
# '''if''' <math> Bq^ \ast \equiv 0 \bmod \ (d/z) </math> '''then'''
# '''return true''', <math> q = Bq^ \ast /d </math>
# '''else return false'''
 
where the matrix M is
 
:<math> M = \begin{pmatrix}
0 & \cdots & 0 & 0 \\
 &  &  & \vdots \\
 & I_{n-1} &  & \vdots \\
 &  &  & 0
\end{pmatrix}</math>
 
Using this algorithm, it can be seen that many lattices are not ''ideal lattices''. For example let <math> n = 2 </math> and <math> k \in \mathbb{Z} \setminus \lbrace 0, \pm 1 \rbrace </math>, then
:<math> B_1 = \begin{pmatrix}
k & 0 \\
0 & 1
\end{pmatrix}</math>
is ideal, but
:<math> B_2 = \begin{pmatrix}
1 & 0 \\
0 & k
\end{pmatrix}</math>
is not. <math> B_2 </math> with <math> k = 2 </math> is an example given by Lyubashevsky and Micciancio.<ref name="LyubMic2006">Lyubashevsky, V., Micciancio, D. [http://cseweb.ucsd.edu/users/vlyubash/papers/generalknapsackfull.pdf Generalized compact knapsacks are collision resistant]. In ''Bugliesi, M., Preneel, B., Sassone, V., Wegener, I. (eds.) ICALP 2006. LNCS, vol. 4052, pp. 144–155. Springer, Heidelberg (2006)''.</ref>
 
Performing the algorithm on <math> B_2 </math>: the matrix is already in [[Hermite normal form|Hermite Normal Form]], so the first step is not needed. The determinant is <math> d = 2 </math>, the [[adjugate matrix]]
:<math> A = \begin{pmatrix}
2 & 0 \\
0 & 1
\end{pmatrix},</math>
:<math> M = \begin{pmatrix}
0 & 0 \\
1 & 0
\end{pmatrix}</math>
and finally, the product <math> P = AMB \bmod d </math> is
:<math> P = \begin{pmatrix}
0 & 0 \\
1 & 0
\end{pmatrix}.</math>
 
At this point the algorithm stops and returns '''false''', because all but the last column of <math> P </math> would have to be zero for <math> B </math> to span an ''ideal lattice''.
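The algorithm above can be sketched in a few lines of Python. This is a rough toy-scale implementation, not Ding and Lindner's code: it assumes the input basis is already in Hermite normal form, reads the condition in step 4 as "all columns except the last vanish", and does the CRT step by brute force, which is fine only for tiny determinants:

```python
import numpy as np

def is_ideal_lattice(B):
    """Sketch of the identification algorithm above.
    Assumes B is a full-rank integer basis already in Hermite normal form."""
    n = B.shape[0]
    d = round(np.linalg.det(B))
    A = np.rint(d * np.linalg.inv(B)).astype(np.int64)   # adjugate: adj(B) = det(B) * B^{-1}
    z = int(B[n - 1, n - 1])
    M = np.zeros((n, n), dtype=np.int64)                 # zero top row, I_{n-1} below
    M[1:, :n - 1] = np.eye(n - 1, dtype=np.int64)
    P = (A @ M @ B) % d
    if np.any(P[:, :n - 1]):        # all but the last column of P must be zero
        return False, None
    c = P[:, n - 1]
    if np.any(c % z):               # z must divide every c_i
        return False, None
    m = d // z                      # z divides d: both lie on the HNF diagonal
    qstar = np.zeros(n, dtype=np.int64)
    for i in range(n):              # brute-force CRT: q* = c/z (mod d/z), q* = 0 (mod z)
        for cand in range(d):
            if cand % m == (c[i] // z) % m and cand % z == 0:
                qstar[i] = cand
                break
        else:
            return False, None
    if np.any((B @ qstar) % m):
        return False, None
    return True, (B @ qstar) // d

# The worked example: B_1 spans an ideal lattice, B_2 does not.
ok1, _ = is_ideal_lattice(np.array([[2, 0], [0, 1]]))
ok2, _ = is_ideal_lattice(np.array([[1, 0], [0, 2]]))
assert ok1 and not ok2
```

For <math> B_1 </math> the recovered <math> q </math> is the zero vector, corresponding to the polynomial <math> x^2 </math>, and indeed the span of <math> k </math> and <math> x </math> is an ideal of <math> \mathbb{Z}[x]/\langle x^2 \rangle </math>.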
 
==Use in cryptography==
Micciancio<ref name="Mic2007">Micciancio, D. [http://www.springerlink.com/content/g11573q628x12970/fulltext.pdf Generalized compact knapsacks, cyclic lattices, and efficient one-way functions]. In ''Computational Complexity 16(4), 365–411 (2007)''.</ref> introduced the class of structured cyclic lattices, which correspond to ideals in [[polynomial ring]]s <math> \mathbb{Z}[x]/(x^n-1)</math>, and presented the first provably secure one-way function based on the worst-case [[Hardness of approximation|hardness]] of the restriction of ''Poly(n)''-SVP to cyclic lattices. (The problem ''γ''-SVP consists in computing a non-zero vector of a given lattice whose norm is no more than ''γ'' times larger than the norm of a shortest non-zero lattice vector.) At the same time, thanks to its algebraic structure, this one-way function enjoys high efficiency comparable to the [[NTRUEncrypt|NTRU]] scheme (<math> \tilde{O}(n) </math> evaluation time and storage cost). Subsequently, Lyubashevsky and Micciancio<ref name="LyubMic2006"/> and independently Peikert and Rosen<ref name="PeiRos2006">Peikert, C., Rosen, A. [http://www.cc.gatech.edu/~cpeikert/pubs/cyclic-crh.pdf Efficient collision-resistant hashing from worst-case assumptions on cyclic lattices]. In ''Halevi, S., Rabin, T. (eds.) TCC 2006. LNCS, vol. 3876, pp. 145–166. Springer, Heidelberg (2006)''.</ref> showed how to modify Micciancio’s function to construct an efficient and provably secure [[Collision resistance|collision resistant]] [[Cryptographic hash function|hash function]]. For this, they introduced the more general class of ''ideal lattices'', which correspond to [[Ideal (ring theory)|ideals]] in [[polynomial ring]]s <math> \mathbb{Z}[x]/f(x)</math>. The [[collision resistance]] relies on the hardness of the restriction of ''Poly(n)''-SVP to ''ideal lattices'' (called ''Poly(n)''-Ideal-SVP).
The average-case collision-finding problem is a natural computational problem called Ideal-SIS, which has been shown to be as hard as the worst-case instances of Ideal-SVP. Provably secure efficient signature schemes from ''ideal lattices'' have also been proposed,<ref name="Lyubattacks2008"/><ref name="MicLyubAsympt2008">Vadim Lyubashevsky and Daniele Micciancio.  [http://www.iacr.org/archive/tcc2008/49480032/49480032.pdf Asymptotically efficient lattice-based digital signatures]. In ''Proceedings of the 5th conference on Theory of cryptography'', 2008.</ref> but constructing efficient provably secure [[Public-key cryptography|public key encryption]]  from ''ideal lattices'' was an interesting [[open problem]].
 
===Efficient collision resistant hash functions===
The main usefulness of the ''ideal lattices'' in [[cryptography]] stems from the fact that very efficient and practical [[Collision resistance|collision resistant]] [[Cryptographic hash function|hash functions]] can be built based on the hardness of finding an approximate [[Lattice problem|shortest vector]] in such lattices.<ref name="Lyubattacks2008"/>
Peikert and Rosen,<ref name="PeiRos2006"/> and independently Lyubashevsky and Micciancio, constructed [[Collision resistance|collision resistant]] [[Cryptographic hash function|hash functions]] based on ''ideal lattices'' (a generalization of cyclic lattices) and provided a fast and practical implementation.<ref name="LyubPeiReg2010"/> These results paved the way for other efficient cryptographic constructions, including identification schemes and signatures.
 
Lyubashevsky and Micciancio<ref name="LyubMic2006"/> gave constructions of efficient [[Collision resistance|collision resistant]] [[Cryptographic hash function|hash functions]] that can be proven secure based on the worst-case hardness of the [[Lattice problem|shortest vector problem]] for ''ideal lattices''. They defined [[Cryptographic hash function|hash function]] families as follows: given a [[Ring (mathematics)|ring]] <math>R = \mathbb{Z}_p[x]/\langle f \rangle </math>, where <math> f \in \mathbb{Z}_p[x] </math> is a monic, [[irreducible polynomial]] of degree <math> n </math> and <math> p </math> is an integer of order roughly <math> n^2 </math>, generate <math> m </math> random elements <math> a_1, \dots , a_m \in R </math>, where <math> m </math> is a constant. The ordered <math> m </math>-tuple <math> h = (a_1, \ldots, a_m) \in R^m </math> determines the hash function. It maps elements of <math> D^m </math>, where <math> D </math> is a strategically chosen subset of <math> R </math>, to <math> R </math>. For an element <math> b = (b_1, \dots , b_m) \in D^m </math>, the hash is <math> h(b) = \sum_{i=1}^{m} a_i \cdot b_i</math>. Here the size of the key (the [[Cryptographic hash function|hash function]]) is <math> O(mn \log p) = O(n \log n)</math>, and the operation <math> a_i \cdot b_i </math> can be done in time <math> O(n \log n \log \log n) </math> by using the [[Fast Fourier transform|Fast Fourier Transform (FFT)]], for appropriate choice of the polynomial <math> f </math>. Since <math> m </math> is a constant, hashing requires time <math> O(n \log n \log \log n)</math>. They proved that the [[Cryptographic hash function|hash function]] family is [[Collision resistance|collision resistant]] by showing that if there is a [[Polynomial time|polynomial-time algorithm]] that succeeds with non-negligible probability in finding <math> b \neq b' \in D^m </math> such that <math> h(b) = h(b') </math> for a randomly chosen [[Cryptographic hash function|hash function]] <math> h \in R^m </math>, then a certain problem called the “[[Lattice problem|shortest vector problem]]” is solvable in [[polynomial time]] for every [[Ideal (ring theory)|ideal]] of the [[Ring (mathematics)|ring]] <math> \mathbb{Z}[x]/\langle f \rangle </math>.
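A toy sketch of this hash family follows, with made-up illustrative parameters (real instances need much larger <math> p </math> and <math> n </math>) and <math> f = x^n + 1 </math>, so that ring multiplication is a negacyclic convolution:

```python
import numpy as np

def ring_mul(a, b, p):
    """Multiply in Z_p[x]/(x^n + 1): negacyclic convolution (assumes f = x^n + 1)."""
    n = len(a)
    res = np.zeros(n, dtype=np.int64)
    for i in range(n):
        for j in range(n):
            if i + j < n:
                res[i + j] += a[i] * b[j]
            else:
                res[i + j - n] -= a[i] * b[j]   # x^n = -1
    return res % p

def lm_hash(key, b, p):
    """h(b) = sum_i a_i * b_i in R = Z_p[x]/(x^n + 1)."""
    acc = np.zeros(len(key[0]), dtype=np.int64)
    for a_i, b_i in zip(key, b):
        acc = (acc + ring_mul(a_i, b_i, p)) % p
    return acc

rng = np.random.default_rng(0)
n, m, p = 4, 3, 17                                  # toy sizes only
key = [rng.integers(0, p, n) for _ in range(m)]     # the hash function h
b1 = [rng.integers(0, 2, n) for _ in range(m)]      # inputs with 0/1 coefficients
b2 = [rng.integers(0, 2, n) for _ in range(m)]

# h is linear in its input; the security proof exploits this when turning
# a collision h(b) = h(b') into the short ring element b - b'
lhs = (lm_hash(key, b1, p) + lm_hash(key, b2, p)) % p
rhs = lm_hash(key, [x + y for x, y in zip(b1, b2)], p)
assert np.array_equal(lhs, rhs)
```

The assertion checks the linearity of <math> h </math>, which is the structural property behind the worst-case reduction; it does not, of course, demonstrate collision resistance at these toy sizes.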
 
Based on the work of Lyubashevsky and Micciancio in 2006, Micciancio and Regev<ref name="MicRegLBC2009">Daniele Micciancio, Oded Regev [http://www.cs.tau.ac.il/~odedr/papers/pqc.pdf Lattice-based Cryptography]. In ''POST-QUANTUM CRYPTOGRAPHY'', 2009.</ref> defined the following algorithm of [[Cryptographic hash function|hash functions]] based on ''ideal lattices'':
 
* '''Parameters:''' Integers <math> q, n, m, d </math> with <math> n \mid m </math>, and vector '''f''' <math> \in \mathbb{Z}^n </math>.
* '''Key:''' <math> m/n </math> vectors <math> a_1, \dots , a_{m/n} </math> chosen independently and uniformly at random in <math> \mathbb{Z}_q^n </math>.
* '''Hash function:''' <math> f_A : \lbrace 0, \dots , d-1 \rbrace ^m \longrightarrow \mathbb{Z}_q^n </math> given by <math> f_A(y)= [F \ast a_1 | \dots | F \ast a_{m/n}]y \bmod q </math>.
 
Here <math> n,m,q,d </math> are parameters, '''f''' is a vector in <math> \mathbb{Z}^n </math> and <math> A </math> is a block-matrix with structured blocks <math> A^{(i)} = F \ast a^{(i)}</math>.
 
Finding short vectors in <math> \Lambda_q^{\perp} ([F \ast a_1 | \dots | F \ast a_{m/n}])</math> on the average (even with just inverse polynomial
probability) is as hard as solving various lattice problems (such as approximate [[Lattice problem|SVP]] and SIVP) in the worst
case over ''ideal lattices'', provided the vector '''f''' satisfies the following two properties:
* For any two unit vectors '''u''', '''v''', the vector '''[F∗u]v''' has small norm (say, polynomial in <math> n </math>, typically <math> O(\sqrt{n}) </math>).
* The polynomial <math> f(x) = x^n+f_n x^{n-1}+\cdots+f_1 \in \mathbb{Z}[x] </math> is [[Irreducible polynomial|irreducible]] over the integers, i.e., it does not factor into the product of integer polynomials of smaller degree.
 
The first property is satisfied by the vector '''f''' = <math> (-1,0, \dots ,0) </math> corresponding to [[Circulant matrix|circulant matrices]],
because all the coordinates of '''[F∗u]v''' are bounded by 1, and hence <math> \lVert [\textbf{F} \ast \textbf{u}]\textbf{v} \rVert \leq{\sqrt{n}} </math>. However, the polynomial <math> x^n-1 </math> corresponding to '''f''' = <math> (-1,0, \dots ,0) </math> is not [[Irreducible polynomial|irreducible]] because it factors into <math> (x-1)(x^{n-1}+x^{n-2}+\cdots+ x + 1)</math>, and this is why collisions can be efficiently found. So, '''f''' = <math> (-1,0, \dots ,0) </math> is not a good choice to get [[Collision resistance|collision resistant]] [[Cryptographic hash function|hash functions]], but many other choices are possible. For example, some choices of '''f''' for which both properties are satisfied (and therefore, result in [[Collision resistance|collision resistant]] [[Cryptographic hash function|hash functions]] with worst-case security guarantees) are
* '''f''' = <math> (1, \dots ,1) \in \mathbb{Z}^n </math> where <math> n + 1 </math> is prime, and
* '''f''' = <math> (1,0, \dots ,0) \in \mathbb{Z}^n </math> for <math> n </math> equal to a power of 2.
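The weakness of the circulant choice '''f''' = <math> (-1,0,\dots,0) </math> can be seen numerically: multiplication is then a cyclic convolution, and constant inputs are squashed into a one-dimensional subspace of the output. A small sketch with made-up toy parameters (<math> n = 8 </math>, <math> q = 257 </math>):

```python
import numpy as np

def circ_mul(a, y, q):
    """Cyclic convolution: multiplication in Z_q[x]/(x^n - 1), i.e. f = (-1,0,...,0)."""
    n = len(a)
    res = np.zeros(n, dtype=np.int64)
    for i in range(n):
        for j in range(n):
            res[(i + j) % n] += a[i] * y[j]
    return res % q

rng = np.random.default_rng(1)
n, q = 8, 257
a = rng.integers(0, q, n)
ones = np.ones(n, dtype=np.int64)
out = circ_mul(a, ones, q)

# Every coordinate equals sum(a) mod q: the all-ones input (a multiple of the
# factor 1 + x + ... + x^{n-1} of x^n - 1) collapses the output to a
# 1-dimensional subspace, which is why collisions are easy to construct.
assert np.all(out == out[0])
assert out[0] == a.sum() % q
```

Restricting inputs to such constant vectors reduces the hash to a single modular sum, so collisions follow by a birthday argument over that one dimension.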
 
===Digital signatures===
[[Digital signature]] schemes are among the most important cryptographic primitives. They can be obtained from one-way functions based on the worst-case [[Hardness of approximation|hardness]] of lattice problems, but such generic constructions are impractical. The most efficient scheme to date was provided by Lyubashevsky and Micciancio.<ref name="MicRegLBC2009"/>
 
Their direct construction of [[digital signature]]s is based on the complexity of approximating the shortest vector in ideal (e.g., cyclic) lattices.<ref name="MicLyubAsympt2008"/> The scheme of Lyubashevsky and Micciancio<ref name="MicLyubAsympt2008"/> has worst-case security guarantees based on ideal lattices, and it is the most asymptotically efficient construction known to date, yielding signature generation and verification algorithms that run in almost [[linear time]].<ref name="MicRegLBC2009"/>
 
One of the main open problems that was raised by their work  is constructing a one-time signature with similar efficiency, but based on a weaker [[Hardness of approximation|hardness]] assumption. For instance, it would be great to provide a one-time signature with security based on the [[Hardness of approximation|hardness]] of approximating the [[Lattice problem|Shortest Vector Problem (SVP)]]  (in ''ideal lattices'') to within a factor of <math> \tilde{O}(n) </math>.<ref name="MicLyubAsympt2008"/>
 
Their construction is based on a standard transformation from one-time signatures (i.e. signatures that allow to securely sign a single message) to general signature schemes, together with a novel construction of a lattice based one-time signature whose security is ultimately based on the worst-case [[Hardness of approximation|hardness]] of approximating the [[Lattice problem|shortest vector]] in all lattices corresponding to [[Ideal (ring theory)|ideals]] in the [[Ring (mathematics)|ring]] <math> \mathbb{Z}[x]/\langle f \rangle </math> for any [[irreducible polynomial]] <math> f </math>.
 
'''Key-Generation Algorithm:'''
''Input'': <math> 1^n</math>, [[irreducible polynomial]] <math> f \in \mathbb{Z}[x] </math> of degree <math> n</math>.
# Set <math> p \longleftarrow (\phi n)^3 </math>, <math> m \longleftarrow \lceil \log n \rceil </math>, <math> R \longleftarrow \mathbb{Z}_p[x]/\langle f \rangle </math>
# For all positive <math> i </math>, let the sets <math> DK_i </math>  and <math> DL_i </math>  be defined as:
:<math> DK_i = \lbrace \hat{y} \in R^m : \lVert \hat{y} \rVert_\infty \leq 5ip^{1/m} \rbrace </math>
:<math> DL_i = \lbrace \hat{y} \in R^m : \lVert \hat{y} \rVert_\infty \leq 5in \phi p^{1/m} \rbrace </math>
# Choose uniformly random <math> h \in \mathcal{H}_{R,m} </math>
# Pick a uniformly random string <math> r \in \lbrace 0, 1 \rbrace^{\lfloor \log^2n \rfloor} </math>
# '''If''' <math> r = 0^{\lfloor \log^2n \rfloor} </math> '''then'''
# Set <math> j = \lfloor \log^2n \rfloor </math>
# '''else'''
# Set <math> j </math> to the position of the first 1 in the string <math> r </math>
# '''end if'''
# Pick <math> \hat{k} , \hat{l}</math> independently and uniformly at random from <math> DK_j </math>  and <math> DL_j </math>  respectively
# Signing Key: <math> (\hat{k} , \hat{l})</math>. Verification Key: <math> (h,h(\hat{k}) , h(\hat{l})) </math>
 
'''Signing Algorithm:'''
 
''Input:'' Message <math> z \in R </math> such that <math> \lVert z \rVert_\infty \leq 1 </math>; signing key <math> (\hat{k} , \hat{l})</math>
 
''Output:'' <math> \hat{s} \longleftarrow \hat{k}z + \hat{l} </math>
 
'''Verification Algorithm:'''
 
''Input:'' Message <math> z </math>; signature <math> \hat{s} </math>; verification key <math> (h,h(\hat{k}) , h(\hat{l})) </math>
 
''Output:'' “ACCEPT”, if <math> \lVert \hat{s} \rVert_\infty \leq 10 \phi p^{1/m}n \log^2n </math> and <math> h(\hat{s}) = h(\hat{k})z + h(\hat{l}) </math>
 
“REJECT”, otherwise.
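A toy numerical sketch of the signing equation follows. All parameters here are made up and far below the bounds above; the point is only that verification can use the linearity of the ideal-lattice hash <math> h </math>, so it needs just <math> h(\hat{k}) </math> and <math> h(\hat{l}) </math> from the verification key, never <math> \hat{k}, \hat{l} </math> themselves:

```python
import numpy as np

n, m, q = 4, 3, 97              # toy parameters, far smaller than the scheme requires

def mul(a, b):
    """Multiply in Z_q[x]/(x^n + 1) (negacyclic convolution)."""
    res = np.zeros(n, dtype=np.int64)
    for i in range(n):
        for j in range(n):
            if i + j < n:
                res[i + j] += a[i] * b[j]
            else:
                res[i + j - n] -= a[i] * b[j]
    return res % q

def h(a, vec):
    """Linear hash h(y) = sum_i a_i * y_i over R_q; linearity enables public verification."""
    acc = np.zeros(n, dtype=np.int64)
    for a_i, y_i in zip(a, vec):
        acc = (acc + mul(a_i, y_i)) % q
    return acc

rng = np.random.default_rng(7)
a = [rng.integers(0, q, n) for _ in range(m)]     # public hash key
k = [rng.integers(0, 3, n) for _ in range(m)]     # signing key part (small coefficients)
l = [rng.integers(0, 3, n) for _ in range(m)]
vk = (h(a, k), h(a, l))                           # verification key: (h(k), h(l))

z = rng.integers(0, 2, n)                         # message with small norm
s = [(mul(k_i, z) + l_i) % q for k_i, l_i in zip(k, l)]   # signature s = k z + l

# public check: h(s) = h(k) z + h(l), computed from the verification key only
assert np.array_equal(h(a, s), (mul(vk[0], z) + vk[1]) % q)
```

The identity holds because <math> h(\hat{k}z + \hat{l}) = h(\hat{k})z + h(\hat{l}) </math> in the commutative ring <math> R_q </math>; a real instantiation would additionally enforce the norm bound on <math> \hat{s} </math>.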
 
===The SWIFFT hash function===
The [[Cryptographic hash function|hash function]] is quite efficient and can be computed asymptotically in <math> \tilde{O}(m) </math> time using the [[Fast Fourier transform|Fast Fourier Transform (FFT)]]  over the [[complex number]]s. However, in practice, this carries a substantial overhead. The [[SWIFFT]] family of [[Cryptographic hash function|hash functions]] defined by Micciancio and Regev<ref name="MicRegLBC2009"/> is essentially a highly optimized variant of the [[Cryptographic hash function|hash function]] above using the [[Fast Fourier transform|(FFT)]] in <math> \mathbb{Z}_q</math>. The vector '''f''' is set to <math> (1, 0,\dots , 0) \in \mathbb{Z}^n </math> for <math> n </math> equal to a power of 2, so that the corresponding polynomial <math> x^n + 1 </math> is [[Irreducible polynomial|irreducible]].
Let <math> q </math> be a [[prime number]] such that <math>2n</math> divides <math> q-1 </math>, and let <math> \textbf{W} \in \mathbb{Z}^{n \times n}_{q}</math> be an [[invertible matrix]] over <math> \mathbb{Z}_q </math> to be chosen later. The [[SWIFFT]] [[Cryptographic hash function|hash function]] maps a key <math>\tilde{a}^{(1)} , \dots , \tilde{a}^{(m/n)}</math> consisting of <math> m/n </math> vectors chosen uniformly from <math> \mathbb{Z}^{n}_{q} </math> and an input <math> y \in \lbrace 0, \dots , d-1 \rbrace^m </math> to <math> \textbf{W} \cdot f_A(y) \bmod q </math>, where <math> \textbf{A} = [ \textbf{F} \ast \alpha^{(1)}, \ldots, \textbf{F} \ast \alpha^{(m/n)} ] </math> is as before and <math> \alpha^{(i)} = \textbf{W}^{-1} \tilde{a}^{(i)} \bmod q </math>.
Multiplication by the [[invertible matrix]] <math> \textbf{W}^{-1} </math> maps a uniformly chosen <math> \tilde{a} \in \mathbb{Z}^{n}_{q} </math> to a uniformly chosen <math> \alpha \in \mathbb{Z}^{n}_{q} </math>. Moreover, <math> \textbf{W} \cdot f_A(y) = \textbf{W} \cdot f_A(y') \pmod q </math> if and only if <math> f_A(y) = f_A(y') \pmod q </math>.
Together, these two facts establish that finding collisions in [[SWIFFT]] is equivalent to finding [[Collision (computer science)|collisions]] in the underlying ''ideal lattice'' function <math> f_A </math>, and the claimed [[collision resistance]] property of [[SWIFFT]] is supported by the connection to worst case [[lattice problem]]s on ''ideal lattices''.
 
The algorithm of the SWIFFT hash function is:
* '''Parameters:''' Integers <math> n, m, q, d </math> such that <math> n </math> is a power of 2, <math> q </math> is prime, <math> 2n \mid (q-1)</math> and <math> n \mid m </math>.
* '''Key:''' <math> m/n </math> vectors <math> \tilde{a}_1, \dots , \tilde{a}_{m/n} </math> chosen independently and uniformly at random in <math> \mathbb{Z}_q^n </math>.
* '''Input:''' <math> m/n </math> vectors <math> y^{(1)}, \dots , y^{(m/n)} \in \lbrace 0, \dots , d-1 \rbrace ^n </math>.
* '''Output:''' the vector <math> \sum_{i=1}^{m/n} \tilde{a}^{(i)} \odot (\textbf{W}y^{(i)}) \in \mathbb{Z}_q^n </math>, where <math> \odot </math> is the component-wise vector product.
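The FFT-domain structure can be checked numerically. In this sketch (toy parameters chosen only for illustration) we take <math> n = 4 </math>, <math> q = 17 </math> and <math> \omega = 2 </math>, an element of order <math> 2n = 8 </math> modulo 17, and set <math> W_{i,j} = \omega^{(2i+1)j} </math>; this <math> W </math> diagonalizes multiplication modulo <math> x^n + 1 </math>, so the component-wise products above agree with applying <math> W </math> to the ideal-lattice hash <math> f_A </math>:

```python
import numpy as np

n, q, omega = 4, 17, 2          # toy parameters: 2n = 8 divides q - 1 = 16,
                                # and omega = 2 has multiplicative order 8 mod 17
W = np.array([[pow(omega, (2 * i + 1) * j, q) for j in range(n)]
              for i in range(n)], dtype=np.int64)

def negacyclic(a, y):
    """Multiplication in Z_q[x]/(x^n + 1)."""
    res = np.zeros(n, dtype=np.int64)
    for i in range(n):
        for j in range(n):
            if i + j < n:
                res[i + j] += a[i] * y[j]
            else:
                res[i + j - n] -= a[i] * y[j]
    return res % q

rng = np.random.default_rng(3)
blocks = 2                                               # m/n
key = [rng.integers(0, q, n) for _ in range(blocks)]
ys = [rng.integers(0, 2, n) for _ in range(blocks)]      # d = 2 input digits

# SWIFFT-style evaluation: component-wise products in the "FFT domain"
atilde = [W @ a % q for a in key]
out_fast = sum(at * (W @ y % q) for at, y in zip(atilde, ys)) % q

# reference: W applied to the ideal-lattice hash f_A(y) = sum_i a_i * y_i
out_ref = W @ (sum(negacyclic(a, y) for a, y in zip(key, ys)) % q) % q

assert np.array_equal(out_fast, out_ref)
```

The agreement holds because the rows of <math> W </math> evaluate a polynomial at the roots of <math> x^n + 1 </math> in <math> \mathbb{Z}_q </math>, turning negacyclic convolution into component-wise multiplication; the production SWIFFT parameters (<math> n = 64 </math>, <math> q = 257 </math>) exploit the same identity at scale.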
 
===Learning with errors (LWE)===
 
====Ring-LWE====
The [[learning with errors|learning with errors (LWE)]] problem has been shown to be as hard as worst-case lattice problems and has served as the foundation for many cryptographic applications. However, these applications are inefficient because of an inherent quadratic overhead in the use of [[Learning with errors|LWE]]. To get truly efficient [[Learning with errors|LWE]] applications, Lyubashevsky, Peikert and Regev<ref name="LyubPeiReg2010"/> defined an appropriate version of the [[Learning with errors|LWE]] problem over a wide class of rings and proved its hardness under worst-case assumptions on ideal lattices in these rings. They called this [[Learning with errors|LWE]] version ring-LWE.
 
Let <math> f(x)= x^n+1 \in \mathbb{Z}[x] </math>, where the security parameter <math> n </math> is a power of 2, making <math> f(x) </math> irreducible over the rationals. (This particular <math> f(x) </math> comes from the family of [[cyclotomic polynomial]]s, which play a special role in this work).
 
Let <math> R= \mathbb{Z}[x]/\langle f(x) \rangle </math> be the ring of integer polynomials modulo <math> f(x) </math>. Elements of <math> R </math> (i.e., residues modulo <math> f(x) </math>) are typically represented by integer polynomials of degree less than <math> n </math>. Let <math> q \equiv 1 \bmod 2n </math> be a sufficiently large public prime modulus (bounded by a polynomial in <math> n </math>), and let <math> R_q = R/\langle q \rangle = \mathbb{Z}_q[x]/\langle f(x) \rangle </math> be the ring of integer polynomials modulo both <math> f(x) </math> and <math> q </math>. Elements of <math> R_q </math> may be represented by polynomials of degree less than <math> n </math> whose coefficients are from <math> \lbrace 0 , \dots , q-1 \rbrace </math>.
 
In the above-described ring, the R-LWE problem may be described as follows.
Let <math> s = s(x) \in R_q </math> be a uniformly random ring element, which is kept secret. Analogously to standard LWE, the goal of the attacker is to distinguish arbitrarily many (independent) ‘random noisy ring equations’ from truly uniform ones. More specifically, the noisy equations are of the form <math> (a, b \approx a \cdot s) \in R_q \times R_q </math>, where <math> a </math> is uniformly random and the product <math> a \cdot s </math> is perturbed by some ‘small’ random error term, chosen from a certain distribution over <math> R </math>.
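A minimal numerical sketch of such samples, with toy parameters and a made-up small uniform error in place of the Gaussian-like distribution of the actual definition:

```python
import numpy as np

n, q = 8, 12289          # toy dimension; q = 3 * 2**12 + 1, so q = 1 (mod 2n)

def mul(a, b):
    """Multiply in R_q = Z_q[x]/(x^n + 1) (negacyclic convolution)."""
    res = np.zeros(n, dtype=np.int64)
    for i in range(n):
        for j in range(n):
            if i + j < n:
                res[i + j] += a[i] * b[j]
            else:
                res[i + j - n] -= a[i] * b[j]
    return res % q

rng = np.random.default_rng(5)
s = rng.integers(0, q, n)           # secret ring element s

def rlwe_sample():
    a = rng.integers(0, q, n)       # uniform public ring element
    e = rng.integers(-2, 3, n)      # 'small' error term (toy distribution)
    return a, (mul(a, s) + e) % q

# whoever knows s can strip off a*s and recover the small error,
# so the pairs (a, b) are far from uniform for that party
a, b = rlwe_sample()
err = (b - mul(a, s)) % q
err = np.where(err > q // 2, err - q, err)   # centered representative
assert np.all(np.abs(err) <= 2)
```

Without <math> s </math>, distinguishing such pairs from uniform ones in <math> R_q \times R_q </math> is exactly the decisional ring-LWE problem.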
 
They gave a quantum reduction from approximate [[Lattice problem|SVP]] (in the worst case) on ideal lattices in <math> R </math> to the search version of ring-LWE, where the goal is to recover the secret <math> s \in R_q </math> (with high probability, for any <math> s </math>) from arbitrarily many noisy products. This result follows the general outline of Regev’s iterative quantum reduction for general lattices,<ref name="Reg2010">
Oded Regev. [http://www.cs.tau.ac.il/~odedr/papers/qcrypto.pdf  On lattices, learning with errors, random linear codes, and cryptography  ]. In ''Journal of the ACM'', 2009.</ref> but ideal lattices introduce several new technical roadblocks in both the ‘algebraic’ and ‘geometric’ components of the reduction. They<ref name="LyubPeiReg2010"/>  used  algebraic number theory, in particular, the canonical embedding of a number field and the [[Chinese remainder theorem|Chinese Remainder Theorem]] to overcome these obstacles.  They got the following theorem:
 
'''Theorem''' Let <math> K </math> be an arbitrary number field of degree <math> n </math>. Let <math> \alpha = \alpha (n) \in (0, 1) </math> be arbitrary, and let the (rational) integer modulus <math> q = q(n) \geq 2 </math> be such that <math> \alpha \cdot q \geq \omega(\sqrt{\log n}) </math>. There is a probabilistic polynomial-time quantum reduction from <math> K </math>-<math> DGS_\gamma </math> to <math> \mathcal{O}_K </math>-<math> LWE_{q, \Psi \leq \alpha} </math>, where <math> \gamma = \eta_\epsilon(I) \cdot \omega(\sqrt{\log n})/\alpha </math>.
 
====Ideal-LWE====
Stehle, Steinfeld, Tanaka and Xagawa<ref name="stehle2009">
Damien Stehlé, Ron Steinfeld, Keisuke Tanaka and Keita Xagawa.  [http://eprint.iacr.org/2009/285.pdf Efficient public key encryption based on ideal lattices]. In ''Lecture Notes in Computer Science'', 2009.</ref> defined a structured variant of the LWE problem (Ideal-LWE) to describe an efficient public key encryption scheme based on the worst-case hardness of the approximate [[Lattice problem|SVP]] in ideal lattices. This is the first CPA-secure public key encryption scheme whose security relies on the hardness of the worst-case instances of <math> \tilde{O}(n^2) </math>-Ideal-SVP against subexponential quantum attacks. It achieves asymptotically optimal efficiency: the public/private key length is <math> \tilde{O}(n) </math> bits and the amortized encryption/decryption cost is <math> \tilde{O}(1) </math> bit operations per message bit (encrypting <math> \tilde{\Omega}(n) </math> bits at once, at a <math> \tilde{O}(n) </math> cost). The security assumption here is that <math> \tilde{O}(n^2) </math>-Ideal-SVP cannot be solved by any subexponential time quantum algorithm. It is noteworthy that this is stronger than standard [[Public-key cryptography|public key cryptography]] security assumptions. On the other hand, contrary to most [[Public-key cryptography|public key cryptography]], [[lattice-based cryptography]] allows security against subexponential quantum attacks.
 
Most of the cryptosystems based on general lattices rely on the average-case hardness of the [[Learning with errors|Learning with errors (LWE)]] problem.  Their scheme is based on a structured variant of LWE, which they call Ideal-LWE. They needed to introduce some techniques to circumvent two main difficulties that arise from the restriction to ideal lattices. Firstly, the previous cryptosystems based on unstructured lattices all make use of Regev’s worst-case to average-case classical reduction from the Bounded Distance Decoding problem (BDD) to [[Learning with errors|LWE]] (this is the classical step in the quantum reduction from [[Lattice problem|SVP]] to [[Learning with errors|LWE]]). This reduction exploits the lack of structure of the considered lattices, and does not seem to carry over to the structured lattices involved in Ideal-LWE. In particular, the probabilistic independence of the rows of the LWE matrices makes it possible to consider a single row. Secondly, the other ingredient used in previous cryptosystems, namely Regev’s reduction from the computational variant of [[Learning with errors|LWE]] to its decisional variant, also seems to fail for Ideal-LWE: it relies on the probabilistic independence of the columns of the [[Learning with errors|LWE]] matrices.
 
To overcome these difficulties, they avoided the classical step of the reduction. Instead, they used the quantum step to construct a new quantum average-case reduction from SIS (the average-case collision-finding problem) to [[Learning with errors|LWE]]. It also works from Ideal-SIS to Ideal-LWE. Combined with the reduction from worst-case Ideal-SVP to average-case Ideal-SIS, they obtained a quantum reduction from Ideal-SVP to Ideal-LWE. This shows the hardness of the computational variant of Ideal-LWE. Because they did not obtain the hardness of the decisional variant, they used a generic hardcore function to derive pseudorandom bits for encryption. This is why they needed to assume the exponential hardness of [[Lattice problem|SVP]].
 
===Fully homomorphic encryption===
An encryption scheme <math> \varepsilon </math> is homomorphic for circuits in <math> \mathcal{C}_\varepsilon </math> if <math> \varepsilon </math> is correct for <math> \mathcal{C}_\varepsilon </math> and <math> Decrypt_\varepsilon </math> can be expressed as a circuit <math> D_\varepsilon </math> of size <math> poly( \lambda ) </math>. <math> \varepsilon </math> is fully homomorphic if it is homomorphic for all circuits. A fully [[Homomorphic Encryption|homomorphic encryption]] scheme is one that allows the evaluation of circuits over encrypted data without the ability to decrypt. Gentry<ref>Craig Gentry. [http://portal.acm.org/citation.cfm?id=1536414.1536440 Fully Homomorphic Encryption Using Ideal Lattices]. In ''the 41st ACM Symposium on Theory of Computing (STOC)'', 2009.</ref> proposed a solution to the problem of constructing a fully [[Homomorphic Encryption|homomorphic encryption]] scheme, which was introduced by Rivest, Adleman and Dertouzos<ref>R. Rivest, L. Adleman, and M. Dertouzos. On data banks and privacy homomorphisms. In ''Foundations of Secure Computation'', pp. 169–180, 1978.</ref> shortly after the invention of [[RSA (algorithm)|RSA]] by Rivest, Shamir and Adleman<ref>R. Rivest, A. Shamir, and L. Adleman. A method for obtaining digital signatures and public-key cryptosystems. In ''Communications of the ACM'', 21(2), pp. 120–126, 1978.</ref> in 1978. His scheme was based on ideal lattices.
 
==See also==
*[[Lattice-based cryptography]]
*[[Homomorphic Encryption]]
 
== References ==
<references/>
 
[[Category:Number theory]]
[[Category:Lattice-based cryptography]]

Revision as of 11:47, 26 November 2013

Ideal lattices are a special class of lattices and a generalization of cyclic lattices.[1] Ideal lattices naturally occur in many parts of number theory, but also in other areas. In particular, they have a significant place in cryptography. Micciancio defined a generalization of cyclic lattices as ideal lattices. They can be used in cryptosystems to reduce the number of parameters necessary to describe a lattice by a square root factor, making them more efficient. Ideal lattices are a new concept, but similar lattice classes have been used for a long time. For example, cyclic lattices, a special case of ideal lattices, are used in NTRUEncrypt and NTRUSign.

Introduction

In general terms, ideal lattices are lattices corresponding to ideals in rings of the form <math>\mathbb{Z}[x]/\langle f \rangle</math> for some irreducible polynomial <math>f</math> of degree <math>n</math>.[1] All of the definitions of ideal lattices from prior work are instances of the following general notion: let <math>R</math> be a ring whose additive group is isomorphic to <math>\mathbb{Z}^n</math> (i.e., it is a free <math>\mathbb{Z}</math>-module of rank <math>n</math>), and let <math>\sigma</math> be an additive isomorphism mapping <math>R</math> to some lattice <math>\sigma(R)</math> in an <math>n</math>-dimensional real vector space (e.g., <math>\mathbb{R}^n</math>). The family of ideal lattices for the ring <math>R</math> under the embedding <math>\sigma</math> is the set of all lattices <math>\sigma(I)</math>, where <math>I</math> is an ideal in <math>R</math>.[2]

Definition

Notation

Let <math>f \in \mathbb{Z}[x]</math> be a monic polynomial of degree <math>n</math>, and consider the quotient ring <math>\mathbb{Z}[x]/\langle f \rangle</math>.

Using the standard set of representatives <math>\{ g \bmod f : g \in \mathbb{Z}[x] \}</math>, and the identification of polynomials with their coefficient vectors, the quotient ring <math>\mathbb{Z}[x]/\langle f \rangle</math> is isomorphic (as an additive group) to the integer lattice <math>\mathbb{Z}^n</math>, and any ideal <math>I \subseteq \mathbb{Z}[x]/\langle f \rangle</math> defines a corresponding integer sublattice <math>\mathcal{L}(I) \subseteq \mathbb{Z}^n</math>.

An ideal lattice is an integer lattice <math>\mathcal{L}(B) \subseteq \mathbb{Z}^n</math> such that <math>B = \{ g \bmod f : g \in I \}</math> for some monic polynomial <math>f</math> of degree <math>n</math> and ideal <math>I \subseteq \mathbb{Z}[x]/\langle f \rangle</math>.
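As a concrete illustration of the definition, the sketch below (ordinary Python with hypothetical helper names, no external libraries) builds the coefficient-vector basis of the sublattice corresponding to a principal ideal <math>I = (g)</math> of <math>\mathbb{Z}[x]/\langle f \rangle</math>, using the rotation basis <math>g, xg, \ldots, x^{n-1}g</math> reduced modulo <math>f</math>:

```python
def polymul_mod(a, b, f):
    """Multiply integer polynomials a, b (coefficient lists, low degree first)
    and reduce modulo the monic polynomial f of degree n = len(f) - 1."""
    n = len(f) - 1
    prod = [0] * max(len(a) + len(b) - 1, n)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] += ai * bj
    # reduction uses x^n = -(f[0] + f[1] x + ... + f[n-1] x^(n-1)), f monic
    for k in range(len(prod) - 1, n - 1, -1):
        c = prod[k]
        prod[k] = 0
        for t in range(n):
            prod[k - n + t] -= c * f[t]
    return prod[:n]

def rotation_basis(g, f):
    """Rows are the coefficient vectors of g, x*g, ..., x^(n-1)*g mod f,
    i.e. a basis of the ideal lattice for I = (g) in Z[x]/(f)."""
    n = len(f) - 1
    rows, cur = [], (g + [0] * n)[:n]
    for _ in range(n):
        rows.append(cur)
        cur = polymul_mod(cur, [0, 1], f)   # multiply by x and reduce
    return rows

f = [1, 0, 1]   # f(x) = x^2 + 1, coefficients from constant term upward
g = [2, 1]      # g(x) = 2 + x generates the ideal I = (g)
print(rotation_basis(g, f))   # [[2, 1], [-1, 2]]
```

For <math>f(x) = x^2 + 1</math> and <math>g(x) = 2 + x</math>, the resulting ideal lattice is spanned by the vectors (2, 1) and (−1, 2).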

Related properties

It turns out that the relevant properties of <math>f</math> for the resulting function to be collision resistant are:

  • <math>f</math> should be irreducible;
  • the ring norm <math>\|g\|_f</math> is not much bigger than <math>\|g\|_\infty</math> for any polynomial <math>g</math>, in a quantitative sense.

The first property implies that every ideal of the ring <math>\mathbb{Z}[x]/\langle f \rangle</math> defines a full-rank lattice in <math>\mathbb{Z}^n</math> and plays a fundamental role in the proofs.

Lemma: Every ideal <math>I</math> of <math>\mathbb{Z}[x]/\langle f \rangle</math>, where <math>f</math> is a monic, irreducible integer polynomial of degree <math>n</math>, is isomorphic to a full-rank lattice in <math>\mathbb{Z}^n</math>.

Ding and Lindner[3] gave evidence that distinguishing ideal lattices from general ones can be done in polynomial time and showed that in practice randomly chosen lattices are never ideal. They only considered the case where the lattice has full rank, i.e. the basis consists of <math>n</math> linearly independent vectors. This is not a fundamental restriction, because Lyubashevsky and Micciancio have shown that if a lattice is ideal with respect to an irreducible monic polynomial, then it has full rank, as given in the above lemma.

Algorithm: Identifying ideal lattices with full rank bases

Data: A full-rank basis <math>B \in \mathbb{Z}^{n \times n}</math>
Result: true and <math>q</math>, if <math>B</math> spans an ideal lattice with respect to <math>q</math>, otherwise false.

  1. Transform <math>B</math> into HNF
  2. Calculate <math>A = \operatorname{adj}(B)</math>, <math>d = \det(B)</math>, and <math>z = B_{n,n}</math>
  3. Calculate the product <math>P = AMB \bmod d</math>
  4. if only the last column of <math>P</math> is non-zero then
  5. set <math>c = P_{(\cdot,n)}</math> to equal this column
  6. else return false
  7. if <math>z \mid c_i</math> for <math>i = 1, \ldots, n</math> then
  8. use CRT to find <math>q^* \equiv (c/z) \bmod (d/z)</math> and <math>q^* \equiv 0 \bmod z</math>
  9. else return false
  10. if <math>Bq^* \equiv 0 \bmod (d/z)</math> then
  11. return true, <math>q = Bq^*/d</math>
  12. else return false

where the matrix <math>M</math> is

<math>M = \begin{pmatrix} 0 & 0 \\ I_{n-1} & 0 \end{pmatrix}</math>

i.e., the <math>n \times n</math> matrix with a zero row on top, the identity block <math>I_{n-1}</math> in the lower left, and a zero column on the right.

Using this algorithm, it can be seen that many lattices are not ideal lattices. For example, let <math>n = 2</math> and <math>k \notin \{0, \pm 1\}</math>; then

<math>B_1 = \begin{pmatrix} k & 0 \\ 0 & 1 \end{pmatrix}</math>

is ideal, but

<math>B_2 = \begin{pmatrix} 1 & 0 \\ 0 & k \end{pmatrix}</math>

is not. <math>B_2</math> with <math>k = 2</math> is an example given by Lyubashevsky and Micciancio.[4]

Performing the algorithm on <math>B_2</math> (with <math>k = 2</math>) and referring to the basis as <math>B</math>: the matrix <math>B</math> is already in Hermite Normal Form, so the first step is not needed. The determinant is <math>d = 2</math>, the adjugate matrix is

<math>A = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix}, \qquad M = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix},</math>

and finally, the product <math>P = AMB \bmod d</math> is

<math>P = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}.</math>

At this point the algorithm stops, because all but the last column of <math>P</math> would have to be zero for <math>B</math> to span an ideal lattice.
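The 2×2 computation above can be replayed in code. This is only an illustrative sketch of the core check (steps 2–4) with hypothetical helper names, not the full algorithm; the HNF transformation and the CRT steps are omitted:

```python
def matmul(X, Y):
    """Plain integer matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def ideal_column_check(B):
    """For a 2x2 basis B already in HNF: compute d = det(B), the adjugate A,
    the shift matrix M for n = 2, and P = A*M*B mod d; then test whether
    only the last column of P is non-zero (step 4 of the algorithm)."""
    (a, b), (c, e) = B
    d = a * e - b * c                        # determinant
    A = [[e, -b], [-c, a]]                   # adjugate of B
    M = [[0, 0], [1, 0]]                     # shift matrix for n = 2
    P = [[x % d for x in row] for row in matmul(matmul(A, M), B)]
    only_last_column = all(row[0] == 0 for row in P)
    return P, only_last_column

# B2 = [[1, 0], [0, 2]] (the k = 2 example): P has a non-zero first column,
# so the check rejects B2, matching the computation in the text.
P, ok = ideal_column_check([[1, 0], [0, 2]])
print(P, ok)   # [[0, 0], [1, 0]] False
```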

Use in cryptography

Micciancio[5] introduced the class of structured cyclic lattices, which correspond to ideals in polynomial rings <math>\mathbb{Z}[x]/\langle x^n - 1 \rangle</math>, and presented the first provably secure one-way function based on the worst-case hardness of the restriction of Poly(n)-SVP to cyclic lattices. (The problem <math>\gamma</math>-SVP consists in computing a non-zero vector of a given lattice whose norm is no more than <math>\gamma</math> times larger than the norm of a shortest non-zero lattice vector.) At the same time, thanks to its algebraic structure, this one-way function enjoys high efficiency comparable to the NTRU scheme (<math>\tilde{O}(n)</math> evaluation time and storage cost). Subsequently, Lyubashevsky and Micciancio[4] and, independently, Peikert and Rosen[6] showed how to modify Micciancio’s function to construct an efficient and provably secure collision resistant hash function. For this, they introduced the more general class of ideal lattices, which correspond to ideals in polynomial rings <math>\mathbb{Z}[x]/\langle f(x) \rangle</math>. The collision resistance relies on the hardness of the restriction of Poly(n)-SVP to ideal lattices (called Poly(n)-Ideal-SVP). The average-case collision-finding problem is a natural computational problem called Ideal-SIS, which has been shown to be as hard as the worst-case instances of Ideal-SVP. Provably secure efficient signature schemes from ideal lattices have also been proposed,[1][7] but constructing efficient provably secure public key encryption from ideal lattices was an interesting open problem.

Efficient collision resistant hash functions

The main usefulness of ideal lattices in cryptography stems from the fact that very efficient and practical collision resistant hash functions can be built based on the hardness of finding an approximate shortest vector in such lattices.[1] Collision resistant hash functions based on ideal lattices (a generalization of cyclic lattices) were constructed independently by Peikert and Rosen[6] and by Lyubashevsky and Micciancio, who also provided a fast and practical implementation.[2] These results paved the way for other efficient cryptographic constructions, including identification schemes and signatures.

Lyubashevsky and Micciancio[4] gave constructions of efficient collision resistant hash functions that can be proven secure based on the worst-case hardness of the shortest vector problem for ideal lattices. They defined the hash function family as follows: given a ring <math>R = \mathbb{Z}_p[x]/\langle f \rangle</math>, where <math>f \in \mathbb{Z}_p[x]</math> is a monic, irreducible polynomial of degree <math>n</math> and <math>p</math> is an integer of order roughly <math>n^2</math>, generate <math>m</math> random elements <math>a_1, \ldots, a_m \in R</math>, where <math>m</math> is a constant. The ordered <math>m</math>-tuple <math>h = (a_1, \ldots, a_m) \in R^m</math> determines the hash function. It maps elements in <math>D^m</math>, where <math>D</math> is a strategically chosen subset of <math>R</math>, to <math>R</math>. For an element <math>b = (b_1, \ldots, b_m) \in D^m</math>, the hash is <math>h(b) = \sum_{i=1}^{m} a_i \cdot b_i</math>. Here the size of the key (the hash function) is <math>O(mn \log p) = O(n \log n)</math>, and the operation <math>a_i \cdot b_i</math> can be done in time <math>O(n \log n \log\log n)</math> by using the Fast Fourier Transform (FFT), for appropriate choice of the polynomial <math>f</math>. Since <math>m</math> is a constant, hashing requires time <math>O(n \log n \log\log n)</math>. They proved that the hash function family is collision resistant by showing that if there is a polynomial-time algorithm that succeeds with non-negligible probability in finding <math>b \neq b'</math> in <math>D^m</math> such that <math>h(b) = h(b')</math>, for a randomly chosen hash function <math>h \in R^m</math>, then a certain problem called the “shortest vector problem” is solvable in polynomial time for every ideal of the ring <math>\mathbb{Z}[x]/\langle f \rangle</math>.
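A toy instantiation can make the construction concrete. The parameters below are illustrative only (far too small to be secure), the helper names are hypothetical, and the schoolbook multiplication ignores the FFT speedup mentioned in the text:

```python
import random

def polymul_mod(a, b, f, p):
    """Product of a and b in Z_p[x]/(f); coefficients listed low degree first."""
    n = len(f) - 1
    prod = [0] * (2 * n - 1)
    for i in range(n):
        for j in range(n):
            prod[i + j] = (prod[i + j] + a[i] * b[j]) % p
    # reduce degrees >= n using x^n = -(f[0] + ... + f[n-1] x^(n-1)), f monic
    for k in range(2 * n - 2, n - 1, -1):
        c = prod[k]
        prod[k] = 0
        for t in range(n):
            prod[k - n + t] = (prod[k - n + t] - c * f[t]) % p
    return prod[:n]

def hash_eval(key, b, f, p):
    """h(b) = sum_i a_i * b_i in R = Z_p[x]/(f)."""
    n = len(f) - 1
    out = [0] * n
    for a_i, b_i in zip(key, b):
        t = polymul_mod(a_i, b_i, f, p)
        out = [(x + y) % p for x, y in zip(out, t)]
    return out

n, m, p = 8, 4, 257                  # toy sizes; the text takes p of order ~ n^2
f = [1] + [0] * (n - 1) + [1]        # f(x) = x^n + 1, irreducible for n a power of 2
key = [[random.randrange(p) for _ in range(n)] for _ in range(m)]
msg = [[random.randrange(2) for _ in range(n)] for _ in range(m)]   # D = {0,1}-coefficient polys
digest = hash_eval(key, msg, f, p)
print(digest)
```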

Based on the work of Lyubashevsky and Micciancio in 2006, Micciancio and Regev[8] defined the following family of hash functions based on ideal lattices:

Here <math>n, m, q, d</math> are parameters, <math>f</math> is a vector in <math>\mathbb{Z}^n</math> and <math>A</math> is a block-matrix with structured blocks <math>A^{(i)} = F^{*}a^{(i)}</math>.

Finding short vectors in <math>\Lambda_q^{\perp}([F^{*}a^{(1)} \mid \ldots \mid F^{*}a^{(m/n)}])</math> on the average (even with just inverse polynomial probability) is as hard as solving various lattice problems (such as approximate SVP and SIVP) in the worst case over ideal lattices, provided the vector <math>f</math> satisfies the following two properties:

  • For any two unit vectors <math>u, v</math>, the vector <math>[F^{*}u]v</math> has small (say, polynomial in <math>n</math>, typically <math>O(\sqrt{n})</math>) norm.
  • The polynomial <math>f(x) = x^n + f_n x^{n-1} + \cdots + f_1 \in \mathbb{Z}[x]</math> is irreducible over the integers, i.e., it does not factor into the product of integer polynomials of smaller degree.

The first property is satisfied by the vector <math>f = (-1, 0, \ldots, 0)</math> corresponding to circulant matrices, because all the coordinates of <math>[F^{*}u]v</math> are bounded by 1, and hence <math>\|[F^{*}u]v\| \leq \sqrt{n}</math>. However, the polynomial <math>x^n - 1</math> corresponding to <math>f = (-1, 0, \ldots, 0)</math> is not irreducible because it factors into <math>(x-1)(x^{n-1} + x^{n-2} + \cdots + x + 1)</math>, and this is why collisions can be efficiently found. So, <math>f = (-1, 0, \ldots, 0)</math> is not a good choice to get collision resistant hash functions, but many other choices are possible. For example, some choices of <math>f</math> for which both properties are satisfied (and therefore result in collision resistant hash functions with worst-case security guarantees) are

  • <math>f = (1, \ldots, 1) \in \mathbb{Z}^n</math>, where <math>n + 1</math> is prime, and
  • <math>f = (1, 0, \ldots, 0) \in \mathbb{Z}^n</math>, where <math>n</math> is a power of 2 (corresponding to the polynomial <math>x^n + 1</math>).
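To see why the circulant choice fails, here is a toy demonstration (illustrative parameters, hypothetical variable names) of one standard way collisions arise when the polynomial is x^n − 1: if every block b_i is either 0 or the all-ones polynomial g = 1 + x + ... + x^(n−1), then x·g ≡ g, so a_i·g = a_i(1)·g and the hash collapses to a subset sum of the values a_i(1) mod p, which must collide by the pigeonhole principle:

```python
import random
from itertools import product

random.seed(1)
n, m, p = 8, 12, 31                  # tiny illustrative sizes
# a_i(1) = sum of the coefficients of the random key element a_i, mod p
a_at_1 = [sum(random.randrange(p) for _ in range(n)) % p for _ in range(m)]

seen = {}
for bits in product([0, 1], repeat=m):          # each b_i is 0 or the all-ones g
    s = sum(w * v for w, v in zip(bits, a_at_1)) % p
    if s in seen:                               # two subsets, same hash value
        b1, b2 = seen[s], bits
        break
    seen[s] = bits

# b1 and b2 select different subsets of blocks, yet both hashes equal
# (subset sum) * g mod p, so h(b1) = h(b2): a collision.
print(b1 != b2)   # True
```

Since there are only p possible subset sums but 2^m > p subsets, a repeat is guaranteed within the first p + 1 iterations.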

Digital signatures

Digital signature schemes are among the most important cryptographic primitives. They can be obtained by using the one-way functions based on the worst-case hardness of lattice problems, but such generic constructions are impractical. The most recent efficient scheme was provided by Lyubashevsky and Micciancio.[8]

Their direct construction of digital signatures is based on the complexity of approximating the shortest vector in ideal (e.g., cyclic) lattices.[7] The scheme of Lyubashevsky and Micciancio[7] has worst-case security guarantees based on ideal lattices and it is the most asymptotically efficient construction known to date, yielding signature generation and verification algorithms that run in almost linear time.[8]

One of the main open problems raised by their work is constructing a one-time signature with similar efficiency, but based on a weaker hardness assumption. For instance, it would be desirable to provide a one-time signature with security based on the hardness of approximating the Shortest Vector Problem (SVP) (in ideal lattices) to within a factor of <math>\tilde{O}(n)</math>.[7]

Their construction is based on a standard transformation from one-time signatures (i.e. signatures that allow one to securely sign a single message) to general signature schemes, together with a novel construction of a lattice-based one-time signature whose security is ultimately based on the worst-case hardness of approximating the shortest vector in all lattices corresponding to ideals in the ring <math>\mathbb{Z}[x]/\langle f \rangle</math> for any irreducible polynomial <math>f</math>.

Key-Generation Algorithm: Input: <math>1^n</math>, irreducible polynomial <math>f</math> of degree <math>n</math>.

  1. Set <math>p \gets (\phi n)^3</math>, <math>m \gets \lceil \log n \rceil</math>, <math>R \gets \mathbb{Z}_p[x]/\langle f \rangle</math>
  2. For all positive <math>i</math>, let the sets <math>DK_i</math> and <math>DL_i</math> be defined as:
<math>DK_i = \{ \hat{y} \in R^m : \|\hat{y}\|_\infty \leq 5^i p^{1/m} \}</math>
<math>DL_i = \{ \hat{y} \in R^m : \|\hat{y}\|_\infty \leq 5^i n \phi p^{1/m} \}</math>
  3. Choose uniformly random <math>h \in \mathcal{H}(R, m)</math>
  4. Pick a uniformly random string <math>r \in \{0,1\}^{\log^2 n}</math>
  5. If <math>r = 0^{\log^2 n}</math> then
  6. Set <math>j = \log^2 n</math>
  7. else
  8. Set <math>j</math> to the position of the first 1 in the string <math>r</math>
  9. end if
  10. Pick <math>\hat{k}, \hat{l}</math> independently and uniformly at random from <math>DK_j</math> and <math>DL_j</math> respectively
  11. Signing Key: <math>(\hat{k}, \hat{l})</math>. Verification Key: <math>(h, h(\hat{k}), h(\hat{l}))</math>

Signing Algorithm:

Input: Message <math>z \in R</math> such that <math>\|z\|_\infty \leq 1</math>; signing key <math>(\hat{k}, \hat{l})</math>

Output: <math>\hat{s} \gets \hat{k}z + \hat{l}</math>

Verification Algorithm:

Input: Message <math>z</math>; signature <math>\hat{s}</math>; verification key <math>(h, h(\hat{k}), h(\hat{l}))</math>

Output: “ACCEPT”, if <math>\|\hat{s}\|_\infty \leq 10 \phi p^{1/m} n \log^2 n</math> and <math>h(\hat{s}) = h(\hat{k})z + h(\hat{l})</math>

“REJECT”, otherwise.
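The verification equation works because a hash of the form h(b) = Σ a_i·b_i is R-linear, so h(k̂z + l̂) = h(k̂)z + h(l̂) and the verifier never needs the secret key. The toy sketch below (insecure illustrative parameters, not the paper's settings; the norm check is omitted) verifies this homomorphic identity numerically:

```python
import random

n, m, p = 8, 3, 257                  # toy sizes only
f = [1] + [0] * (n - 1) + [1]        # f(x) = x^n + 1

def pmul(a, b):
    """a * b in Z_p[x]/(f), schoolbook multiplication plus reduction."""
    prod = [0] * (2 * n - 1)
    for i in range(n):
        for j in range(n):
            prod[i + j] = (prod[i + j] + a[i] * b[j]) % p
    for k in range(2 * n - 2, n - 1, -1):
        c = prod[k]
        prod[k] = 0
        for t in range(n):
            prod[k - n + t] = (prod[k - n + t] - c * f[t]) % p
    return prod[:n]

def padd(a, b):
    return [(x + y) % p for x, y in zip(a, b)]

def h(key, vec):
    """R-linear hash h(b) = sum_i a_i * b_i."""
    out = [0] * n
    for a_i, v_i in zip(key, vec):
        out = padd(out, pmul(a_i, v_i))
    return out

random.seed(0)
key = [[random.randrange(p) for _ in range(n)] for _ in range(m)]
k_hat = [[random.randrange(-1, 2) % p for _ in range(n)] for _ in range(m)]  # small secrets
l_hat = [[random.randrange(-1, 2) % p for _ in range(n)] for _ in range(m)]
z = [random.randrange(2) for _ in range(n)]    # message with ||z||_inf <= 1

# sign: s = k*z + l, componentwise in R^m
s_hat = [padd(pmul(k_i, z), l_i) for k_i, l_i in zip(k_hat, l_hat)]
# verify using only the public values h(k), h(l):
lhs = h(key, s_hat)
rhs = padd(pmul(h(key, k_hat), z), h(key, l_hat))
print(lhs == rhs)   # True: the homomorphic check accepts
```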

The SWIFFT hash function

The hash function is quite efficient and can be computed asymptotically in <math>\tilde{O}(m)</math> time using the Fast Fourier Transform (FFT) over the complex numbers. However, in practice, this carries a substantial overhead. The SWIFFT family of hash functions defined by Micciancio and Regev[8] is essentially a highly optimized variant of the hash function above using the FFT in <math>\mathbb{Z}_q</math>. The vector <math>f</math> is set to <math>(1, 0, \ldots, 0) \in \mathbb{Z}^n</math> for <math>n</math> equal to a power of 2, so that the corresponding polynomial <math>x^n + 1</math> is irreducible. Let <math>q</math> be a prime number such that <math>2n</math> divides <math>q - 1</math>, and let <math>W \in \mathbb{Z}_q^{n \times n}</math> be an invertible matrix over <math>\mathbb{Z}_q</math> to be chosen later. The SWIFFT hash function maps a key <math>\tilde{a}^{(1)}, \ldots, \tilde{a}^{(m/n)}</math> consisting of <math>m/n</math> vectors chosen uniformly from <math>\mathbb{Z}_q^n</math> and an input <math>y \in \{0, \ldots, d-1\}^m</math> to <math>W \cdot f_A(y) \bmod q</math>, where <math>A = [F^{*}a^{(1)}, \ldots, F^{*}a^{(m/n)}]</math> is as before and <math>a^{(i)} = W^{-1}\tilde{a}^{(i)} \bmod q</math>. Multiplication by the invertible matrix <math>W^{-1}</math> maps a uniformly chosen <math>\tilde{a} \in \mathbb{Z}_q^n</math> to a uniformly chosen <math>a \in \mathbb{Z}_q^n</math>. Moreover, <math>W f_A(y) = W f_A(y') \pmod q</math> if and only if <math>f_A(y) = f_A(y') \pmod q</math>. Together, these two facts establish that finding collisions in SWIFFT is equivalent to finding collisions in the underlying ideal lattice function <math>f_A</math>, and the claimed collision resistance property of SWIFFT is supported by the connection to worst-case lattice problems on ideal lattices.
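The FFT structure SWIFFT exploits can be checked numerically at toy size. The sketch below (illustrative parameters n = 4, q = 257, not the real SWIFFT parameters) verifies that multiplication modulo x^n + 1 becomes coordinatewise multiplication after evaluating at the odd powers of a primitive 2n-th root of unity in Z_q:

```python
q, n, omega = 257, 4, 4          # 4 has multiplicative order 8 = 2n modulo 257

def negacyclic(a, y):
    """Coefficients of a(x) * y(x) mod (x^n + 1, q)."""
    out = [0] * n
    for i in range(n):
        for j in range(n):
            k, sign = (i + j) % n, -1 if i + j >= n else 1
            out[k] = (out[k] + sign * a[i] * y[j]) % q
    return out

def evaluate(poly, pts):
    """Evaluate a coefficient list at each point of pts, mod q."""
    return [sum(c * pow(x, i, q) for i, c in enumerate(poly)) % q for x in pts]

# the odd powers of omega are exactly the roots of x^4 + 1 in Z_257
pts = [pow(omega, e, q) for e in (1, 3, 5, 7)]
a, y = [3, 1, 4, 1], [2, 7, 1, 8]
conv = negacyclic(a, y)

# evaluating the product at the roots = pointwise product of evaluations
lhs = evaluate(conv, pts)
rhs = [(u * v) % q for u, v in zip(evaluate(a, pts), evaluate(y, pts))]
print(lhs == rhs)   # True
```

This is the diagonalization that lets SWIFFT replace the convolution by n parallel multiplications in Z_q.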

The concrete parameter choices and the full algorithm of the SWIFFT hash function are given by Micciancio and Regev.[8]

Learning with errors (LWE)

Ring-LWE

The Learning with errors (LWE) problem has been shown to be as hard as worst-case lattice problems and has served as the foundation for many cryptographic applications. However, these applications are inefficient because of an inherent quadratic overhead in the use of LWE. To get truly efficient LWE applications, Lyubashevsky, Peikert and Regev[2] defined an appropriate version of the LWE problem in a wide class of rings and proved its hardness under worst-case assumptions on ideal lattices in these rings. They called their LWE version ring-LWE.

Let <math>f(x) = x^n + 1 \in \mathbb{Z}[x]</math>, where the security parameter <math>n</math> is a power of 2, making <math>f(x)</math> irreducible over the rationals. (This particular <math>f(x)</math> comes from the family of cyclotomic polynomials, which play a special role in this work.)

Let <math>R = \mathbb{Z}[x]/\langle f(x) \rangle</math> be the ring of integer polynomials modulo <math>f(x)</math>. Elements of <math>R</math> (i.e., residues modulo <math>f(x)</math>) are typically represented by integer polynomials of degree less than <math>n</math>. Let <math>q \equiv 1 \bmod 2n</math> be a sufficiently large public prime modulus (bounded by a polynomial in <math>n</math>), and let <math>R_q = R/\langle q \rangle = \mathbb{Z}_q[x]/\langle f(x) \rangle</math> be the ring of integer polynomials modulo both <math>f(x)</math> and <math>q</math>. Elements of <math>R_q</math> may be represented by polynomials of degree less than <math>n</math> whose coefficients are from <math>\{0, \ldots, q-1\}</math>.

In the above-described ring, the R-LWE problem may be described as follows. Let <math>s = s(x) \in R_q</math> be a uniformly random ring element, which is kept secret. Analogously to standard LWE, the goal of the attacker is to distinguish arbitrarily many (independent) ‘random noisy ring equations’ from truly uniform ones. More specifically, the noisy equations are of the form <math>(a, b \approx a \cdot s) \in R_q \times R_q</math>, where <math>a</math> is uniformly random and the product <math>a \cdot s</math> is perturbed by some ‘small’ random error term, chosen from a certain distribution over <math>R</math>.
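The noisy ring equations described above are easy to generate at toy scale. The following sketch (illustrative parameters, not a secure instantiation; the error distribution is simplified to coefficients in {−1, 0, 1}) produces ring-LWE samples (a, b ≈ a·s) in Z_q[x]/(x^n + 1):

```python
import random

n, q = 8, 97                     # toy sizes; q = 97 satisfies q = 1 mod 2n = 16

def mul_mod(a, b):
    """a * b in Z_q[x]/(x^n + 1), i.e. negacyclic convolution mod q."""
    out = [0] * n
    for i in range(n):
        for j in range(n):
            k, sign = (i + j) % n, -1 if i + j >= n else 1
            out[k] = (out[k] + sign * a[i] * b[j]) % q
    return out

def rlwe_sample(s, rng):
    """One noisy ring equation (a, b = a*s + e) with a small error e."""
    a = [rng.randrange(q) for _ in range(n)]          # uniform a in R_q
    e = [rng.choice([-1, 0, 1]) for _ in range(n)]    # simplified small error
    b = [(x + y) % q for x, y in zip(mul_mod(a, s), e)]
    return a, b

rng = random.Random(0)
s = [rng.randrange(q) for _ in range(n)]              # secret ring element
samples = [rlwe_sample(s, rng) for _ in range(3)]
for a, b in samples:
    print(a, b)
```

The search version of the problem asks an attacker to recover s from many such pairs; without the error term e, simple linear algebra over R_q would suffice.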

They gave a quantum reduction from approximate SVP (in the worst case) on ideal lattices in <math>R</math> to the search version of ring-LWE, where the goal is to recover the secret <math>s \in R_q</math> (with high probability, for any <math>s</math>) from arbitrarily many noisy products. This result follows the general outline of Regev’s iterative quantum reduction for general lattices,[9] but ideal lattices introduce several new technical roadblocks in both the ‘algebraic’ and ‘geometric’ components of the reduction. They[2] used algebraic number theory, in particular, the canonical embedding of a number field and the Chinese Remainder Theorem, to overcome these obstacles. They obtained the following theorem:

Theorem Let <math>K</math> be an arbitrary number field of degree <math>n</math>. Let <math>\alpha = \alpha(n) \in (0, 1)</math> be arbitrary, and let the (rational) integer modulus <math>q = q(n) \geq 2</math> be such that <math>\alpha \cdot q \geq \omega(\sqrt{\log n})</math>. There is a probabilistic polynomial-time quantum reduction from <math>K</math>-<math>DGS_\gamma</math> to <math>\mathcal{O}_K</math>-<math>LWE_{q, \Psi_{\leq \alpha}}</math>, where <math>\gamma = \eta_\epsilon(I) \cdot \omega(\sqrt{\log n})/\alpha</math>.

Ideal-LWE

Stehlé, Steinfeld, Tanaka and Xagawa[10] defined a structured variant of the LWE problem (Ideal-LWE) to describe an efficient public key encryption scheme based on the worst-case hardness of the approximate SVP in ideal lattices. This is the first CPA-secure public key encryption scheme whose security relies on the hardness of the worst-case instances of <math>\tilde{O}(n^2)</math>-Ideal-SVP against subexponential quantum attacks. It achieves asymptotically optimal efficiency: the public/private key length is <math>\tilde{O}(n)</math> bits and the amortized encryption/decryption cost is <math>\tilde{O}(1)</math> bit operations per message bit (encrypting <math>\tilde{\Omega}(n)</math> bits at once, at a <math>\tilde{O}(n)</math> cost). The security assumption here is that <math>\tilde{O}(n^2)</math>-Ideal-SVP cannot be solved by any subexponential time quantum algorithm. It is noteworthy that this is stronger than standard public key cryptography security assumptions. On the other hand, contrary to most public key cryptography, lattice-based cryptography allows security against subexponential quantum attacks.

Most of the cryptosystems based on general lattices rely on the average-case hardness of Learning with errors (LWE). Their scheme is based on a structured variant of LWE, which they call Ideal-LWE. They needed to introduce some techniques to circumvent two main difficulties that arise from the restriction to ideal lattices. Firstly, the previous cryptosystems based on unstructured lattices all make use of Regev’s worst-case to average-case classical reduction from the Bounded Distance Decoding problem (BDD) to LWE (this is the classical step in the quantum reduction from SVP to LWE). This reduction exploits the lack of structure of the considered lattices, and does not seem to carry over to the structured lattices involved in Ideal-LWE. In particular, the probabilistic independence of the rows of the LWE matrices makes it possible to consider a single row. Secondly, the other ingredient used in previous cryptosystems, namely Regev’s reduction from the computational variant of LWE to its decisional variant, also seems to fail for Ideal-LWE: it relies on the probabilistic independence of the columns of the LWE matrices.

To overcome these difficulties, they avoided the classical step of the reduction. Instead, they used the quantum step to construct a new quantum average-case reduction from SIS (the average-case collision-finding problem) to LWE. It also works from Ideal-SIS to Ideal-LWE. Combined with the reduction from worst-case Ideal-SVP to average-case Ideal-SIS, they obtained a quantum reduction from Ideal-SVP to Ideal-LWE. This shows the hardness of the computational variant of Ideal-LWE. Because they did not obtain the hardness of the decisional variant, they used a generic hardcore function to derive pseudorandom bits for encryption. This is why they needed to assume the exponential hardness of SVP.

Fully homomorphic encryption

An encryption scheme ε is homomorphic for circuits in 𝒞ε if ε is correct for 𝒞ε and Decryptε can be expressed as a circuit Dε of size poly(λ). ε is fully homomorphic if it is homomorphic for all circuits. A fully homomorphic encryption scheme is one that allows the evaluation of circuits over encrypted data without the ability to decrypt. Gentry[11] proposed a solution to the problem of constructing a fully homomorphic encryption scheme, which was introduced by Rivest, Adleman and Dertouzos[12] shortly after the invention of RSA by Rivest, Shamir and Adleman[13] in 1978. His scheme was based on ideal lattices.

See also

  • Lattice-based cryptography
  • Homomorphic Encryption

References

  1. 1.0 1.1 1.2 1.3 Vadim Lyubashevsky. Lattice-Based Identification Schemes Secure Under Active Attacks. In Proceedings of the 11th International Conference on Practice and Theory in Public Key Cryptography, 2008.
  2. 2.0 2.1 2.2 2.3 Vadim Lyubashevsky, Chris Peikert and Oded Regev. On Ideal Lattices and Learning with Errors over Rings. In Eurocrypt 2010, Lecture Notes in Computer Science, 2010.
  3. Jintai Ding and Richard Lindner. Identifying Ideal Lattices. In Cryptology ePrint Archive, Report 2007/322, 2007.
  4. 4.0 4.1 4.2 Lyubashevsky, V., Micciancio, D. Generalized compact knapsacks are collision resistant. In Bugliesi, M., Preneel, B., Sassone, V., Wegener, I. (eds.) ICALP 2006. LNCS, vol. 4052, pp. 144–155. Springer, Heidelberg (2006).
  5. Micciancio, D. Generalized compact knapsacks, cyclic lattices, and efficient one-way functions. In Computational Complexity 16(4), 365–411 (2007).
  6. 6.0 6.1 Peikert, C., Rosen, A. Efficient collision-resistant hashing from worst-case assumptions on cyclic lattices. In Halevi, S., Rabin, T. (eds.) TCC 2006. LNCS, vol. 3876, pp. 145–166. Springer, Heidelberg (2006).
  7. 7.0 7.1 7.2 7.3 Vadim Lyubashevsky and Daniele Micciancio. Asymptotically efficient lattice-based digital signatures. In Proceedings of the 5th conference on Theory of cryptography, 2008.
  8. 8.0 8.1 8.2 8.3 Daniele Micciancio and Oded Regev. Lattice-based Cryptography. In Post-Quantum Cryptography, 2009.
  9. Oded Regev. On lattices, learning with errors, random linear codes, and cryptography . In Journal of the ACM, 2009.
  10. Damien Stehlé, Ron Steinfeld, Keisuke Tanaka and Keita Xagawa. Efficient public key encryption based on ideal lattices. In Lecture Notes in Computer Science, 2009.
  11. Craig Gentry. Fully Homomorphic Encryption Using Ideal Lattices. In the 41st ACM Symposium on Theory of Computing (STOC), 2009.
  12. R. Rivest, L. Adleman, and M. Dertouzos. On data banks and privacy homomorphisms. In Foundations of Secure Computation, pp. 169–180, 1978.
  13. R. Rivest, A. Shamir, and L. Adleman. A method for obtaining digital signatures and public-key cryptosystems. In Communications of the ACM, 21(2), pp. 120–126, 1978.