Workshop on Mathematics of Information-Theoretic Cryptography
(19 - 30 Sep 2016)

Jointly organized with Nanyang Technological University

~ Abstracts ~

 

Private simultaneous messages, zero-information Arthur-Merlin protocols and conditional disclosure of secrets
Benny Applebaum, Tel-Aviv University, Israel


We review several basic notions of information-theoretic secure multiparty computation, and present recent results about these models including the relationships between different variants, their structural properties, and new constructions. If time permits we also discuss the relevance of these results to the computational setting.


 

Towers and codes
Peter Beelen, Technical University of Denmark, Denmark


A function field with finite constant field is an alternative way to describe a projective, nonsingular algebraic curve defined over a finite field. Such curves are used in Goppa's construction of error-correcting codes, now called algebraic-geometry codes (AG codes). These codes have a rich structure, making them suitable for applications in, for example, secret sharing, multiparty computation, and locally recoverable codes.

When considering a tower of function fields, or equivalently a family of coverings of projective, nonsingular algebraic curves, the resulting family of AG codes can be better than families obtained from any other known construction. More precisely, these families can beat the asymptotic Gilbert-Varshamov bound. However, the tower needs to be chosen carefully: it needs to be asymptotically good (or even better: asymptotically optimal). Moreover, for algorithmic purposes, it is important that the towers are as explicit as possible. Explicit optimal towers have been known to exist since the late 90's in the case where the finite field has square cardinality, but for non-square cardinality no comparable explicit towers were known. Recently, however, several explicit near-optimal towers have been discovered for any non-prime cardinality. Simultaneously, new applications of AG codes have been emerging.


 

Distribution design
Amos Beimel, Ben-Gurion University, Israel


Motivated by applications in cryptography, we introduce and study the problem of distribution design. The goal of distribution design is to find a joint distribution on n random variables that satisfies a given set of constraints on the marginal distributions. Each constraint can either require that two sequences of variables are identically distributed or, alternatively, that the two sequences have disjoint supports. We present several positive and negative results on the existence and efficiency of solutions for a given set of constraints. Distribution design can be seen as a strict generalization of several well-studied problems in cryptography. These include secret sharing, garbling schemes, and non-interactive protocols for secure multiparty computation.


 

Secret sharing: a probabilistic perspective
Andrej Bogdanov, The Chinese University of Hong Kong, Hong Kong


A secret sharing scheme can be modelled as a collection of distributions that are "locally" indistinguishable but "globally" distinguishable. This probabilistic perspective is complementary to the algorithmic one which puts emphasis on the sharing and reconstruction procedures. It enables us to (partially) answer the following questions:

1. Is linear algebra necessary for secret sharing? The answer is essentially no: We show that for every pair of constants 0 < s < r <= 1 and sufficiently large n there exists a scheme for n parties in which perfect secrecy holds against any sn parties, reconstruction can be done by any rn parties, and the sharing and reconstruction algorithms are constant-depth circuits of size polynomial in n.

2. How do perfect security and statistical security relate in the context of low-complexity secret sharing schemes? We give a complete answer when the reconstruction function is an OR of all n shares.

3. Can the share size in Shamir's scheme be reduced when the secret is a single bit? We show that Shamir's scheme is in fact optimal in this respect up to a constant additive term for any number of parties n and any threshold value 1 < t < n.

The results are obtained via new connections of secret sharing to approximation theory and game theory.


 

Homomorphic secret sharing and succinct secure computation from DDH
Elette Boyle, Interdisciplinary Center (IDC) Herzliya, Israel


Under the Decisional Diffie-Hellman (DDH) assumption, we present a 2-out-of-2 secret sharing scheme that supports a compact homomorphic evaluation of branching programs on the shares. This yields a number of DDH-based applications, including:
- A secure 2-party computation protocol for evaluating any branching program or NC1 circuit, where the communication complexity is linear in the input and output size.
- A secure 2-party computation protocol for evaluating any leveled boolean circuit of size S with communication complexity O(S/log S).
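The simplest instance of the "homomorphic evaluation on shares" idea above is 2-out-of-2 additive secret sharing, which supports local evaluation of linear functions only; the sketch below is a hypothetical toy (far weaker than the branching-program homomorphism of the talk), and the names `share` and `eval_linear` are ours.

```python
# Toy homomorphic secret sharing: additive 2-out-of-2 shares are
# homomorphic for linear maps, each party computing on its shares alone.
import random

P = 2 ** 31 - 1   # toy prime modulus

def share(x):
    r = random.randrange(P)
    return r, (x - r) % P                 # each share alone is uniform

def eval_linear(coeffs, shares):
    """Each party applies the same linear map to its own share vector."""
    return sum(c * s for c, s in zip(coeffs, shares)) % P

xs = [10, 20, 30]
shares = [share(x) for x in xs]
y0 = eval_linear([1, 2, 3], [s0 for s0, _ in shares])   # party 0, locally
y1 = eval_linear([1, 2, 3], [s1 for _, s1 in shares])   # party 1, locally
assert (y0 + y1) % P == 1*10 + 2*20 + 3*30              # recombines to 140
```

The output shares y0, y1 form an additive sharing of the function value, so the parties never exchange anything during evaluation.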


 

Secret sharing schemes with algebraic properties and their applications to secure computation
Ignacio Cascudo Pueyo, Aalborg University, Denmark


In this tutorial, I will cover the concept of secret sharing and focus on certain additional algebraic properties that one can impose on secret sharing schemes and that are useful for applications to secure multiparty computation. Namely, I will define the concepts of linear and multiplicative secret sharing, explain their relation to error-correcting codes and summarize some results in the area. Finally, I will describe some of the ways in which they are used in secure multiparty computation and, if time permits, review some recent applications to the construction of homomorphic UC commitment schemes.


 

Nearly optimal robust secret sharing
Mahdi Cheraghchi, Imperial College London, UK


We prove that a known approach to improving Shamir's celebrated secret sharing scheme, namely adding an information-theoretic authentication tag to the secret, can make it robust for $n$ parties against any collusion of size $\delta n$, for any constant $\delta \in (0, 1/2)$.

This result holds in the so-called "non-rushing" model in which the $n$ shares are submitted simultaneously for reconstruction. We thus obtain an efficient and robust secret sharing scheme in this model that is essentially optimal in all parameters, including the share size, which is $k(1+o(1)) + O(\kappa)$, where $k$ is the secret length and $\kappa$ is the security parameter. As in Shamir's scheme, any set of more than $\delta n$ honest parties can efficiently recover the secret. Using algebraic geometry codes instead of Reed-Solomon codes, we decrease the share length to a constant (depending only on $\delta$) while the number of shares $n$ can grow independently. In this case, when $n$ is large enough, the scheme satisfies the "threshold" requirement in an approximate sense; i.e., any set of $\delta n(1+\rho)$ honest parties, for arbitrarily small $\rho > 0$, can efficiently reconstruct the secret.
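The information-theoretic authentication tag mentioned above can be illustrated, under simplified assumptions, by the classic one-time MAC $t = a \cdot s + b$ over a prime field. This is only a hypothetical sketch of the ingredient, not the paper's full robust scheme; the names `mac_keygen` and `mac` are ours.

```python
# One-time information-theoretic MAC over GF(P): without the key (a, b),
# any substitution s -> s' is accepted with probability about 1/P.
import random

P = 2 ** 31 - 1   # prime modulus (a Mersenne prime)

def mac_keygen():
    """One-time key (a, b), with a != 0 so additive shifts are always caught."""
    return (random.randrange(1, P), random.randrange(P))

def mac(key, s):
    a, b = key
    return (a * s + b) % P

key = mac_keygen()
s = 424242
t = mac(key, s)
assert mac(key, s) == t        # honest verification succeeds
assert mac(key, s + 1) != t    # tampered secret fails the tag check
```

In the robust setting, the tagged secret is then distributed with Shamir's scheme, and reconstruction rejects candidate secrets whose tag does not verify.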


 

True randomness from minimal assumptions
Kai-Min Chung, Academia Sinica, Taiwan


Highly unpredictable events appear to be abundant in life. However, when modeled rigorously, their existence in nature is far from evident. In fact, the world can be deterministic yet at the same time the predictions of quantum mechanics are consistent with observations. Assuming that randomness does exist but only in a weak form, could highly random events be certifiably demonstrated?

In this talk, we resolve this question positively by constructing the first no-signaling secure device-independent randomness amplification protocol that works for arbitrary weak sources with sufficient entropy. Our result implies that the laws of physics either do not allow randomness or allow it in an almost perfect quality. The assumptions under which we derive our conclusion are minimal and in particular do not rely on any structural or independence conditions assumed in previous works. If in addition quantum mechanics is assumed to be complete, our result implies that an unbounded amount of almost perfect randomness can be experimentally produced from a single source of weak randomness.


 

Randomness in cryptography
Yevgeniy Dodis, New York University, USA


Unlike many other fields in computer science, randomness is essential for cryptography: secrets must be uncertain to the attacker, and many cryptographic algorithms must be randomized (e.g., two stateless encryptions of the same message must look different). Traditionally, one assumes the existence of perfect randomness. However, this assumption is often unrealistic. In this talk I will survey what is known about basing cryptography on various (realistic) sources of randomness. We will ask the following questions:
1) Does cryptography need nearly perfect ("extractable") sources of randomness, or is entropy sufficient?
2) What if the secret key is imperfect but "local" (or public) perfect randomness is available?
As we will see, the answer to the first question is largely negative, while the second question leads to many positive answers, some of which have found applications beyond cryptography.


 

Information theoretic continuously non-malleable codes in the constant split-state model
Nico Marcel Döttling, University of California at Berkeley, USA


We present a continuous non-malleable code for the model where the codeword is split into a constant number of parts that are tampered with independently, and where a so-called self-destruct mechanism ensures that the adversary loses access to tampering after the first failed reconstruction. Our code has information-theoretic security. Prior to our result, only codes with computational security were known for this model, and it was an open problem to construct such a code with information-theoretic security. As a conceptual contribution we also introduce the notion of a one-way non-malleable code; this is the main new concept that allows us to solve the open problem. As technical contributions, we show how to turn, in a black-box way, any one-way continuous non-malleable code into a full-fledged continuous non-malleable code, and we construct the first one-way continuous non-malleable code with information-theoretic security.


 

Short Stickelberger class relations and application to Ideal-SVP
Léo Ducas, Centrum Wiskunde & Informatica, The Netherlands


The hardness of finding short vectors in ideals of cyclotomic number fields (Ideal-SVP) serves as the worst-case hypothesis underlying the security of numerous cryptographic schemes, including key exchange, public-key encryption and fully homomorphic encryption. A series of recent works has shown that, for large approximation factors, Principal Ideal-SVP is not as hard as finding short vectors in general lattices. Namely, there exists a quantum polynomial time algorithm for an approximation factor of exp(Õ(√n)), even in the worst case. Some schemes were broken, but more generally this exposed an unexpected hardness gap between general lattices and some structured ones, and called into question the exact security of various assumptions over structured lattices.

In this work, we generalize the previous result to general ideals. We show an efficient way of finding a close enough principal multiple of any ideal by exploiting the classical theorem that, in our setting, the class group is annihilated by (the Galois module action of) the so-called Stickelberger ideal. Under some plausible number-theoretic hypotheses, we conclude that worst-case Ideal-SVP in this same setup (choice of ring, and approximation factor exp(Õ(√n))) is also solvable in quantum polynomial time.

Although it is not yet clear whether the security of further cryptosystems is directly affected, we contribute novel ideas to the cryptanalysis of schemes based on structured lattices. Moreover, our result deepens the gap between general lattices and structured ones.


 

Formal models for side-channel leakage
Stefan Dziembowski, University of Warsaw, Poland


Physical side-channel attacks that exploit leakage emitted from devices are an important threat to cryptographic implementations. A recent trend in cryptography is to construct cryptographic algorithms that remain secure in a given leakage model. Over the past 15 years a number of such models have been proposed in the literature, starting with the probing model of Ishai et al. [CRYPTO 2003], where the computation is modelled as a Boolean circuit and the adversary can learn the values on a limited number of its wires. Other models studied in the theory community include the bounded-leakage paradigm [Dziembowski, TCC 2006], [Akavia et al., TCC 2009], the only-computation-leaks model [Micali and Reyzin, TCC 2004], the independent leakage model [Dziembowski and Pietrzak, FOCS 2008], the auxiliary input model [Dodis et al., TCC 2010], and many others.

Some of these models have been received with skepticism by practitioners, who often argue that it is much more realistic to model leakage as a noisy function of the secret data. The first model for such noisy leakage was proposed by Chari et al. [CRYPTO '99] and fully formalized by Prouff and Rivain [Eurocrypt 2013]. Somewhat surprisingly, Duc, Dziembowski, and Faust [Eurocrypt 2014] have recently shown that the noisy leakage model of Prouff and Rivain can in fact be reduced to the probing model (i.e., every noisy leakage function can be simulated by a probing function), which in particular greatly simplifies several proofs in the noisy leakage model and can be viewed as closing the gap between theory and practice in this area.

In this talk we will give an overview of the leakage models used in the literature and present the reduction from the Duc et al paper. If time permits we will also talk about the follow-up work of Dziembowski, Faust and Skórski [Eurocrypt 2015].


 

Bounds on the information ratios of secret sharing schemes for close access structures
Oriol Farras, Universitat Rovira i Virgili, Spain


The information ratio of a secret sharing scheme $\Sigma$ measures the size of the largest share of the scheme, and is denoted by $\sigma(\Sigma)$. The optimal information ratio of an access structure $\Gamma$ is the infimum of $\sigma(\Sigma)$ among all schemes $\Sigma$ for $\Gamma$, and is denoted by $\sigma(\Gamma)$. The main result of this work is that for every two access structures $\Gamma$ and $\Gamma'$, $|\sigma(\Gamma)- \sigma(\Gamma')|\leq|\Gamma\cup\Gamma'|-|\Gamma\cap\Gamma'|$. As a consequence of this result, we see that close access structures admit secret sharing schemes with similar information ratio. We show that this property is also true for particular families of secret sharing schemes and models of computation, like the family of linear secret sharing schemes, span programs, Boolean formulas and circuits. In order to understand this property, we also study the limitations of the techniques for finding lower bounds on the information ratio and other complexity measures. We analyze the behavior of these bounds when we add or delete subsets from an access structure.


 

The monogamy of entanglement, and applications to quantum cryptography
Serge Fehr, Centrum Wiskunde & Informatica, The Netherlands


One of the peculiar features of quantum mechanics is entanglement. It is known that entanglement is monogamous in the sense that a quantum system can only be strongly entangled to one other system. In this talk, after introducing the necessary background, I show how to capture and quantify this so-called monogamy of entanglement by means of a simple "game", and I prove that in this game the monogamy completely "cancels out" any advantage of entanglement. The technical core of our result is an operator-norm inequality that enables to bound the norm of the sum of (certain) operators by means of their pair-wise products. Our monogamy-of-entanglement game has various applications to quantum cryptography, which I will briefly talk about.


 

Non-malleable commitments from non-malleable codes
Vipul Goyal, Microsoft Research India, India


A central challenge in the design of secure systems is to defend against man-in-the-middle attacks, where an adversary can arbitrarily tamper with the messages exchanged by two parties over a communication channel. Starting with the early nineties, an important research goal in cryptography has been to build "non malleable" cryptographic protocols that are resilient to such attacks.

A very basic non-malleable primitive which is widely used in cryptography is what is known as non-malleable commitment schemes. In this talk, I will describe a recent result which constructs non-malleable commitments in the minimal number of rounds (and almost minimal complexity assumptions). In some sense, this culminates a two-decade long research quest of getting non-malleable commitments in the minimal number of rounds.


 

MDS codes with complementary duals
Lingfei Jin, Fudan University, China


A Linear Complementary Dual (LCD for short) code is a linear code with complementary dual. LCD codes have been extensively studied in the literature. On the other hand, MDS codes are an important class of linear codes that have found wide applications in both theory and practice. This talk will present a construction of several classes of MDS codes with complementary duals (i.e., LCD MDS codes) through generalized Reed-Solomon codes.
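The LCD property has a concrete computational test: by Massey's criterion, a linear code with generator matrix G is LCD exactly when G·G^T is nonsingular. The toy check below applies this criterion to a small generalized Reed-Solomon generator matrix; the field size, evaluation points and multipliers are illustrative choices of ours, not the talk's construction.

```python
# Massey's LCD criterion for a toy generalized Reed-Solomon code over GF(13):
# the code is LCD iff det(G * G^T) != 0 over the field.
P = 13

def grs_generator(alphas, vs, k):
    """Rows are (v_j * alpha_j^i) for i = 0..k-1: a GRS generator matrix."""
    return [[v * pow(a, i, P) % P for a, v in zip(alphas, vs)]
            for i in range(k)]

def det_mod_p(M):
    """Determinant mod P via Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    n, det = len(M), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] % P), None)
        if piv is None:
            return 0
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            det = -det                      # row swap flips the sign
        det = det * M[c][c] % P
        inv = pow(M[c][c], P - 2, P)        # inverse via Fermat's little theorem
        for r in range(c + 1, n):
            f = M[r][c] * inv % P
            M[r] = [(x - f * y) % P for x, y in zip(M[r], M[c])]
    return det % P

alphas = [1, 2, 3, 4, 5, 6]    # distinct evaluation points
vs = [1, 1, 1, 1, 1, 1]        # column multipliers (all 1: plain Reed-Solomon)
G = grs_generator(alphas, vs, k=3)
GGt = [[sum(a * b for a, b in zip(r1, r2)) % P for r2 in G] for r1 in G]
print("LCD:", det_mod_p(GGt) != 0)   # prints "LCD: True" for these parameters
```

Choosing the multipliers v_j is exactly the degree of freedom the GRS construction exploits to force G·G^T nonsingular.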


 

Technical history of discrete logarithms in small characteristic finite fields
Antoine Joux, University Pierre et Marie Curie, France


Due to its use in cryptographic protocols such as the Diffie-Hellman key exchange, the discrete logarithm problem has attracted a considerable amount of attention in the past 40 years. In this talk, we summarize the key technical ideas and their evolution for the case of discrete logarithms in small characteristic finite fields. This road leads from the original belief that this problem was hard enough for cryptographic purposes to the current state of the art, where the algorithms are so efficient and practical that the problem can no longer be considered for cryptographic use.
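As a concrete reference point for the problem itself (not for the small-characteristic index-calculus methods the talk surveys), here is the generic baby-step giant-step algorithm, which solves discrete logarithms in any group in roughly square-root time; the modulus and exponent below are tiny illustrative choices.

```python
# Baby-step giant-step discrete logarithm in (Z/pZ)^*, O(sqrt(p)) time/space.
from math import isqrt

def dlog(g, h, p):
    """Return some x with g^x = h (mod p), or None if h is outside <g>."""
    m = isqrt(p) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps: g^j -> j
    giant = pow(g, -m, p)                        # g^(-m)  (Python 3.8+)
    gamma = h % p
    for i in range(m):                           # giant steps: h * g^(-i*m)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * giant % p
    return None

p = 1019                  # small prime, purely illustrative
x = 123
h = pow(2, x, p)
e = dlog(2, h, p)
assert e is not None and pow(2, e, p) == h
```

The point of the talk is precisely that, in small-characteristic fields, far better than this generic square-root bound is now achievable.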


 

Almost optimal non-malleable extractors and privacy amplification protocols
Xin Li, Johns Hopkins University, USA


Privacy amplification is a basic problem in cryptography, where two parties communicate over an adversarially controlled channel to convert their shared weak random secret into a nearly uniform secret random string. The primary goal is to use as few interactions as possible while keeping the entropy loss as small as possible. A key ingredient in designing privacy amplification protocols is an object called a non-malleable extractor. Previously, explicit constructions of such extractors required entropy at least log^2(n/\epsilon) for input length n and error \epsilon, which translates into a security parameter of at most \sqrt{k} when the shared secret has entropy k. In this talk I'll describe the first explicit construction of a non-malleable extractor that only requires entropy log^{1+o(1)}(n/\epsilon), which in turn gives optimal privacy amplification protocols for nearly all possible security parameters.


 

Fully secure functional encryption for inner products, from standard assumptions
Benoit Libert, Ecole Normale Supérieure de Lyon, France


Functional encryption is a public-key paradigm where a master secret key can be used to derive sub-keys associated with certain functions F in such a way that the decryption operation reveals F(M), if M is the encrypted message, and nothing else. Recently, Abdalla et al. (PKC'15) gave simple and efficient realizations of the primitive for the computation of linear functions on encrypted data: given an encryption of a vector y over some specified base ring, a secret key for the vector x allows computing the inner product <x, y>. Their technique surprisingly allows for instantiations under standard assumptions, like the hardness of the Decision Diffie-Hellman (DDH) and Learning-with-Errors (LWE) problems. Their constructions, however, are only proved secure against selective adversaries, which have to declare the challenge messages M0 and M1 at the outset of the game.

In this work, we provide constructions that provably achieve security against more realistic adaptive attacks (where the messages M0 and M1 may be chosen adaptively, based on previously collected information) for the same inner-product functionality. Our constructions are obtained from hash proof systems with homomorphic properties over the key space. They are (almost) as efficient as those of Abdalla et al. and rely on the same hardness assumptions. In addition, we obtain a solution based on Paillier's composite residuosity assumption, which was an open problem even in the case of selective adversaries; this Paillier-based solution enables the evaluation of inner products modulo an RSA integer N = pq. We also propose LWE-based schemes that evaluate inner products modulo a prime p, as opposed to the schemes of Abdalla et al., which are restricted to evaluating integer inner products of short integer vectors. We demonstrate that the inner-product functionality is sufficient for several natural applications.


 

Cloud-friendly VSS with a non-interactive dealer
Kirill Morozov, Tokyo Institute of Technology, Japan


Suppose that a dealer secret-shares her data and distributes the shares to cloud servers. At a later point, multi-party computation (MPC) needs to be performed on these data, but the original scheme does not support verifiable secret sharing (VSS), nor does the dealer still have the random coins used in the original sharing. The cloud servers may use MPC to verify the consistency of their shares, but this may be costly. We study a concept called verifiable secret sharing with a non-interactive dealer (VSS-NID), which provides a solution to this problem. In this work, we focus on protocols with perfect security against Q3 active adversaries.

First, we show that for linear secret sharing schemes (LSSS) in which every qualified set can recover the random coins, we can build VSS-NID using the idea of the VSS protocol of Cramer, Damgård and Maurer from Eurocrypt 2000. This yields an efficient instantiation based on Shamir's scheme.

Still, it may be desirable to avoid VSS as a subroutine, and we achieve this goal in a second protocol, which uses a coding representation of the LSSS and shares the parity checks of the shares. However, constructing an efficient verification protocol for this scheme is an open problem.

In addition, we show that Maurer's VSS scheme (SCN 2002) based on replicated sharing can be transformed into VSS-NID.


 

How to share a secret: new perspectives
Moni Naor, Weizmann Institute of Science, Israel


Secret sharing schemes allow a dealer to distribute a secret piece of information among several parties such that only qualified subsets of parties can reconstruct the secret. The collection of qualified subsets is called an access structure. The best known example is the k-threshold access structure, where the qualified subsets are those of size at least k. Secret sharing has proved to be a most fundamental concept in theory and practice.
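The k-threshold access structure above is realized by Shamir's classic scheme: the secret is the constant term of a random degree-(k-1) polynomial, and each party's share is an evaluation of that polynomial. A minimal sketch over a small prime field (the field size and parameters are illustrative choices of ours):

```python
# Toy Shamir k-out-of-n threshold secret sharing over GF(P).
import random

P = 2 ** 13 - 1   # 8191, a Mersenne prime, large enough for this toy

def share(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    # Share i is the evaluation of the random polynomial at x = i.
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for a, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for b, (xj, _) in enumerate(shares):
            if a != b:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(1234, k=3, n=5)
assert reconstruct(shares[:3]) == 1234    # any 3 shares suffice
assert reconstruct(shares[1:4]) == 1234
```

Any k-1 shares, by contrast, are jointly uniform and reveal nothing about the secret, which is exactly the qualified/unqualified split of the threshold access structure.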

In this talk I will describe recent work on secret sharing, in particular concerning the case where the set of parties is not known in advance and could potentially be infinite. Another case is where the access structure is rather complex, such as parties corresponding to edges in a graph with the qualified sets being those containing a Hamiltonian cycle.


 

Lattice-based cryptography
Phong Nguyen, The University of Tokyo, Japan


Lattices are regular arrangements of points in the n-dimensional space. Lattice-based cryptography started in the mid-nineties, but its origins go back to the beginning of public-key cryptography with knapsack cryptosystems. In the past few years, lattice-based cryptography has been attracting significant interest, in part because of its well-known (potential) resistance to quantum computers, but especially because of new and surprising features, such as fully-homomorphic encryption, (noisy) multilinear maps, and lately, (indistinguishability) obfuscation. In this series of lectures, we will survey lattice-based cryptography: we will present the main algorithms for solving hard lattice problems, and the main constructions of lattice-based cryptography, by analogy with RSA and Discrete Log.


 

Adaptively secure garbled circuits
Rafail Ostrovsky, University of California, Los Angeles, USA


A garbling scheme (also known as a randomized encoding) maps a circuit C to C' and maps any input x (for C) to x' such that C'(x')=C(x), yet C' and x' jointly reveal nothing about x except the output C(x). In many settings, the circuit C can first be garbled off-line to C' (and given to a cloud) and later any input x can be efficiently garbled to x', with much lower online complexity of garbling x to x' compared to evaluating the circuit. This can be used to delegate arbitrary computation to the cloud without revealing user's private inputs.

Yao's garbling scheme has small online complexity, but only achieves selective security, where the adversary must choose the input x prior to seeing the garbled circuit. Achieving adaptive security, where the adversary can choose x after seeing the garbled circuit, has remained an open problem since the 1980's.

In this talk, I will show how to modify Yao's scheme in a way that allows us to prove adaptive security, assuming the existence of any one-way function (such as AES) only. Specifically, I will present a scheme where the online complexity is proportional to the width of the circuit and is independent of the circuit's depth. More broadly, I will relate the online complexity of adaptively secure garbling schemes to a certain type of pebble-complexity of the circuit graph.
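The garbling syntax C'(x') = C(x) can be made concrete for a single AND gate rather than a full circuit. The sketch below is a hypothetical toy that uses trial decryption with a redundancy check instead of Yao's optimizations; the hash-based encryption and all names are our own illustrative choices, not the talk's construction.

```python
# Toy garbling of one AND gate: two random keys per wire (one per bit value),
# and an encrypted truth table that maps input keys to the output key.
import os, random, hashlib

KLEN = 16

def H(ka, kb):
    """Derive a one-time pad from the pair of input-wire keys."""
    return hashlib.sha256(ka + kb).digest()[:KLEN + 4]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and():
    # wa[b] / wb[b] / wo[b] is the key encoding bit value b on that wire
    wa, wb, wo = ([os.urandom(KLEN) for _ in range(2)] for _ in range(3))
    table = []
    for p in range(2):
        for q in range(2):
            # encrypt output key (plus a zero tag) under both input keys
            table.append(xor(H(wa[p], wb[q]), wo[p & q] + b"\x00" * 4))
    random.shuffle(table)          # hide which row is which input combo
    return wa, wb, wo, table

def evaluate(ka, kb, table):
    for row in table:
        plain = xor(H(ka, kb), row)
        if plain[KLEN:] == b"\x00" * 4:   # zero tag marks the right row
            return plain[:KLEN]
    raise ValueError("no row decrypted")

wa, wb, wo, table = garble_and()
for p in range(2):
    for q in range(2):
        assert evaluate(wa[p], wb[q], table) == wo[p & q]
```

Given only one key per input wire, the evaluator learns the single output key wo[p AND q] and nothing about the input bits, which is the property the full scheme lifts to whole circuits.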

Joint work with Brett Hemenway, Zahra Jafargholi, Alessandra Scafuro and Daniel Wichs; appeared at CRYPTO 2016.


 

Some results on algebraic curves and planar functions
Ferruh Ozbudak, Middle East Technical University, Turkey


There are close connections between algebraic curves and several problems in cryptography and coding theory. In this talk we present some results related to algebraic curves, cryptography and coding theory. We also hope to explain some of the connections to semifields, plateaued functions and planar functions in detail.


 

Group operation on nodal curves
Enver Ozdemir, Istanbul Technical University, Turkey


In this talk, I am going to show that any element D in the (generalized) Jacobian group of a nodal curve can be uniquely represented by a single polynomial. I then present how to perform the group operation on the Jacobian groups of these special curves. The Jacobian groups of smooth curves, especially elliptic and hyperelliptic curves, have been rigorously investigated due to their use in computational number theory and cryptography. My recent works (polynomial factorization, square root computation, compositeness testing, a parallel symmetric encryption method) show that special singular curves, called nodal curves, might also be potential candidates for applications in computational number theory and cryptography. For example, the square root algorithm does not only give an efficient algorithm to compute square roots in finite fields with nodal curves, but also shows that the other well-known square root algorithms (Tonelli-Shanks, Cipolla, Peralta) can be explained as special cases of this algorithm. In this talk, I present a simple and efficient method to perform the group operation on the Jacobians of nodal curves. The method is basically an extension of the Mumford representation and Cantor's algorithm. For our purpose, a nodal curve N over a finite field F_q with characteristic p ≠ 2 is a curve defined by an equation y^2 = x f(x)^2, where f(x) ∈ F_q[x] is a square-free polynomial. Let d = deg(f(x)). We show that any polynomial h(x) of degree less than d such that x·h(x)^2 is coprime to f(x) uniquely represents an element D in the Jacobian of the curve. An element in the Jacobian of a smooth curve, for example a hyperelliptic curve, is represented by a pair of polynomials (u(x), v(x)) satisfying certain conditions. The situation is the same for higher-degree curves; for example, an element D in the Jacobian of a superelliptic curve S: y^3 = g(x) is represented by a triple of polynomials (s1(x), s2(x), s3(x)) satisfying certain conditions.
Therefore, for a smooth curve, we do not have the liberty to choose a polynomial u(x) and declare it a coordinate of an element of the corresponding Jacobian. The result of this work allows us to treat any monic polynomial h(x) as an element of the Jacobian of a nodal curve. I believe that this might intrigue researchers to work with these groups for further applications.


 

Tension: understanding cryptographically interesting correlations
Vinod Prabhakaran, Tata Institute of Fundamental Research, India


We introduce the notion of a "tension region" to quantitatively study cryptographically interesting correlations between a pair of random variables. We show that it obeys a monotonicity property in secure 2-party protocols, and that it can be used to obtain bounds on the secure computation capacity (e.g., the oblivious-transfer capacity) of various functionalities. We shall also discuss applications of tension to multi-party settings, as well as some multi-party analogues of tension.


 

On the communication complexity of secure computation
Manoj M. Prabhakaran, University of Illinois, Urbana-Champaign, USA


We will discuss some recent results on communication complexity for information-theoretically secure multiparty computation. Our main goal in this investigation is to identify information theoretic tools for establishing lower bounds for amortized communication complexity. We focus on secure 3-party computation to develop a collection of tools that we expect to be applicable more broadly. We shall discuss our results as well as major outstanding problems.


 

Factoring with hints
Francesco Sica, Nazarbayev University, Kazakhstan


We show that, given the knowledge of the factorisations of the $O(N^{1/3+\epsilon})$ integers following $N = pq$, a product of two primes, we can deterministically recover $p$ and $q$ in $O(N^{1/3+\epsilon})$ bit operations. The method uses analytic number theory, specifically the theory of the Riemann zeta function, and exhibits a nontrivial relation between the factorisations of sufficiently many integers close to each other.


 

Information-theoretic techniques in lattice-based cryptography
Ron Steinfeld, Monash University, Australia


We survey several applications of information-theoretic results and techniques, such as the Leftover Hash Lemma, Statistical Distance and Renyi Divergence and their properties, in the design and security analysis of lattice-based cryptographic primitives and protocols, and discuss related open problems.
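As a small worked illustration of the Leftover Hash Lemma mentioned above, the exact (non-sampled) computation below hashes a flat source with 6 bits of min-entropy through a random GF(2)-linear universal hash down to 2 bits, and checks that the resulting statistical distance from uniform stays within the LHL bound (1/2)·sqrt(2^m · 2^-k) = 0.125. The source, hash family and parameters are our own toy assumptions.

```python
# Exact Leftover Hash Lemma demo: average, over all seeds, of the statistical
# distance between the hashed source and the uniform distribution.
import random

NBITS, M = 7, 4        # hash 7-bit inputs down to 2 bits (M = 4 outputs)
random.seed(1)
S = random.sample(range(2 ** NBITS), 64)   # flat source: min-entropy k = 6

def parity(v):
    return bin(v).count("1") & 1

# Universal family: h_{r0,r1}(x) = (r0 . x, r1 . x), inner products over GF(2).
total_sd, n_seeds = 0.0, 0
for r0 in range(2 ** NBITS):
    for r1 in range(2 ** NBITS):
        counts = [0] * M
        for x in S:
            counts[2 * parity(r0 & x) + parity(r1 & x)] += 1
        # statistical distance of this seed's output distribution from uniform
        total_sd += sum(abs(c / len(S) - 1 / M) for c in counts) / 2
        n_seeds += 1
avg_sd = total_sd / n_seeds    # = SD((seed, h_seed(X)), (seed, uniform))

bound = 0.5 * (M * 2.0 ** -6) ** 0.5   # LHL: (1/2) * sqrt(2^m * 2^-k) = 0.125
print(f"average SD = {avg_sd:.4f}, LHL bound = {bound:.4f}")
assert avg_sd <= bound
```

Because the family is exactly 2-universal and the source is flat, the bound provably holds here; in lattice-based constructions the same lemma is what turns a high-entropy secret into a nearly uniform key.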


 

Equivocations, exponents and second-order coding rates under various Rényi information measures
Vincent Yan Fu Tan, National University of Singapore, Singapore


We evaluate the asymptotics of equivocations, their exponents, as well as their second-order coding rates under various Rényi information measures. Specifically, we consider the effect of applying a hash function on a source and we quantify the level of non-uniformity and dependence of the compressed source on another correlated source when the number of copies of the sources is large. Unlike previous works that use Shannon information measures to quantify randomness, information or uniformity, we define our security measures in terms of a more general class of information measures: the Rényi information measures and their Gallager-type counterparts. A special case of these Rényi information measures is the class of Shannon information measures. We prove tight asymptotic results for the security measures and their exponential rates of decay. We also prove bounds on the second-order asymptotics and show that these bounds match when the magnitudes of the second-order coding rates are large. We do so by establishing new classes of non-asymptotic bounds on the equivocation and evaluating these bounds asymptotically using various probabilistic limit theorems.

« Back...

 

New techniques for information-theoretic indistinguishability, and applications
Stefano Tessaro, University of California, Santa Barbara, USA


Proving concrete bounds on the information-theoretic indistinguishability of two (interactive) objects is a core technical problem across all of cryptography. For instance, for symmetric constructions (e.g., block ciphers and message-authentication codes), such bounds are used to justify the choice of parameters, such as the number of rounds, needed to achieve a desired security level. This talk will advocate developing a richer theory to put such proofs on solid foundations, and with the aim of obtaining bounds which are as tight as possible.

In the first part of this talk, I will discuss a new method for indistinguishability proofs, called the "expectation method", which generalizes Patarin's H-coefficient technique. As one particular testbed application, I will show how this technique yields (near) exact bounds on the indistinguishability of so-called key-alternating ciphers (i.e., the multi-round iterated version of Even-Mansour ciphers), improving on previous work by Chen and Steinberger (EUROCRYPT '14), and settling a question which has remained open for a few years in the theory of symmetric cryptography.
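For readers unfamiliar with the construction, here is a toy sketch of a key-alternating (iterated Even-Mansour) cipher: XOR a round key, apply a public permutation, and repeat. This sketch reuses a single public permutation on a 256-element domain for simplicity; the formal analyses typically assume an independent random permutation per round.

```python
import random

random.seed(1)
N = 256                                    # toy domain {0, ..., 255}
P = list(range(N))
random.shuffle(P)                          # public random permutation
Pinv = [0] * N
for i, y in enumerate(P):
    Pinv[y] = i                            # its inverse

def em_encrypt(keys, x):
    """r-round key-alternating cipher with r+1 round keys."""
    for k in keys[:-1]:
        x = P[x ^ k]                       # key addition, then public permutation
    return x ^ keys[-1]                    # final key addition

def em_decrypt(keys, y):
    y ^= keys[-1]
    for k in reversed(keys[:-1]):
        y = Pinv[y] ^ k
    return y

keys = [37, 150, 201]                      # 2 rounds, 3 keys (toy values)
ct = em_encrypt(keys, 42)
assert em_decrypt(keys, ct) == 42
assert len({em_encrypt(keys, x) for x in range(N)}) == N   # it is a permutation
```

The indistinguishability question is how well an adversary with oracle access to both the cipher and the public permutation can tell this construction from an independent random permutation.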

In the second part, I will show how information-theoretic indistinguishability is particularly amenable to reductions which are "transcript-centric" (in contrast to traditional cryptographic reductions, which are adversary-centric). Such reductions naturally allow for tighter quantitative connections between indistinguishability problems, and I will discuss a few applications.

« Back...

 

Black-box impossibility results in cryptography
Luca Trevisan, The University of California at Berkeley, USA


Is it possible to base public-key encryption on one-way functions? Is it possible to construct length-doubling pseudorandom generators based on a constant number of invocations of a one-way function? What form could a negative answer to the above questions take? If public-key cryptography is realizable, then it can be realized "based on" any one-way function, and, if pseudorandom generators are realizable, then they can be realized with zero invocations of any fixed primitive. Thus, a negative answer can be obtained only if we restrict our question to constructions that "really" base their security analysis on the security of a primitive, without proving security from scratch.

In this tutorial, we will see a possible formalization of the above set-up: we will consider cryptographic constructions and security reductions that treat their primitive as a "black box," or as an oracle. Such constructions and security analyses capture most, although not all, of the known results in the foundations of cryptography. Then we will see how to prove impossibility results for such constructions and security proofs, by applying information-theoretic methods. Along the way, we will see how to tightly analyze the cryptographic hardness of random oracles.
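As a toy illustration of the last point (not part of the tutorial), one can model a random oracle as a random permutation on a small domain and count the oracle queries a generic black-box inverter makes: exhaustive search, which is essentially optimal in this model, needs about half the domain on average.

```python
import random

random.seed(0)
N = 512          # toy oracle domain size
trials = 2000
total_queries = 0

for _ in range(trials):
    f = list(range(N))
    random.shuffle(f)                      # a fresh "random oracle" (permutation)
    target = f[random.randrange(N)]        # image point to invert
    queries = 0
    for x in range(N):                     # generic inverter: query until a preimage is found
        queries += 1
        if f[x] == target:
            break
    total_queries += queries

avg = total_queries / trials
print(avg)        # concentrates near (N + 1) / 2
assert N / 3 < avg < 2 * N / 3
```

The point of the tutorial's tight analyses is to turn such query-counting intuitions into exact lower bounds for black-box constructions.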

« Back...

 

Attribute-based encryption & information-theoretic crypto
Hoeteck Wee, École Normale Supérieure, France


Can we encrypt data while enabling fine-grained access control and selective computation? In this talk, we will survey how addressing this question led to new connections and questions in information-theoretic cryptography.

« Back...

 

Solving zero-dimensional polynomial systems and applications to multi-variate cryptography
Sze Ling Yeo, Agency for Science, Technology and Research


In this talk, we first discuss a theoretical framework to solve zero-dimensional polynomial systems. In particular, we will introduce a new parameter, called the last fall degree which does not depend on the monomial order chosen. Complexity bounds to solve some polynomial systems based on this parameter will be presented. Finally, we discuss how our results prove that multi-HFE cryptosystems are insecure.
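By way of background (a toy sketch, not the last-fall-degree machinery of the talk), "zero-dimensional" means the system has only finitely many solutions over the algebraic closure; over a small finite field this can be checked by direct enumeration. The example system below is hypothetical.

```python
p = 13   # small prime field F_p

# Hypothetical zero-dimensional system over F_p:
#   f1 = x^2 + y^2 - 1,   f2 = x*y - 1
def f1(x, y):
    return (x * x + y * y - 1) % p

def f2(x, y):
    return (x * y - 1) % p

# Brute-force enumeration of the variety (feasible only for tiny fields).
solutions = [(x, y) for x in range(p) for y in range(p)
             if f1(x, y) == 0 and f2(x, y) == 0]
print(solutions)

# Bezout bounds the count over the algebraic closure by deg(f1)*deg(f2) = 4.
assert len(solutions) <= 4
for x, y in solutions:
    assert f1(x, y) == 0 and f2(x, y) == 0
```

The talk's framework concerns solving such systems efficiently via Gröbner-basis-style computations, where the last fall degree controls the complexity independently of the monomial order.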

« Back...

 

Crossing the theory-practice chasm: on deploying secure computations commercially
Moti Yung, Snapchat and Columbia University, USA


Technological innovations in security and privacy are critical to advancing modern computing in our time. I will present an industrial effort involving the deployment of commercial applications designed and built as secure multi-party computation protocols for specific tasks, to be used repetitively to achieve a number of concrete, ubiquitous business goals. In these applications, the outputs are calculated in the presence of privacy constraints which prevent parties from sharing their individual inputs directly and openly.
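As a minimal sketch of the flavour of such protocols (illustrative only, not the deployed design), three parties can jointly compute the sum of their private inputs using additive secret sharing, so that no party ever sees another party's input.

```python
import random

random.seed(7)
Q = 2**31 - 1   # public modulus; all arithmetic is mod Q

def share(secret, n):
    """Split a secret into n additive shares; any n-1 of them reveal nothing."""
    shares = [random.randrange(Q) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % Q)
    return shares

# Three parties with private inputs each distribute shares of their input.
inputs = [1200, 3400, 5600]
all_shares = [share(x, 3) for x in inputs]

# Party j locally adds the j-th share of every input...
partial = [sum(s[j] for s in all_shares) % Q for j in range(3)]

# ...and only these partial sums are published and combined.
result = sum(partial) % Q
print(result)
assert result == sum(inputs) % Q
```

Each published partial sum is uniformly distributed on its own, so the parties learn the total without learning the individual inputs, which is exactly the kind of privacy constraint the abstract describes.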


« Back...

 