ANONYMOUS MESSAGING:
HORNET: High-speed Onion Routing at the Network Layer.
Chen Chen, Daniele Enrico Asoni, David Barrera, George Danezis, Adrian Perrig
Abstract: We present HORNET, a system that enables high-speed end-to-end anonymous channels by leveraging next generation network architectures. HORNET is designed as a low-latency onion routing system that operates at the network layer, thus enabling a wide range of applications. Our system uses only symmetric cryptography for data forwarding yet requires no per-flow state on intermediate nodes. This design enables HORNET nodes to process anonymous traffic at over 93 Gb/s. HORNET can also scale as required, adding minimal processing overhead per additional anonymous channel. We discuss design and implementation details, as well as a performance and security evaluation.
In ACM CCS 2015, Denver, CO, USA, October 2015. For the full paper visit: http://arxiv.org/abs/1507.05724v1
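The data-plane idea above (symmetric cryptography only, with no per-flow state at relays) can be illustrated by the classic onion-layering pattern. The sketch below is a toy illustration of layered symmetric encryption using the Python cryptography library's Fernet primitive; it is not HORNET's actual packet format or key-establishment protocol, and all key handling is simplified for illustration.

# Illustrative sketch of layered symmetric encryption ("onion" layers).
# This is NOT HORNET's packet format; it only shows the general idea that
# each hop peels one layer using a symmetric key, so no public-key
# operations are needed on the data-forwarding path.
from cryptography.fernet import Fernet

# One symmetric key per hop (in HORNET these are established once during an
# initial setup phase and carried in the packet header rather than stored
# as per-flow state at the nodes).
hop_keys = [Fernet.generate_key() for _ in range(3)]

def wrap(payload: bytes, keys) -> bytes:
    """Encrypt the payload in layers, innermost layer first."""
    for key in reversed(keys):
        payload = Fernet(key).encrypt(payload)
    return payload

def peel(onion: bytes, key: bytes) -> bytes:
    """A single hop removes exactly one layer with its own key."""
    return Fernet(key).decrypt(onion)

packet = wrap(b"anonymous application data", hop_keys)
for key in hop_keys:          # each hop in path order peels one layer
    packet = peel(packet, key)
assert packet == b"anonymous application data"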
Better open-world website fingerprinting.
Jamie Hayes, George Danezis
Abstract: Website fingerprinting enables an attacker to infer which web page a client is browsing through encrypted or anonymized network connections. We present a new website fingerprinting technique based on random decision forests and evaluate performance over standard web pages as well as Tor hidden services, on a larger scale than previous works. Our technique, k-fingerprinting, performs better than current state-of-the-art attacks even against website fingerprinting defenses, and we show that it is possible to launch a website fingerprinting attack in the face of a large amount of noisy data. We can correctly determine which of 30 monitored hidden services a client is visiting with an 85% true positive rate (TPR) and a false positive rate (FPR) as low as 0.02%, from a world size of 100,000 unmonitored web pages. We further show that error rates vary widely between web resources, and thus some patterns of use will be predictably more vulnerable to attack than others.
For the full paper visit: http://arxiv.org/pdf/1509.00789.pdf
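As a rough illustration of the classifier at the heart of such an attack, the sketch below trains a random decision forest on placeholder traffic features and extracts per-tree leaf indices, which is the kind of fingerprint k-fingerprinting compares with a nearest-neighbour step. The data, feature count, and class count are synthetic assumptions, not the paper's dataset or feature set.

# Minimal sketch of the random-forest step behind a website fingerprinting
# attack in the spirit of k-fingerprinting. Traffic features and labels are
# synthetic placeholders; the real attack extracts features such as packet
# counts, timings, and burst statistics from encrypted traces.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 20))       # placeholder trace features
y_train = rng.integers(0, 30, size=300)    # 30 "monitored" pages

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)

# k-fingerprinting uses the vector of leaf indices a trace falls into as its
# fingerprint and compares fingerprints with a k-nearest-neighbour step;
# `apply` returns those leaf indices, one per tree.
X_test = rng.normal(size=(5, 20))
fingerprints = forest.apply(X_test)        # shape: (5, 100)
print(fingerprints.shape, forest.predict(X_test))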
BETTER, FASTER, SIMPLER ZK PROOFS:
Efficient Culpably Sound NIZK Shuffle Argument without Random Oracles.
Prastudy Fauzi and Helger Lipmaa
Abstract: One way to guarantee security against malicious voting servers is to use NIZK shuffle arguments. Up to now, only two NIZK shuffle arguments in the CRS model have been proposed. Both arguments are relatively inefficient compared to known random oracle based arguments. We propose a new, more efficient, shuffle argument in the CRS model. Importantly, its online prover’s computational complexity is dominated by only two (n+1)-wide multi-exponentiations, where n is the number of ciphertexts. Compared to the previously fastest argument by Lipmaa and Zhang, it satisfies a stronger notion of soundness.
In CT-RSA 2016, San Francisco, CA, USA, February 29–March 4, 2016. Springer, Heidelberg. For the full paper visit: http://eprint.iacr.org/2015/1112
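For readers unfamiliar with the cost measure quoted above, the sketch below shows what an (n+1)-wide multi-exponentiation is, over a toy modular group rather than the bilinear groups the argument actually uses; the bases and exponents are arbitrary placeholders.

# A multi-exponentiation computes prod(g_i ** e_i) for many bases at once;
# the shuffle argument's online prover cost is dominated by two such
# (n+1)-wide products. Shown here over a toy prime modulus purely for
# illustration; the actual construction works in a bilinear group.
p = 2**127 - 1                      # a Mersenne prime, used as a toy modulus

def multi_exp(bases, exponents, modulus=p):
    """Naive (n+1)-wide multi-exponentiation: prod(b^e) mod p."""
    result = 1
    for b, e in zip(bases, exponents):
        result = (result * pow(b, e, modulus)) % modulus
    return result

n = 4
bases = [3, 5, 7, 11, 13]           # n + 1 bases
exponents = [17, 23, 29, 31, 37]    # n + 1 exponents
print(multi_exp(bases, exponents))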
Prover-Efficient Commit-And-Prove Zero-Knowledge SNARKs.
Helger Lipmaa
Abstract: Zk-SNARKs (succinct non-interactive zero-knowledge arguments of knowledge) are needed in many applications. Unfortunately, all previous zk-SNARKs for interesting languages are either inefficient for the prover, or are non-adaptive and based on a commitment scheme that depends both on the prover’s input and on the language, i.e., they are not commit-and-prove (CaP) SNARKs. We propose a proof-friendly extractable commitment scheme, and use it to construct prover-efficient adaptive CaP succinct zk-SNARKs for different languages that can all reuse committed data. In the new zk-SNARKs, the prover’s computation is dominated by a linear number of cryptographic operations. We use batch verification to decrease the verifier’s computation.
In Africacrypt 2016, Fes, Morocco, April 13–15, 2016. Springer, Heidelberg. For the full paper visit: http://eprint.iacr.org/2014/396
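The batch-verification idea mentioned above can be illustrated with the standard small-exponent batching trick: many exponent equations are checked via one random linear combination. The toy sketch below uses plain modular exponentiation with arbitrary parameters; the paper's verifier batches its pairing-based equations, which are not shown here.

# Instead of checking many exponent equations y_i == g^x_i one at a time,
# pick random coefficients r_i and check a single combined equation
#     prod(y_i ^ r_i) == g ^ (sum(r_i * x_i))  (mod p).
import secrets

p = 2**127 - 1      # toy prime modulus, for illustration only
g = 3

def batch_verify(claims) -> bool:
    """claims: list of (x_i, y_i) pairs claiming that y_i == g^x_i mod p."""
    lhs, exponent = 1, 0
    for x, y in claims:
        r = secrets.randbelow(2**64)
        lhs = (lhs * pow(y, r, p)) % p
        exponent += r * x
    return lhs == pow(g, exponent, p)

claims = [(x, pow(g, x, p)) for x in (5, 12, 99)]
assert batch_verify(claims)                 # all claims valid: batch accepts
claims[1] = (12, pow(g, 13, p))             # corrupt one claim
assert not batch_verify(claims)             # batch rejects (w.h.p.)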
DIFFERENTIAL PRIVACY FOR MIXES:
Efficient Private Statistics with Succinct Sketches.
Luca Melis, George Danezis, Emiliano De Cristofaro
Abstract: Large-scale collection of contextual information is often essential in order to gather statistics, train machine learning models, and extract knowledge from data. The ability to do so in a privacy-preserving way (i.e., without collecting fine-grained user data) enables a number of additional computational scenarios that would be hard, or outright impossible, to realize without strong privacy guarantees. In this paper, we present the design and implementation of practical techniques for privately gathering statistics from large data streams. We build on efficient cryptographic protocols for private aggregation and on data structures for succinct data representation, namely, Count-Min Sketch and Count Sketch. These allow us to reduce the communication and computation complexity incurred by each data source (e.g., end-users) from linear to logarithmic in the size of their input, while introducing a parametrized upper-bounded error that does not compromise the quality of the statistics. We then show how to use our techniques, efficiently, to instantiate real-world privacy-friendly systems, supporting recommendations for media streaming services, prediction of user locations, and computation of median statistics for Tor hidden services.
To be published in the proceedings of NDSS 2016. For the full paper visit: http://arxiv.org/abs/1508.06110
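To illustrate why the per-source cost drops to logarithmic and why the sketches compose under addition (the property the private-aggregation layer relies on), here is a minimal Count-Min Sketch in plain Python. The width, depth, hash choice, and item names are illustrative assumptions; the paper's cryptographic aggregation protocol is not shown.

# Minimal Count-Min Sketch: each data source only needs to send a small
# (width x depth) table instead of a full histogram, and sketches from many
# users can simply be added cell-wise, which is what makes them compatible
# with additive private-aggregation protocols.
import hashlib

class CountMinSketch:
    def __init__(self, width=64, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, item: str, row: int) -> int:
        h = hashlib.sha256(f"{row}:{item}".encode()).hexdigest()
        return int(h, 16) % self.width

    def add(self, item: str, count: int = 1):
        for row in range(self.depth):
            self.table[row][self._index(item, row)] += count

    def estimate(self, item: str) -> int:
        return min(self.table[row][self._index(item, row)]
                   for row in range(self.depth))

    def merge(self, other: "CountMinSketch"):
        """Cell-wise addition: an aggregator can sum blinded/encrypted cells."""
        for row in range(self.depth):
            for col in range(self.width):
                self.table[row][col] += other.table[row][col]

# Two "users" build local sketches; the aggregator only needs their sum.
a, b = CountMinSketch(), CountMinSketch()
a.add("hidden-service-x", 3)
b.add("hidden-service-x", 2)
b.add("hidden-service-y", 5)
a.merge(b)
print(a.estimate("hidden-service-x"))   # >= 5 (parametrized, upper-bounded error)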
OTHER RESEARCH OUTPUTS:
Centrally Banked Cryptocurrencies.
George Danezis, Sarah Meiklejohn
Abstract: Current cryptocurrencies, starting with Bitcoin, build a decentralized blockchain-based transaction ledger, maintained through proofs-of-work that also generate a monetary supply. Such decentralization has benefits, such as independence from national political control, but also significant limitations in terms of scalability and computational cost. We introduce RSCoin, a cryptocurrency framework in which central banks maintain complete control over the monetary supply, but rely on a distributed set of authorities, or mintettes, to prevent double-spending. While monetary policy is centralized, RSCoin still provides strong transparency and auditability guarantees. We demonstrate, both theoretically and experimentally, the benefits of a modest degree of centralization, such as the elimination of wasteful hashing and a scalable system for avoiding double-spending attacks.
To be published at NDSS 2016. For the full paper visit: http://arxiv.org/abs/1505.06895
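As a toy illustration of the double-spending check that mintettes perform, the sketch below shards coin identifiers across a few authorities, each of which refuses to approve a second spend of a coin it has already seen. It omits RSCoin's actual two-phase protocol, signatures, and the central bank's role; all names are hypothetical.

# Toy sketch of the idea behind mintettes: the set of unspent coins is
# sharded across authorities, and a coin's "owner" authority rejects a
# second spend of the same coin, so no proof-of-work is needed.
class Mintette:
    def __init__(self):
        self.spent = set()          # coin ids this mintette has seen spent

    def approve_spend(self, coin_id: str) -> bool:
        if coin_id in self.spent:   # double-spend attempt: reject
            return False
        self.spent.add(coin_id)
        return True

class Shard:
    """Route each coin to the mintette responsible for it."""
    def __init__(self, n_mintettes: int = 4):
        self.mintettes = [Mintette() for _ in range(n_mintettes)]

    def spend(self, coin_id: str) -> bool:
        owner = self.mintettes[hash(coin_id) % len(self.mintettes)]
        return owner.approve_spend(coin_id)

ledger = Shard()
assert ledger.spend("coin-42") is True      # first spend accepted
assert ledger.spend("coin-42") is False     # second spend of same coin rejected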