Users outsource their data and computation to cloud-service providers such as Amazon EC2, Google Cloud, and Microsoft Azure, which are potentially untrusted or may be compromised. Meanwhile, companies collect data from users in order to run machine-learning algorithms on that data and develop products and services. Despite the great benefits of these practices, they currently require users to give up control of their data and to trade integrity and privacy for utility.
This talk discusses several cryptographic techniques developed to address these issues. I will first talk about techniques for verifiable storage and computation, which can be used to ensure the correctness of computations performed in the cloud and of services offered by cloud providers. I will then discuss privacy-preserving machine learning, which allows companies to execute machine-learning algorithms without learning users' data. I will conclude with some thoughts on future applications of these new protocols to other domains.
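To give a flavor of verifiable storage, here is a minimal sketch of one classic building block: a Merkle tree, which lets a client holding only a short root hash check that an untrusted server returned a data block unmodified. This is a generic illustration, not the specific protocol presented in the talk; the function names and the duplicate-last-node padding rule are choices made for this sketch.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(blocks):
    """Build a Merkle tree; returns a list of levels, leaves first."""
    level = [h(b) for b in blocks]
    tree = [level]
    while len(level) > 1:
        if len(level) % 2:              # pad odd levels by duplicating the last node
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree

def prove(tree, index):
    """Collect the sibling hash at each level, bottom-up, for one leaf."""
    proof = []
    for level in tree[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append((level[index ^ 1], index % 2 == 0))  # (sibling, sibling-is-right?)
        index //= 2
    return proof

def verify(root, block, proof):
    """Recompute the root from a block and its proof; True iff it matches."""
    node = h(block)
    for sibling, sib_is_right in proof:
        node = h(node + sibling) if sib_is_right else h(sibling + node)
    return node == root
```

The client stores only the root (32 bytes) and can verify any returned block with a proof of logarithmic size, so a malicious or compromised server cannot alter stored data without detection.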