Tselil Schramm - Postdoc in Theoretical Computer Science, Harvard and MIT
March 1, 2019, 9 a.m. - 10 a.m.
McConnell 321
ABSTRACT:
Algorithmic problems that arise in practice are often not well described by the worst-case model of computation. Statistical models are an avenue for building a theory that better supports computational endeavors: rather than designing algorithms for the worst case, we assume that inputs come from a structured distribution. From an algorithmic perspective, statistical models are a canonical construct in machine learning; from a complexity-theoretic perspective, they are a rich source of cryptographic and security primitives. In this talk, I will describe my work towards building a theory of algorithms for statistical models using convex relaxations (in particular, the powerful sum-of-squares algorithm). I will discuss my efforts to understand the tradeoff between computation and statistical information, to give fast (often linear- or almost-linear-time) algorithms based on slower semidefinite programs, and to draw a broad connection between convex programs and spectral algorithms for statistical problems.
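As a flavor of what "spectral algorithms for statistical problems" means, the sketch below (an illustrative example, not drawn from the talk itself) runs the textbook spectral approach on a hypothetical spiked-matrix instance: a hidden rank-one signal is buried in symmetric Gaussian noise, and above a signal-strength threshold the top eigenvector of the observation correlates with the hidden labels. The model, parameters (`n`, `lam`), and helper names are assumptions made for the example.

```python
# Illustrative sketch: a simple spectral algorithm for a planted statistical problem.
# Model (assumed for this example): observe Y = (lam/n) * x x^T + W, where x is a
# hidden +/-1 vector and W is symmetric Gaussian noise. For lam above a constant
# threshold, the top eigenvector of Y aligns with x, so its sign pattern recovers
# most of the hidden labels.

import numpy as np


def spiked_instance(n, lam, rng):
    """Sample a hidden +/-1 vector x and a noisy rank-one observation Y."""
    x = rng.choice([-1.0, 1.0], size=n)
    noise = rng.normal(size=(n, n)) / np.sqrt(n)
    W = (noise + noise.T) / np.sqrt(2)          # symmetric Gaussian noise matrix
    Y = (lam / n) * np.outer(x, x) + W
    return x, Y


def spectral_recover(Y):
    """Return +/-1 labels from the sign pattern of the top eigenvector of Y."""
    eigvals, eigvecs = np.linalg.eigh(Y)        # eigh returns ascending eigenvalues
    top = eigvecs[:, -1]                        # eigenvector of the largest eigenvalue
    return np.sign(top)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x, Y = spiked_instance(n=2000, lam=3.0, rng=rng)
    x_hat = spectral_recover(Y)
    # The estimate is only defined up to a global sign, so take the absolute value.
    overlap = abs(np.dot(x, x_hat)) / len(x)
    print(f"overlap with hidden signal: {overlap:.2f}")
```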
****
Tselil Schramm is a postdoc in theoretical computer science at Harvard and MIT, hosted by Boaz Barak, Jon Kelner, Ankur Moitra, and Pablo Parrilo. She obtained her PhD in computer science from U.C. Berkeley, advised by Prasad Raghavendra and Satish Rao and supported by an NSF Graduate Research Fellowship. She spent Fall 2017 as the Google Research Fellow at the Simons Institute program on optimization. Her research interests include statistical and average-case problems, optimization via convex programs (especially the sum-of-squares algorithm), spectral algorithms, spectral graph theory, and more.
ALL ARE WELCOME