(7 March) Homework 0
(warmup) is now available. Due 21/3 in class. Submission in pairs
encouraged (but not in triplets or larger, please).
This homework uses the
paper from 2009 introducing Google Flu Trends (GFT), and the
paper from 2014 describing the failure of GFT since 2011.
Code for investigating privacy of summary statistics release in
GWAS. The differential privacy topic, started this week and continuing next week, is based on
The Algorithmic Foundations of Differential Privacy by Dwork and Roth,
and on the paper by Wasserman and Zhou on the statistical theory of differential privacy.
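For a concrete feel for the flavor of these algorithms, here is a minimal R sketch of the Laplace mechanism discussed in Dwork and Roth; the data, the epsilon value, and the assumption that observations lie in [0, 1] are made up purely for the example:

  # Laplace mechanism: release a noisy mean of values assumed to lie in [0, 1]
  rlaplace <- function(n, scale) {              # base R has no Laplace sampler
    u <- runif(n, -0.5, 0.5)
    -scale * sign(u) * log(1 - 2 * abs(u))
  }
  private_mean <- function(x, epsilon) {
    sensitivity <- 1 / length(x)                # global sensitivity of the mean on [0, 1]
    mean(x) + rlaplace(1, sensitivity / epsilon)
  }
  private_mean(runif(100), epsilon = 0.5)       # one epsilon-differentially-private release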
Homework 1 is now available, due on 11 April in class. Submission in pairs is
encouraged. The 2009 paper by Jacobs et al. may be used as a reference for Problem 1.
The next two classes (26/3 and 11/4) will
deal with high dimensional modeling (large p, p >> n). We will
discuss the statistical and computational challenges that are unique
to this setting and some of the most popular solutions. Relevant
reading materials include chapters 2-3 of a review I wrote on sparse
modeling, and the papers on LARS by Efron et al. and its generalization by Rosset and Zhu.
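For those who want to experiment before the reading, below is a minimal R sketch of lasso fitting in the p >> n regime; it uses the glmnet package rather than the LARS algorithm itself, and the simulated data and parameter choices are only an illustration:

  # Lasso on simulated p >> n data (sparse true signal with 5 nonzero coefficients)
  library(glmnet)
  set.seed(1)
  n <- 50; p <- 1000
  X <- matrix(rnorm(n * p), n, p)
  beta <- c(rep(2, 5), rep(0, p - 5))
  y <- as.vector(X %*% beta + rnorm(n))
  fit <- cv.glmnet(X, y, alpha = 1)                      # alpha = 1 gives the lasso penalty
  b_hat <- as.numeric(coef(fit, s = "lambda.min"))[-1]   # estimated coefficients (drop intercept)
  which(b_hat != 0)                                      # variables selected at the CV-chosen lambda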
Homework 2 is now available. Due 25 April in class.
Problem 1 uses these datasets, and there is also sample code.
Problem 2 (extra credit) uses this paper.
(25 April) The next two classes (25/4 and 2/5) will
include a brief introduction to Deep Learning methodology and applications. The relevant reading materials for this week include Chapter 18 of CASI and Giora Simchoni's blog entry.
(2 May) Moni's presentation from class today.
Recommended reading: Visualizing and Understanding Convolutional Networks; Efficient Estimation of Word Representations in Vector Space; Generative Adversarial Networks (an important topic we did not get to discuss).
(7 May) Homework 3 is now available. Due 23 May in class.
It uses the code HW3-1.r (which requires installing the Keras R package, and also Python if you don't have it) and HW3-2.r. Note: Unlike previous homeworks, this one was prepared from scratch (thanks to Moni for his help!). So despite our efforts, there might be problems or issues. If you find any, please let me know. If you find a major problem and propose an appropriate way of fixing it, you may also get a bonus on the homework!
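If you want to check that your Keras installation works before starting the homework, here is a minimal, self-contained sketch; it is not taken from the homework code, and the architecture and training settings are arbitrary choices for illustration:

  # Minimal dense network on MNIST using the Keras R interface
  library(keras)
  mnist <- dataset_mnist()
  x_train <- array_reshape(mnist$train$x, c(nrow(mnist$train$x), 784)) / 255
  y_train <- to_categorical(mnist$train$y, 10)
  model <- keras_model_sequential() %>%
    layer_dense(units = 128, activation = "relu", input_shape = 784) %>%
    layer_dense(units = 10, activation = "softmax")
  model %>% compile(optimizer = "adam",
                    loss = "categorical_crossentropy",
                    metrics = "accuracy")
  model %>% fit(x_train, y_train, epochs = 5, batch_size = 128, validation_split = 0.1)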
Today's class uses the survey by Goldenberg et
al. (which appeared in 2010 in the Foundations and Trends in Machine
Learning series). Code
from class for fitting models to the Sampson monks and E. coli
networks.
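For reference, a common way to fit a simple model to the Sampson monks data in R is via the ergm package; this is a generic sketch and not necessarily the model or code used in class:

  # Exponential random graph model (ERGM) for the Sampson monks "liking" network
  library(ergm)                            # also loads the network package
  data(sampson)                            # provides the 'samplike' network object
  fit <- ergm(samplike ~ edges + mutual)   # density + reciprocity terms
  summary(fit)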
(16 May) Today's class focuses on multiple testing and selective inference in big data settings. Slides on
quality preserving databases.
Some slides from
Yoav Benjamini that cover many aspects of the discussion:
2 (pptx). Also recommended: Why Most Published Research Findings Are False by Ioannidis.
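As background for the discussion, the Benjamini-Hochberg FDR procedure is available directly in base R; the simulated p-values and the 10% level below are only an illustration:

  # Benjamini-Hochberg: control the false discovery rate over many simultaneous tests
  set.seed(1)
  pvals <- c(runif(950), rbeta(50, 1, 50))       # 950 nulls plus 50 small "signal" p-values
  rejected <- p.adjust(pvals, method = "BH") <= 0.10
  sum(rejected)                                  # number of discoveries at FDR level 10%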
The goal of this course is to present some of the unique
statistical challenges that the new era of Big Data brings, and
discuss their solutions. The course will be a topics course,
meaning we will discuss various aspects that may not necessarily be
related or linearly organized. Our challenge will be to cover a
wide range of topics, while being specific and concrete in
describing the statistical aspects of the problems and the proposed
solutions, and discussing these solutions critically.
We will also keep in mind other practical aspects like computation
and demonstrate the ideas and results on real data when possible.
Accordingly, the homework and the final exam will include a
combination of hands-on programming and modeling with theoretical analysis.
Big Data is a general and rather vague term, typically referring to
data and problems that share some of the following characteristics:
It is big (obviously): this could mean having many
observations (large n), many features/variables (large p), or both
It has additional structure information: temporal, spatial,
graph structure (like network data), etc.
It leads to non-traditional modeling problems, like network
evolution, collaborative filtering, structured learning, etc.
It presents significant practical challenges in handling the
data and modeling it, including:
The need to maintain privacy and security of the data while
sharing it and extracting information from it
The difficulty in storing and performing calculations at scale
The difficulty in correctly interpreting the data and
generating valid statistical modeling problems from it
The full extent of its potential utility is unclear and subject to research
Some examples of typical Big Data domains gaining
importance in recent years:
Internet usage data, including social network information, search
and advertising information, etc.
Health records and related information
Scientific databases, including areas like particle physics,
electron microscopy and genetics
Images and video surveillance data
A key topic in data modeling in general and Big Data in particular
is predictive modeling (regression, classification). Since the
course Statistical Learning (given last year and next year) deals mainly
with exposition and statistical analysis of algorithms in this area,
it will not be a focus of this course. However, some aspects of this
area that are not covered in that course, in particular the
p >> n case, efficient computation, and deep learning, will be discussed in some detail.
Tentative list of topics to be covered during the semester:
Network modeling: Probabilistic models of network evolution;
Parameter estimation and inference
Privacy: Differential privacy; Algorithms to guarantee privacy
in different settings; Examples of privacy breaches
Statistical validity of scientific research on modern data:
Replicability; Sequential testing on public databases
Spectral analysis of large random matrices: statistical and computational aspects
p >> n: Sparsity and computation
Deep learning: theory and methodology
Turning data into modeling: Competitions and proof of concept
projects; Leakage in data mining
We will have 3-5 guest lectures during the semester, but they will
be treated as regular classes rather than enrichment classes
(specifically, their material will be included in the homework and the final exam).
Basic knowledge of mathematical foundations:
Calculus: Integration; Sums of series; Extrema, etc.
Linear algebra/geometry: Basic properties of matrices:
inverse, trace, determinant; SVD and eigen decompositions: PCA and
its geometrical and statistical interpretations
Solid fundamentals in Probability: Discrete/continuous
probability definitions; Important distributions:
Bernoulli/Binomial, Poisson, Geometric, Hypergeometric, Negative
Binomial, Normal, Exponential/Double Exponential (Laplace), Uniform,
Beta, Gamma, etc.; Limit laws: large numbers and CLT; Inequalities:
Markov, Chebyshev, Hoeffding
Conditional distributions and moments: Basic definitions and
Bayes rules; Definitions and properties of conditional expectation
and variance; Laws: Iterated expectation, total variation; Intuition
vs mathematics in conditional probabilities: Simpson's paradox etc.
Solid fundamentals in Statistics:
The equivalent of a course in
Statistical Theory: Basic definitions: Estimation, confidence
intervals, hypothesis testing, basic properties of statistical tests
and estimators: Level, power, p-values, bias, variance, consistency
etc.; Basic theoretical results: Neyman-Pearson Lemma,
Rao-Blackwell, Cramer-Rao, Wilks; Important families of tests: Z, t, F, χ², GLRT; Bayesian inference: Basics and uses
The equivalent of a course in Regression / Analysis of
Variance: Algebra and geometry of multivariate regression; Inference
in linear regression; Error decompositions: Bias + Variance;
Generalizations: Logistic regression, auto-regression, model
selection (Cp/AIC); Basic ANOVA; Familiarity with mixed/random effects models
Advantage: Some knowledge in modern/nonparametric statistics
and/or statistical learning; Practical experience with R
Books and resources
The course does not have a specific textbook, and most lectures will
be on the board and not using slides. Some of the material will
closely follow chapters from books or published papers, and when
this is the case it will be announced. However, it is critical that
all students have all the material presented in class. If you miss
classes, make sure to get the material from someone!
Relevant books:
Elements of Statistical Learning by Hastie, Tibshirani & Friedman (including freely available pdf, data and errata)
Modern Applied Statistics with S-PLUS by Venables and Ripley
Frontiers in Massive Data Analysis, a report from the National Research Council
Computer Age Statistical Inference by Efron and Hastie
There will be four to five homework assignments, which will count for
about 30% of the final grade, and a final take-home exam. Both the
homework and the exam will combine theoretical analysis with
hands-on data analysis.