Statistics of Big Data

 
Semester 1 2021-22
Monday, 12-15
Schreiber 007
Home page: http://www.tau.ac.il/~saharon/BigData.html
Lecturer: Saharon Rosset
Schreiber 022
saharon@tauex.tau.ac.il
Office hrs: By email coordination

Announcements and handouts

(11 October)
Class 1 recording. Note that it only includes the first hour and the last half hour: unfortunately I forgot to turn the recording on for the middle portion (the jump is around minute 52 of the video). However, the missing portion closely follows the class notes.
Class notes are available.
Homework 0 (warmup) is now available, due 25/10 before class; submission is by email to me. Submission in pairs is encouraged (but not in triplets or larger, please).
This homework uses the Nature paper from 2009 introducing Google Flu Trends (GFT), and the Science paper from 2014 describing the failure of GFT since 2011.
(18 October)
Class 2 recording
Class notes are available.
Code for demonstrating privacy violation in releasing GWAS summaries.
(25 October)
Class 3 recording
Class notes.
Main source for today's material: The Algorithmic Foundations of Differential Privacy by Dwork and Roth
Homework 1 is now available, due before class on 8/11. Submission in pairs is encouraged. This 2009 paper by Jacobs et al. may be used as a reference for Problem 1.
(1 November)
Class 4 recording
Class notes.
Sources for today's material: The Algorithmic Foundations of Differential Privacy by Dwork and Roth and A Statistical Framework for Differential Privacy by Wasserman and Zhou
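To make the central technique in these sources concrete, here is a minimal R sketch (my own illustration, not part of the course materials) of the Laplace mechanism for releasing an epsilon-differentially private count; the function names rlaplace and private_count are invented for this example.

# Laplace mechanism sketch: a count query has sensitivity 1
# (adding or removing one record changes the count by at most 1),
# so Laplace noise with scale 1/epsilon gives epsilon-differential privacy.
rlaplace <- function(n, scale) {
  # Laplace(0, scale) noise as the difference of two exponentials
  rexp(n, rate = 1 / scale) - rexp(n, rate = 1 / scale)
}

private_count <- function(x, condition, epsilon) {
  true_count <- sum(condition(x))
  true_count + rlaplace(1, scale = 1 / epsilon)
}

# Example: privately report how many observations exceed 10
set.seed(1)
x <- rpois(1000, lambda = 8)
private_count(x, function(v) v > 10, epsilon = 0.5)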
(8 November)
The next 2-3 classes will deal with high dimensional modeling (large p, p >> n). We will discuss the statistical and computational challenges that are unique to this setting and some of the most popular solutions. Relevant reading materials include chapters 2-3 of ESL, this review I wrote on sparse modeling, and the papers on LARS by Efron et al. and its generalization by Rosset and Zhu. We will also discuss compressed sensing, with the most relevant references being Candes et al. (2005) and Meinshausen and Yu (2009). A short illustrative lasso sketch appears at the end of this announcement.
Class 5 recording
Class 5 notes.
Sources for today's material: The Algorithmic Foundations of Differential Privacy by Dwork and Roth and A Statistical Framework for Differential Privacy by Wasserman and Zhou
Homework 2 is now available. Due 29 November, before class.
Problem 1 uses train.csv and test.csv datasets, and there is also sample code in sparse.r.
Problem 2 extra credit uses this paper.
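To make the p >> n setting described above concrete, here is a minimal lasso sketch in R using the glmnet package (an assumption on my part; the course's own sparse.r and the train.csv/test.csv data may look quite different).

# Sketch: lasso regularization path in a p >> n problem (illustrative only)
library(glmnet)

set.seed(1)
n <- 100; p <- 1000                  # many more predictors than observations
X <- matrix(rnorm(n * p), n, p)
beta <- c(rep(2, 5), rep(0, p - 5))  # sparse true coefficient vector
y <- drop(X %*% beta + rnorm(n))

fit <- glmnet(X, y)                  # full lasso path over a grid of lambda values
cvfit <- cv.glmnet(X, y)             # choose lambda by cross-validation
coef_hat <- coef(cvfit, s = "lambda.min")
sum(coef_hat != 0)                   # number of selected (nonzero) coefficients
plot(fit, xvar = "lambda")           # coefficient paths versus log(lambda)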
(15 November)
Class 6 recording
Class 6 notes.
(22 November)
Class 7 recording
Class 7 notes.
After completing the LARS-Lasso discussion today, we will start the topic of network data analysis. Our main source is the survey by Goldenberg et al. (which appeared in 2010 in the Foundations and Trends in Machine Learning series).
(29 November)
Class 8 recording
Class 8 notes.
Code for fitting models to the Sampson monks and E. coli networks
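To give a flavor of this kind of analysis, here is a minimal sketch fitting an exponential random graph model (ERGM) to the Sampson monks data shipped with the R ergm package; the model terms are chosen purely for illustration, and the course code linked above may differ.

# Sketch: a simple ERGM for the Sampson monastery liking network (illustrative only)
library(ergm)

data(sampson)                          # loads 'samplike', a directed network of 18 monks
summary(samplike ~ edges + mutual)     # observed counts of the two model statistics

fit <- ergm(samplike ~ edges + mutual) # density and reciprocity effects
summary(fit)

gof_fit <- gof(fit)                    # simulate from the fitted model to assess fit
plot(gof_fit)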
(6 December)
Class 9 recording
Class 9 notes
(13 December)
Class 10 recording
Slides for Boaz Nadler's presentation: Introduction, Parts 1&2 of his talk, and Part 3 of his talk
(20 December)
Class 11 recording
Class 11 notes
QPD presentation
Homework 3 is now available. The first problem uses the files covtrain.csv and covtest.csv, with code hints in pca.r. The second problem relies on topics from Yoav Benjamini's talk and refers to the Science paper by Zeggini et al.
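For the PCA part of the first problem, here is a minimal sketch of the kind of computation pca.r presumably hints at; the column layout assumed below for covtrain.csv (last column is the class label, the rest are numeric features) is a guess, so adjust it to the actual file.

# Sketch: PCA on the training features with prcomp (illustrative only)
train <- read.csv("covtrain.csv")
X <- train[, -ncol(train)]            # ASSUMPTION: last column is the label

pc <- prcomp(X, center = TRUE, scale. = TRUE)   # PCA on standardized features
summary(pc)                                     # variance explained per component
plot(pc$sdev^2 / sum(pc$sdev^2), type = "b",
     xlab = "Component", ylab = "Proportion of variance explained")

scores <- pc$x[, 1:2]                 # project onto the first two components
plot(scores, pch = 20)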
(27 December)
Class 12 recording (unfortunately there are some problems with the video at the beginning, and the screen share did not work properly for most of the talk; however, the full presentation is available below).
Yoav Benjamini's presentation
(3 January)
Class 13 recording

Syllabus

The goal of this course is to present some of the unique statistical challenges that the new era of Big Data brings, and discuss their solutions. The course will be a topics course, meaning we will discuss various aspects that may not necessarily be related or linearly organized. Our challenge will be to cover a wide range of topics, while being specific and concrete in describing the statistical aspects of the problems and the proposed solutions, and discussing these solutions critically. We will also keep in mind other practical aspects like computation and demonstrate the ideas and results on real data when possible. Accordingly, the homework and the final project will include a combination of hands-on programming and modeling with theoretical analysis.
Big Data is a general and rather vague term, typically referring to data and problems that share some of the following characteristics:
Some examples of typical Big Data domains gaining importance in recent years:
A key topic in data modeling in general and Big Data in particular is predictive modeling (regression, classification). Since the course Statistical Learning deals mainly with exposition and statistical analysis of algorithms in this area, it will not be a focus of this course. However, some aspects of this area that are not covered in that course, in particular the p >> n case, efficient computation, and deep learning, will be discussed in some detail.
Tentative list of topics to be covered during the semester:
We will have 3-5 guest lectures during the semester, but they will be treated as regular classes rather than enrichment classes (specifically, their material will be included in the homework and the final).

Expected background

  1. Basic knowledge of mathematical foundations:
  2. Solid fundamentals in Probability: Discrete/continuous probability definitions; Important distributions: Bernoulli/Binomial, Poisson, Geometric, Hypergeometric, Negative Binomial, Normal, Exponential/Double Exponential (Laplace), Uniform, Beta, Gamma, etc.; Limit laws: law of large numbers and CLT; Inequalities: Markov, Chebyshev, Hoeffding (stated for reference after this list)
  3. Solid fundamentals in Statistics:
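For reference, standard statements of the three inequalities in item 2 (the classical forms; Hoeffding is stated for sums of independent bounded random variables):

\[ \textrm{Markov (for } X \ge 0,\ a > 0\textrm{):}\quad P(X \ge a) \le \frac{E[X]}{a} \]
\[ \textrm{Chebyshev:}\quad P\big(|X - E[X]| \ge a\big) \le \frac{\mathrm{Var}(X)}{a^2} \]
\[ \textrm{Hoeffding (} X_1,\dots,X_n \textrm{ independent, } X_i \in [a_i, b_i],\ S_n = \textstyle\sum_i X_i\textrm{):}\quad P\big(|S_n - E[S_n]| \ge t\big) \le 2\exp\!\left(-\frac{2t^2}{\sum_{i=1}^n (b_i - a_i)^2}\right) \]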

Books and resources

The course does not have a specific textbook, and most lectures will be on the board and not using slides. Some of the material will closely follow chapters from books or published papers, and when this is the case it will be announced. However, it is critical that all students have all the material presented in class. If you miss classes, make sure to get the material from someone!
Relevant books:
Elements of Statistical Learning by Hastie, Tibshirani & Friedman (including freely available PDF, data, and errata)
Modern Applied Statistics with Splus by Venables and Ripley
Frontiers in Massive Data Analysis report from the National Research Council
Computer Age Statistical Inference by Efron and Hastie

Grading

There will be four to five homework assignments, which together will count for about 30% of the final grade, and a final take-home project. Both the homework and the project will combine theoretical analysis with hands-on data analysis.

Computing

The course will require use of statistical modeling software. It is strongly recommended to use R (freely available for PC/Unix/Mac).
The R Project website also contains extensive documentation.
A basic "getting you started in R" tutorial. Uses the Boston Housing Data (thanks to Giles Hooker).
Modern Applied Statistics with Splus by Venables and Ripley is an excellent source for statistical computing help for R/Splus.
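If you want to verify that your R installation works before going through the tutorial, here is a tiny warm-up using the Boston Housing data that ships with the MASS package accompanying Venables and Ripley (this snippet is only an illustration, not the tutorial itself).

# Warm-up: load the Boston housing data and fit a small linear model
library(MASS)                          # provides the Boston data frame
dim(Boston)                            # 506 observations, 14 variables
summary(Boston$medv)                   # median home value (the usual response)

fit <- lm(medv ~ lstat + rm + crim, data = Boston)
summary(fit)                           # coefficients and fit summary
plot(fit)                              # standard regression diagnostic plots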


