Academic year 2012–2013
"The success of companies like Google, Facebook, Amazon, and Netflix, not to mention Wall Street firms and industries from manufacturing and retail to healthcare, is increasingly driven by better tools for extracting meaning from very large quantities of data. 'Data Scientist' is now the hottest job title in Silicon Valley." – Tim O'Reilly
The course will develop algorithms and statistical techniques for data analysis and mining, with emphasis on massive data sets such as large network data. It will cover the main theoretical and practical aspects behind data mining.
The goal of the course is twofold. First, it will present the main theory behind the analysis of data. Second, it will be hands-on, and by the end students will be familiar with various state-of-the-art tools and techniques for analyzing data. We will use Python, with its rich libraries, for downloading data and implementing various algorithms, and Hadoop's MapReduce framework for mining large-scale data.
- The due date for the sixth homework has been updated; check the homework.
- The sixth homework is out; it is due on July 3.
- The fifth homework is out; it is due on June 4.
- You can find information about the project at the project web page.
- Homework 4 has been updated; you can find the updated version below.
- There will be no class on Wednesday, May 22.
- The class of Monday, May 13, will be replaced by a guest lecture on frequent itemsets and by a social-network experiment. Details on the experiment to follow.
- The fourth homework is out; it is due on May 14.
- We have posted notes for the class of April 10.
- There will be no class on April 24.
- The class of Wed. April 17 will be in Room A7.
- The third homework is out; it is due on April 23.
- There will be no class on April 15.
- The late bonus days are increased from 3 to 10.
- The second homework is out; it is due on April 7.
- We have added some more information on Hadoop and on the examples we did in class when presenting Hadoop.
- The class of Wed. April 10 will be in Room A6.
- The first homework is out; it is due on March 20.
- The rooms for the lectures have changed; check below.
- The first lecture will be on Monday, March 4.
Aris Anagnostopoulos, Sapienza University of Rome.
For questions regarding installing and running Python or Hadoop you can contact Ida Mele.
When and where:
Monday 10.10–13.30, Room A4
Wednesday 15.45–17.15, Room B2
You can use the office hours for any question regarding the class material, past or current homeworks, or general questions on data mining. To arrange a meeting, send an email to the instructor.
Chapters for which no book is mentioned refer to the "Mining of Massive Datasets" (see below).
| Date | Topic | Material |
| --- | --- | --- |
| March 4 | Introduction to data mining; introduction to probability | Chapters 1.1, 1.3; notes: Introduction to data mining; A crash course on discrete probability |
| March 6 | Introduction to Python | Check the "Python resources" below |
| March 11 | Overview of information retrieval, part 1 | IIR book, chapters 1.0–1.1, 2.0–2.2, 6.2–6.3.2 |
| March 13 | Overview of information retrieval, part 2 | IIR book, chapters 1.2, 6.3.3, 7.1.0, 4.2 |
| March 18 | Introduction to MapReduce and Hadoop | Chapters 2.0–2.3.2; Jeff Dean's talk on Google's distributed systems; check the "Hadoop resources" and "Examples done in class" sections below |
| March 20 | Mining frequent itemsets, part 1 | Chapters 6.0–6.1.2, 6.2 |
| March 25 | Mining frequent itemsets, part 2 | Chapters 6.3, 6.4.0–6.4.4 |
| March 27 | Mining frequent itemsets, part 3 | Chapters 6.1.3, 6.1.4, 6.4.5, 6.4.6 |
| April 3 | Similarity and distance measures | Chapter 3.5; Wikipedia on LCS (the algorithm there considers the last character of each string instead of the first, as we did in class, but otherwise it is the same) |
| April 8 | Clustering | Chapters 7.0–7.1.2, 7.2–7.2.3, 7.3–7.3.3; the chapter on k-means from the book of Christopher M. Bishop |
| April 10 | Modeling, maximum likelihood, soft clustering, and expectation–maximization | See the notes below |
| April 17 | Clustering in non-Euclidean spaces; finding similar items, shingling | Chapters 7.2.4, 3.0–3.2 |
| April 22 | Minhashing, LSH for documents | Chapters 3.3–3.4 |
| April 29 | Link analysis and PageRank | Chapters 5.0–5.1 |
| May 6 | Personalized PageRank, link spam; social networks and structural properties | Chapters 5.3, 5.4; slides on social networks |
| May 8 | Introduction to recommendation systems | Chapters 9.0–9.3 |
| May 13 | Guest lecture by Prof. Eli Upfal: "Randomized algorithms for frequent itemsets" | |
| May 15 | SVD for dimensionality reduction | Chapters 7.1.3, 11.3.0–11.3.3, 11.3.5 |
| May 20 | Streaming algorithms, part 1 | Chapters 4.0–4.2, 4.5.5 |
| May 27 | Streaming algorithms, part 2 | Chapters 4.3–4.5 |
| May 29 | Summary of the course and what's more | |
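To make the LCS remark from the April 3 lecture concrete, here is a small dynamic-programming sketch (the function name and example strings are ours, not course code). It recurses on the first character of each string, as we did in class; the Wikipedia formulation peels off the last character instead, but both fill the same table and return the same length.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of strings a and b."""
    m, n = len(a), len(b)
    # L[i][j] = length of the LCS of the suffixes a[i:] and b[j:]
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m - 1, -1, -1):
        for j in range(n - 1, -1, -1):
            if a[i] == b[j]:
                # First characters match: use them and recurse on the rest.
                L[i][j] = 1 + L[i + 1][j + 1]
            else:
                # Skip the first character of one string or the other.
                L[i][j] = max(L[i + 1][j], L[i][j + 1])
    return L[0][0]

print(lcs_length("nematode", "empathy"))  # the LCS is "emat", of length 4
```

The table has (m+1)(n+1) entries and each takes constant time to fill, so the running time is O(mn), the same as the last-character version.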
Check the "Examination format" section below for information about collaborating, being late, and so on.
Handing in: You must hand in the homeworks by the due date and time by email to the instructor. The email must contain as an attachment (not links!) a .zip or .tar.gz file with all your answers, and its subject must be
[Data Mining class] Homework #
where # is the homework number. After you submit, you will receive an acknowledgement email confirming that your homework has been received and at what date and time. If you have not received an acknowledgement within one day of submitting, contact the instructor.
The solutions for the theoretical exercises must contain your answers, either typed up or handwritten clearly and scanned.
The solutions for the programming assignments must contain the source code, instructions to run it, and the output generated (to the screen or to files).
We will not post the solutions online, but we will present them in class.
- Homework 1 (due: 20/3/2013, 15:30)
- Homework 2 (due: 7/4/2013, 23:59)
- Homework 3 (due: 23/4/2013, 23:59)
- Homework 4 (updated 7/5/2013, due: 14/5/2013, 23:59)
- Homework 5 (due: 4/6/2013, 23:59)
- Homework 6 (due: check the homework)
Before the exam you have to implement a project, on which you will also be examined. You can find more information at the project web page.
Textbook and references
The main textbook is "Mining of Massive Datasets," by A. Rajaraman, J. Leskovec, and J. D. Ullman. The book has been updated since the printed version, and you can download the latest version (currently 1.3) from the book's web site.
In addition, we will cover material from various sources, which we will post online as the course proceeds.
The main programming language that we will use in the course is Python. There are currently two main versions of Python, version 2.7 and version 3.3, which are slightly incompatible, but it is easy to translate programs from one version to the other. Even though version 3.3 is newer and better designed, version 2.7 is still more widely used and most available libraries are written for it. Therefore, in the class we will use Python 2.7.
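As a tiny illustration of the incompatibility (our own example, not course code): the two versions differ, among other things, in the `print` statement and in the meaning of `/` on integers. Code written as below runs unchanged under both 2.7 and 3.x.

```python
# Write print() as a function and import the 3.x behavior explicitly;
# these __future__ imports are no-ops under Python 3.
from __future__ import division, print_function

ratings = [4, 5, 3, 5]
print(sum(ratings) / len(ratings))   # true division in both versions: 4.25
print(sum(ratings) // len(ratings))  # floor division in both versions: 4
```

Without the `division` import, Python 2.7 would print `4` for the first expression (integer division), which is a classic source of subtle bugs when porting.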
To learn the language you can find a lot of material online. You can start from Python's documentation site: http://docs.python.org/2.7.
If you would like to buy some books, you can check the following:
- "Learning Python, 4th edition," by Mark Lutz (a 5th edition is expected in June). It is a bit verbose, but it presents the features of the language well.
- "Python Pocket Reference, 4th edition," by Mark Lutz. It is useful as a quick reference if you more or less know the language and are searching for some specific information.
Hadoop and MapReduce resources
For MapReduce and Hadoop you can find a lot of material online; if you prefer a book, we recommend:
- "Hadoop: The Definitive Guide, 3rd edition," by Tom White. It covers various aspects of Hadoop, including a lot of material that we will not need for the course. A good resource for learning the more advanced material.
- "Hadoop in Action," by Chuck Lam. Better organized and with more introductory material, though it covers fewer aspects of the more advanced material.
- There is a variety of Hadoop releases; we recommend installing the latest 1.1.x release.
- Here is a tutorial for setting up a pseudo-distributed, single-node Hadoop cluster. You may still need to search online to fix problems that pop up, for example by creating some directories in HDFS or adding some options to the configuration files.
- For Python there is a nice package, dumbo. Start by reading the short tutorial by its creator. If you have problems after installing it, check whether the package typedbytes is also installed as a Python egg file.
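To show the programming model these tools wrap, here is a word-count sketch in the mapper/reducer style (our own illustration, with our own names; it is not course code and does not use the dumbo API). The `run_local` helper simulates Hadoop's shuffle phase in memory so the logic can be tested without a cluster.

```python
from itertools import groupby
from operator import itemgetter

def mapper(line):
    # Emit a (word, 1) pair for every word on the input line.
    for word in line.strip().split():
        yield word.lower(), 1

def reducer(word, counts):
    # Sum all the partial counts that arrived for this word.
    yield word, sum(counts)

def run_local(lines):
    # Simulate the shuffle: sort the mapper output by key, then feed
    # each group of values to the reducer, exactly as Hadoop would.
    pairs = sorted(kv for line in lines for kv in mapper(line))
    return dict(out
                for key, group in groupby(pairs, key=itemgetter(0))
                for out in reducer(key, (v for _, v in group)))

print(run_local(["to be or not", "to be"]))
```

On a real cluster the mapper and reducer would run on different machines over HDFS data; only the `run_local` glue is replaced, which is the appeal of the model.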
Notes, slides and other material
10/4/2013: Notes on modeling, maximum likelihood, soft clustering, and expectation maximization
4/3/2013: Introduction to data mining
4/3/2013: A crash course on discrete probability
6/5/2013: Introduction to social networks
13/5/2013: Eli Upfal's slides on "Randomized algorithms for frequent itemsets"
11/3/2013: A good introductory source for a lot of the information-retrieval issues that we discussed is the book "Introduction to Information Retrieval," by Manning, Raghavan, and Schütze.
Examples done in class:
- Count the number of IP accesses in an Apache log: countIPs.py. You can find an example input file here.
- Matrix-vector multiplication (y=Ax): matrixVectorMultiply.py (MapReduce program to multiply), matrix.txt (contains the matrix, created with randomMatrix.py), x.txt (contains the vector, created with randomVector.py)
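In the same spirit, the y = Ax computation can be sketched locally (this is our own illustration of the map and reduce steps, not the actual matrixVectorMultiply.py): the mapper emits one partial product (i, A[i][j] * x[j]) per matrix entry, and the reducer sums the partial products of each row i.

```python
from collections import defaultdict

def multiply(entries, x):
    """entries: iterable of (i, j, value) triples of A; x: the vector (a list)."""
    # Map phase: one (row, partial product) pair per matrix entry.
    partials = ((i, value * x[j]) for i, j, value in entries)
    # Shuffle + reduce phase: group the pairs by row index and sum.
    y = defaultdict(float)
    for i, p in partials:
        y[i] += p
    return dict(y)

A = [(0, 0, 1.0), (0, 1, 2.0), (1, 0, 3.0), (1, 1, 4.0)]  # [[1, 2], [3, 4]]
x = [1.0, 1.0]
print(multiply(A, x))  # row 0: 1 + 2 = 3.0, row 1: 3 + 4 = 7.0
```

This assumes the vector x fits in memory on every mapper, the same assumption made in Chapter 2 of the textbook; sparse entries of A simply never appear in `entries`.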
The evaluation will consist of a series of homeworks, a final project, and a final exam. In addition, we will take into account participation during class.
- Every two weeks you will have to work on a set of homeworks, which will include both theoretical and programming exercises. The theoretical exercises will ask questions that cover or extend the material presented in class. The programming exercises will ask you to implement some of the algorithms covered in class, or to extend the ideas, using either Python or Hadoop. For instance, we may ask you to download and cluster some Twitter posts. To program the solutions you will need to study, on your own, programming techniques and libraries beyond what we will have covered in class.
Late policy: Every homework must be returned by the due date. Late homeworks lose 10% of the grade if they are up to 1 day (24h) late, 20% if they are 2 days late, and 30% if they are 3 days late; they receive no credit if they are more than 3 days late. However, you have a bonus of 10 late days (increased from the original 3), which you can distribute as you wish among all the homeworks. The homeworks will be discussed and graded at the end, during the final exam.
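The arithmetic works out as in this small calculator (a sketch of our reading of the policy, not an official tool): bonus late days you choose to spend on a homework offset the lateness before the 10%-per-day penalty is applied.

```python
def late_grade(grade, days_late, bonus_days_used):
    """Grade after the late penalty, with bonus days spent on this homework."""
    effective = max(0, days_late - bonus_days_used)  # days actually penalized
    if effective > 3:
        return 0.0                  # more than 3 penalized days: no credit
    return grade * (1 - 0.10 * effective)

print(late_grade(100, 2, 0))  # 2 days late, no bonus spent: 20% off
print(late_grade(100, 2, 2))  # 2 days late, fully covered by bonus days
```

Note that lateness is counted in whole 24-hour days, so a homework handed in 25 hours late counts as 2 days late.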
- Towards the end of the course you will have to select a project and program a solution. You may propose a topic that you would like to work on. More details will follow.
- The final exam will include questions about the class material, similar to the theoretical homework questions. It will also include an examination of your homework solutions and of the final project, for which you should be able to provide details and demonstrate that you understand and remember your solutions and your programs.
Collaboration policy: You can discuss the homeworks with other students of the course. However, you must understand your solutions well, and the final writeup must be yours and written in isolation. In addition, although you may discuss how you could implement an algorithm, what type of libraries to use, and so on, the final code must be yours. For the final project, you can talk with other students of the course about questions on the programming language, libraries, some API issue, and so on, but both the solutions and the programming must be yours. If we find that you have violated the policy and copied in any way, you will automatically fail. If you have any doubts about whether something is allowed, ask the instructor.
We expect students to do all the homeworks and take the regular exam (see above). Students who for whatever reason (work, medical reasons, a grandmother who died, laziness, etc.) do not do all the homeworks will have a much longer final exam, in which they must demonstrate knowledge of all the class material and the ability to implement correctly algorithms such as those asked for in the homeworks. In other words, during the exam we will ask them to come up with and implement algorithms similar to those in the homeworks.