Data Mining
Academic year 2017–2018
"The success of companies like Google, Facebook, Amazon, and Netflix, not to mention Wall Street firms and industries from manufacturing and retail to healthcare, is increasingly driven by better tools for extracting meaning from very large quantities of data. 'Data Scientist' is now the hottest job title in Silicon Valley." – Tim O'Reilly
- Data Scientist: The Sexiest Job of the 21st Century (pdf)
- Find true love with data mining
The course will develop algorithms and statistical techniques for data analysis and mining, with emphasis on massive data sets such as large network data. It will cover the main theoretical and practical aspects behind data mining.
The goal of the course is twofold. First, it will present the main theory behind the analysis of data. Second, it will be hands-on, and by the end students will be familiar with various state-of-the-art tools and techniques for analyzing data.
We will use Python for downloading data and implementing various algorithms, drawing on its rich libraries as well as frameworks such as Hadoop's MapReduce, Spark, Storm, and Giraph for mining large-scale data.
Prerequisites
Students who wish to take this course should be familiar with Python programming and with the MapReduce framework.
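If you want to refresh the MapReduce model before classes start, the sketch below simulates the three phases of a word-count job on a single machine in pure Python. The function names are illustrative only; in a real Hadoop or Spark job each phase is distributed across machines by the framework.

```python
# A minimal single-machine sketch of the MapReduce model (word count).
# Function names are illustrative; a real framework distributes each phase.
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle(pairs):
    """Shuffle: group all emitted values by key, as the framework does."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["the quick brown fox", "the lazy dog", "the fox"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["the"])  # 3
```

If you can express a computation as a map step followed by a reduce step over grouped keys, as above, porting it to Hadoop or Spark is mostly a matter of framework syntax.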
Announcements
The fourth homework is out. It is due December 22.
The third homework is out. It is due November 26.
The second homework is out. It is due November 12.
The first homework is out. It is due October 23.
We start classes on September 26.
Instructor
Aris Anagnostopoulos, Sapienza University of Rome.
Teaching Assistant (TA)
Mara Sorella, Sapienza University of Rome.
When and where
Tuesday 16.00–18.00, Room A7
Thursday 16.00–19.00, Room A7
Office hours
You can use the office hours for any question regarding the class material, past or current homeworks, general questions on data mining, the meaning of life, pretty much anything. Send an email to the instructor or the TA to arrange a meeting.
Syllabus
Chapters for which no book is mentioned refer to "Mining of Massive Datasets" (see below). We refer to the other textbooks by their authors' initials: A, ZM, ZAL, MRS.
Date | Topic | Reading |
September 26 | Introduction to data mining | Chapters 1.1, 1.3; Introduction to data mining |
September 28 | Guest lecture by Randy Goebel and David Israel AI, logic, visual explanations, and ethics. | |
October 3 | Introduction to probability | A crash course on discrete probability; check the background probability chapters below |
October 5 | Introduction to probability (cont.) | |
October 10 | Similarity and distance measures | Chapter 3.5, MRS Chapter 6.3 |
October 12 | Similarity and distance measures (cont.) | A Chapter 3.4 |
October 17 | Introduction to text mining: Preprocessing, scoring models, inverted indexes | MRS Chapters 1.0–1.4, 2.0–2.2, 6.2 |
October 19 | Scoring and term weighting, index construction | MRS Chapters 4.0–4.2, 7.1.0 |
October 24 | Brief recap of Hadoop, MapReduce, and Spark. Construction of large indexes. | |
October 26 | Near-duplicate detection, shingling, minwise hashing | Chapters 3.0–3.3 |
October 31 | LSH for similar-document detection | Chapter 3.4 |
November 2 | Solution of homework questions. | |
November 7 | Introduction to clustering, k-means clustering | Chapters 7.0–7.1.2, 7.3.0–7.3.2, Chapter on k-means of the book of Christopher M. Bishop |
November 9 | k-means++ | Paper by D. Arthur and S. Vassilvitskii |
November 14 | Modeling, maximum likelihood, soft clustering, and expectation–maximization | Notes |
November 16 | PCA and applications to k-means | |
November 21 | An introduction to social networks | Notes and slides |
November 23 | Centrality measures and PageRank | Notes above and Chapter 5.1 |
November 28 | Epidemics and influence maximization | Slides on epidemics and on influence maximization |
November 30 | Introduction to deep learning and CNNs | Slides on deep learning and on CNNs by Makis. |
December 7 | Introduction to TensorFlow | Slides on TensorFlow and GitHub repo by Mara. |
December 12 | Mining of frequent itemsets | Chapters 6.0–6.2, 6.4.0–6.4.2 |
December 14 | Introduction to streaming | Chapters 4.0–4.1, 4.3–4.5 |
December 15 | Project discussion | |
December 19 | Introduction to streaming (cont.) | |
December 21 | Project discussion |
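Many of the sessions above are best understood by experimenting with small examples. As one hedged illustration of the minwise-hashing material (October 26, Chapters 3.0–3.3), the sketch below estimates the Jaccard similarity of two small word sets from k random linear hash functions; the hash family (a·x + b mod p) is a standard textbook choice, and the parameters are illustrative rather than taken from the course material.

```python
# A sketch of minwise hashing: estimate Jaccard similarity from k random
# linear hash functions. Parameters (k, p) are illustrative choices.
import random

def minhash_signature(s, hash_funcs):
    """For each hash function, keep the minimum value over the set."""
    return [min(h(x) for x in s) for h in hash_funcs]

def estimate_jaccard(sig_a, sig_b):
    """The fraction of agreeing signature positions estimates Jaccard similarity."""
    matches = sum(1 for a, b in zip(sig_a, sig_b) if a == b)
    return matches / float(len(sig_a))

random.seed(0)
k, p = 200, 2**31 - 1  # number of hash functions and a large prime
hash_funcs = []
for _ in range(k):
    a, b = random.randrange(1, p), random.randrange(p)
    hash_funcs.append(lambda x, a=a, b=b: (a * hash(x) + b) % p)

A = set("the quick brown fox jumps over".split())
B = set("the quick brown dog sleeps over".split())
true_j = len(A & B) / float(len(A | B))  # 4 shared words out of 8 -> 0.5
est_j = estimate_jaccard(minhash_signature(A, hash_funcs),
                         minhash_signature(B, hash_funcs))
print(true_j)  # 0.5
```

With k = 200 hash functions the estimate typically lands within a few percentage points of the true similarity; Chapter 3.4 explains how LSH banding turns such signatures into a sublinear similarity search.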
Homeworks
Check the "Examination format" section below for information about collaborating, being late, and so on.
Handing in: You must hand in the homeworks by the due date and time, by an email to the course address (protected from spambots on this page; enable JavaScript to view it). The email must contain as an attachment (not links!) a .zip or .tar.gz file with all your answers, and have the subject
[Data Mining class] Homework #
where # is the homework number. After you submit, you will receive an acknowledgement email confirming that your homework has been received and the date and time of receipt. If you have not received an acknowledgement within one day of submitting, contact Mara.
The solutions for the theoretical exercises must contain your answers, either typed up or handwritten clearly and scanned.
The solutions for the programming assignments must contain the source code, instructions to run it, and the output generated (to the screen or to files).
We will not post the solutions online, but we will present them in class.
- Homework 1 (due: 23/10/2017, 23.59)
- Homework 2 (due: 12/11/2017, 23.59)
- Homework 3 (due: 26/11/2017, 23.59)
- Homework 4 (due: 22/12/2017, 23.59)
Project
You can find more information at the project web page.
Textbook and references
The main textbook is "Mining of Massive Datasets," by J. Leskovec, A. Rajaraman, and J. D. Ullman. The book is updated more frequently than the printed version; you can download the latest version (currently 2.1) from the book's web site.
In addition, we will also use some chapters from other textbooks, all available online:
- C. Aggarwal, "Data Mining: The Textbook," Springer (must be downloaded from Sapienza)
- M. J. Zaki and W. Meira, Jr., "Data Mining and Analysis: Fundamental Concepts and Algorithms," Cambridge University Press
- R. Zafarani, M. A. Abbasi, and H. Liu, "Social Media Mining: An Introduction," Cambridge University Press
- C. D. Manning, P. Raghavan and H. Schütze, "Introduction to Information Retrieval," Cambridge University Press
Finally, we will cover material from various sources, which we will post online as the course proceeds.
Python resources
The main programming language that we will use in the course is Python. There are currently two main versions of Python, version 2.7 and version 3.6, which are slightly incompatible, but it is easy to translate programs from one version to the other. Even though version 3.6 is newer and better designed, version 2.7 is still more widely used and most available libraries are written for it. Therefore, in the class we will use Python 2.7.
To learn the language you can find a lot of material online. You can start from Python's documentation site: http://docs.python.org/2.7.
If you would like to buy some books, you can consider:
- "Learning Python, 5th edition," by Mark Lutz. It is a bit verbose, but it presents the features of the language well.
- "Python Pocket Reference, 5th edition," by Mark Lutz. It is useful as a quick reference if you more or less know the language and are looking for specific information.
We will use several libraries in the class. The Anaconda distribution has packaged all of them together and you can download it for free.
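Even before installing those libraries, the standard library alone is enough to prototype many of the course's algorithms. As a hedged sketch (not the course's reference implementation), here is Lloyd's k-means, covered in the clustering sessions, in pure Python:

```python
# A pure-Python sketch of Lloyd's k-means; the data and initial centroids
# below are illustrative. Real assignments would typically use the
# numpy/scikit-learn stack bundled with Anaconda instead.
import math

def dist(p, q):
    """Euclidean distance between two points given as tuples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def kmeans(points, centroids, iterations=10):
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: dist(p, centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster
        # (empty clusters keep their old centroid).
        centroids = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids

points = [(1.0, 1.0), (1.5, 2.0), (8.0, 8.0), (9.0, 9.5)]
print(kmeans(points, centroids=[(0.0, 0.0), (10.0, 10.0)]))
# [(1.25, 1.5), (8.5, 8.75)]
```

The fixed initial centroids here sidestep the seeding problem; the k-means++ session (November 9) covers how to choose starting centroids well.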
If you have problems with the Python installation, you can obtain an Ubuntu virtual machine with Python preinstalled. Contact the instructor for more information.
Notes, slides, and other material
Book chapters and notes:
26/09/2017: Background reading on combinatorics, basic probability, random variables, and basic probability distributions.
Examination format
The evaluation will consist of two parts:
- 4 sets of homeworks
- A final project. Details will be given during the course
Late policy: Every homework must be returned by the due date. Late homeworks lose 10% of the grade if they are up to 1 day (24 h) late, 20% if up to 2 days late, 30% if up to 3 days late, and receive no credit if they are more than 3 days late. However, you have a bonus of 10 late days, which you can distribute as you wish among the homeworks. The homeworks will be discussed and graded at the end, during the final exam.
In addition, we will take into account participation during class.
Collaboration policy (read carefully!): You can discuss the homeworks with other students of the course. However, you must understand your solutions well, and the final writeup must be yours and written in isolation. In addition, even though you may discuss how you could implement an algorithm, what type of libraries to use, and so on, the final code must be yours. You may also consult the internet for information, as long as it does not reveal the solution. If a question asks you to design and implement an algorithm for a problem, it is fine to look up, say, how to resolve a character-encoding issue, but it is not fine to search for the code or the algorithm for the problem you are asked to solve. For the projects, you can talk with other students of the course about questions on the programming language, libraries, some API issue, and so on, but both the solutions and the programming must be yours. If we find out that you have violated the policy and have copied in any way, you will automatically fail. If you have any doubts about whether something is allowed, ask the instructor.