## Data Mining

## Academic year 2023–2024

"The success of companies like Google, Facebook, Amazon, and Netflix, not to mention Wall Street firms and industries from manufacturing and retail to healthcare, is increasingly driven by better tools for extracting meaning from very large quantities of data. 'Data Scientist' is now the hottest job title in Silicon Valley." – Tim O'Reilly

- Data Scientist: The Sexiest Job of the 21st Century
- Find true love with data mining

The course will develop algorithms and statistical techniques for data analysis and mining, with emphasis on massive data sets such as large network data. It will cover the main theoretical and practical aspects behind data mining.

The goal of the course is twofold. First, it will present the main theory behind the analysis of data. Second, it is hands-on: by the end, students will be familiar with various state-of-the-art tools and techniques for analyzing data.

We will use Python to download data and to implement various algorithms, making use of its rich libraries and of frameworks such as Spark, Storm, Giraph, and TensorFlow for mining large-scale data.

### Prerequisites

Students who wish to take this course should be familiar with Python programming.

### Announcements

The fourth homework is out. It is due on January 9.

The deadline for the second homework has been extended to December 10.

The third homework is out. It is due on December 3.

The second homework is out. It is due on November 12.

The first homework is out. It is due on October 15.

There is no class on October 2 and October 4.

Make sure to register in the class mailing list. Send an email to Aris for details.

### Instructor

Aris Anagnostopoulos, Sapienza University of Rome.

### Teaching Assistant (TA)

Gianluca, Sapienza University of Rome.

### When and where

Monday 16.00–20.00, Room A5-A6

Wednesday 14.00–16.00, Room A6

### Office hours

You can use the office hours for any questions regarding the class material, past or current homeworks, general questions on data mining, the meaning of life, pretty much anything. Send an email to Gianluca or, if needed, to Aris to make arrangements.

### Textbook and references

The main textbook is "Mining of Massive Datasets," by J. Leskovec, A. Rajaraman, and J. D. Ullman. The book has been updated since the printed version, and you can download the latest edition (currently the 3rd) from the book's web site.

In addition, we will use some chapters from other textbooks, all available online:

- C. Aggarwal, "Data Mining: The Textbook," Springer (must be downloaded from Sapienza)
- M. J. Zaki and W. Meira, Jr., "Data Mining and Analysis: Fundamental Concepts and Algorithms," Cambridge University Press
- R. Zafarani, M. A. Abbasi, and H. Liu, "Social Media Mining: An Introduction," Cambridge University Press
- C. D. Manning, P. Raghavan, and H. Schütze, "Introduction to Information Retrieval," Cambridge University Press
- A. Blum, J. Hopcroft, and R. Kannan, "Foundations of Data Science," Cambridge University Press

The following book is not obligatory for the class, but it is a very useful reference for the topic of feature engineering:

- Pablo Duboue, "The Art of Feature Engineering," Cambridge University Press

For neural networks and GNNs, a nice book is

- Iddo Drori, "The Science of Deep Learning," Cambridge University Press

Finally, we will cover material from various sources, which we will post online as the course proceeds.

### Python resources

The main programming language that we will use in the course is Python 3.

To learn the language you can find a lot of material online. You can start from Python's documentation site: https://www.python.org/doc/.

We will use several libraries in the class. The Anaconda distribution packages all of them together, and you can download it for free.
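
To verify that your installation has the libraries you need, a short check along these lines can help. The package list below is only an illustration of a typical data-mining stack, not an official course requirement:

```python
import importlib

def check_packages(names):
    """Return {package: version or None} for each requested package."""
    found = {}
    for name in names:
        try:
            mod = importlib.import_module(name)
            # Most scientific packages expose __version__; fall back otherwise.
            found[name] = getattr(mod, "__version__", "installed")
        except ImportError:
            found[name] = None
    return found

if __name__ == "__main__":
    # Hypothetical package list; adjust it to what the homeworks require.
    for pkg, ver in check_packages(["numpy", "pandas", "sklearn"]).items():
        print(pkg, ver or "MISSING")
```

Running the script prints one line per package, with `MISSING` for anything not installed.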

If you have problems installing Python, you can obtain an Ubuntu virtual machine with Python preinstalled. Contact Gianluca for more information.

### Examination format

The evaluation will consist of two parts:

- Four homework sets
- A final project; details will be given during the course

**Late policy:** Every homework must be returned by the due date. Late homeworks lose 10% of the grade if they are up to 1 day (24 h) late, 20% if they are 2 days late, 30% if they are 3 days late, and they receive no credit if they are more than 3 days late. However, you have a bonus of 10 late days, which you can distribute as you wish among the homeworks. The homeworks will be discussed and graded at the end, during the final exam.
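
To make the arithmetic of the policy concrete, here is a small sketch. The function name and the assumption that bonus days are subtracted from the raw lateness before the penalty is computed are mine; confirm edge cases with the instructors:

```python
def late_penalty(days_late: int, bonus_days_used: int = 0) -> float:
    """Return the fraction of the grade kept under the stated late policy.

    Assumes (hypothetically) that any bonus late days used are subtracted
    from the raw lateness before the 10%-per-day penalty applies.
    """
    effective = max(0, days_late - bonus_days_used)
    if effective == 0:
        return 1.0          # on time (or fully covered by bonus days)
    if effective > 3:
        return 0.0          # more than 3 days late: no credit
    return 1.0 - 0.1 * effective  # 10% lost per day, up to 3 days
```

For example, a homework 2 days late with no bonus days keeps 80% of its grade, while the same homework covered by 2 bonus days keeps 100%.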

In addition, we will take class participation into account.

### Syllabus

Chapters for which no book is mentioned refer to "Mining of Massive Datasets" (see above). For the other textbooks, we refer to them by the author initials: A, ZM, ZAL, MRS, BHK.

| Date | Topic | Reading |
| --- | --- | --- |
| September 25 | Introduction to data mining and data types, a crash course in probability | LRU Chapters 1.1, 1.3; Introduction to data mining; A crash course on discrete probability; check the background probability chapters below |
| September 27 | A crash course in probability (cont.) | |
| October 9 | Discussion on HW1, crash course in probability (cont.), similarity and distance measures | Book chapter on quicksort from the book of Mitzenmacher and Upfal on Probability and Computing; Chapter 3.5 |
| October 11 | Discussion about HW1, similarity and distance measures | Chapter 3.5; A Chapter 3.4 |
| October 16 | Similarity and distance measures (cont.), preprocessing for text mining, the vector-space model, TF-IDF scoring | MRS Chapters 2.0–2.2, 6.2 |
| October 18 | Inverted indexes | MRS Chapters 1.0–1.4, 7.1.0 |
| October 23 | Shingles, minwise hashing | LRU Chapters 3.0–3.3 |
| October 25 | Brief recap of Hadoop, MapReduce, and Spark; construction of large indexes | LRU Chapters 3.4, 2.0–2.4; Quick introduction to MapReduce |
| October 30 | Lab on Apache Spark | |
| November 6 | Introduction to clustering, hierarchical clustering, k-means | Chapters 7.0–7.1.2, 7.2, 7.3.0–7.3.2; chapter on k-means from the book of Christopher M. Bishop |
| November 8 | k-means++, introduction to generative models | Paper by D. Arthur and S. Vassilvitskii; notes |
| November 13 | Soft clustering and expectation–maximization, projections in vector spaces | |
| November 15 | Introduction to principal component analysis | Notes sent by email |
| November 20 | Principal component analysis (cont.) | BHK Chapters 3.1–3.5, 3.8, 3.9.1, 3.9.2 (you can skip the proofs) |
| November 22 | Principal component analysis (cont.) | |
| November 27 | Recommender systems | LRU Chapter 9. Instead of the UV decomposition of Chapter 9.4, we looked at the use of SVD and of nonnegative matrix factorization (NMF). For NMF, see A Chapter 6.8, but note that, given the missing values, we find the best matrices by stochastic gradient descent, minimizing the RMSE. |
| November 29 | Discussion about HW3, centrality measures | Chapter 2.2 from these notes |
| December 4 | PageRank | LRU Chapter 5.1 |
| December 6 | Personalized PageRank, embeddings, review of neural networks, Word2Vec | LRU Chapter 5.3; original paper on Word2Vec |
| December 11 | Node embeddings, graph neural networks | Papers on DeepWalk and node2vec; material on GNNs sent by email |
| December 13 | Lab on graph neural networks | |
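
As a small taste of the vector-space model and TF-IDF scoring covered in October, here is a minimal pure-Python sketch. The toy documents and the exact weighting scheme (raw term frequency, unsmoothed idf) are my own illustration, not the variant used in class:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute simple TF-IDF vectors (raw tf, idf = log(N/df)) for tokenized docs."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # document frequency: count each term once per doc
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: c * idf[t] for t, c in Counter(doc).items()} for doc in docs]

def cosine(u, v):
    """Cosine similarity between two sparse vectors given as {term: weight} dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy corpus: documents sharing terms get a positive cosine similarity.
docs = [s.split() for s in
        ["data mining is fun", "mining massive data sets", "cats are fun"]]
vecs = tfidf_vectors(docs)
```

Here `cosine(vecs[0], vecs[1])` is positive (the documents share "data" and "mining"), while `cosine(vecs[1], vecs[2])` is zero (no shared terms).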

### Homeworks

Check the "Examination format" section above for the late policy, and the collaboration policy below for the rules about collaborating.

**Handing in:** You must hand in the homeworks by the due date and time by sending an email to the TA that contains as an attachment (not links!) a .zip or .tar.gz file with all your answers, with the subject

[Data Mining class] Homework #

where # is the homework number. After you submit, you will receive an acknowledgement email confirming that your homework has been received and at what date and time. If you have not received an acknowledgement email within 2 days after the deadline, contact Gianluca.

The solutions for the theoretical exercises must contain your answers either typed up or hand written clearly and scanned.

The solutions for the programming assignments must contain the source code, instructions to run it, and the output generated (to the screen or to files).

We will not post the solutions online, but we will present them in class.

- Homework 1 (due: 15/10/2023, 23.59)
- Homework 2 (due: 12/11/2023, 23.59)
- Homework 3 (due: ~~3/12/2023~~ 10/12/2023, 23.59)
- Homework 4 (due: 9/1/2024, 23.59)

### Notes, slides, and other material

**Book chapters and notes:**

Background reading on combinatorics, basic probability, random variables, and basic probability distributions.

**Collaboration policy (read carefully!):** You can discuss the projects with other students of the course. However, you must understand your solutions well, and the final writeup must be yours and written in isolation. In addition, even though you may discuss how you could implement an algorithm, what type of libraries to use, and so on, the final code must be yours. You may also consult the internet for information, as long as it does not reveal the solution. If a question asks you to design and implement an algorithm for a problem, it is fine if you find information about how to resolve, for example, a problem with character encoding, but it is not fine if you search for the code or the algorithm for the problem you are being asked to solve. For the projects, you can talk with other students of the course about questions on the programming language, libraries, some API issue, and so on, but both the solutions and the programming must be yours.
**If we find out that you have violated the policy and you have copied in any way, you will automatically fail.** If you have any doubts about whether something is allowed or not, ask the instructor.

The same applies to **generative AI tools, such as ChatGPT**. These can be useful tools in your work, and there are some homework questions in which we ask you explicitly to use them. **However, the use of such tools when it is not explicitly allowed will be treated as plagiarism and is strictly prohibited.**