Social Networks and Online Markets, 2023
Social Networks and Online Markets
Academic year 2022–2023
We are surrounded by networks. The Internet, one of the most advanced artifacts ever created by humankind, is the paradigmatic example of a "network of networks," with unprecedented technological, economic, and social ramifications. Online social networks have become a major driving phenomenon on the web ever since the Internet expanded to include users and their social systems in its description and operation. Technological networks such as the cellular phone network and the energy grid support many aspects of our daily life. Moreover, a growing number of highly popular user-centric applications on the Internet rely on social networks for mining and filtering information, for providing recommendations, and for ranking documents and services. In this course we will present the design principles, main structural properties, and theoretical models of online social networks and technological networks; algorithms for data mining in social networks; and basic network-economic issues, with an eye toward current research in the area.
Announcements
The next exam dates are June 20 and July 6.
There have been some changes in the exam format; see the updated evaluation format below. You should have received an email with the details.
For the part on online markets, you need to register on Google Classroom.
Remember to register your email; for details, send an email to Aris.
Classes start on Monday, February 20.
Topics that we will cover
- Properties of social networks
- Models for social networks
- Community detection
- Spectral techniques for community detection
- Cascading behavior in social networks and epidemics
- Influence maximization and viral marketing
- Influence and homophily
- Machine learning on graphs
- Introduction to Game Theory and Computational issues
- Price of Anarchy and Selfish Routing
- Stable matching, Markets, Competitive equilibria
- Sponsored Search Auctions, VCG, Revenue Maximization
- Voting and Fair division
- Equilibria and Incentives in blockchains and cryptocurrencies
Instructors
Aris Anagnostopoulos, Sapienza University of Rome
Stefano Leonardi, Sapienza University of Rome
Georgios Birmpas, Sapienza University of Rome
When and where:
Monday 15.00–17.00, Via Ariosto 25, Room A5
Wednesday 13.00–17.00, Via Ariosto 25, Room A3
Office hours
You can use the office hours for any question regarding the class material, general questions on networks, the meaning of life, pretty much anything. Send an email to the instructors for arrangement.
Textbook and references
The main textbook for the first part is the book Networks, Crowds, and Markets: Reasoning About a Highly Connected World, by David Easley and Jon Kleinberg.
The main textbook for the second part is the book Twenty Lectures on Algorithmic Game Theory, by Tim Roughgarden.
In addition, we will cover material from various other sources, which we will post online as the course proceeds.
Evaluation format (updated)
You will be evaluated on the two parts of the course (social networks, online markets) independently of each other, and your grade will be the average of the two parts. For each part, you have the option of a project or a homework. You should have received an email with the details; if not, email the instructors. There is also the option of a written exam.
Syllabus
Date | Topic | Reading |
February 20 | Introduction to social networks and online markets, Properties of complex networks. | Chapters 1, 2, slides |
February 22 | Properties of complex networks (cont.), homophily, the Erdős–Rényi random-graph model | Chapter 4.0–4.1, notes |
February 27 | The small-world model, the preferential-attachment model | Notes |
March 1 | The preferential-attachment model (cont.), epidemics, influence maximization | Slides on epidemics and on influence maximization, paper by Kempe et al. |
March 6 | Influence maximization (cont.), influence vs. correlation | Slides on influence vs. correlation, paper by Anagnostopoulos et al. |
March 8 | Influence vs. correlation (cont.), densest subgraph | Notes on Charikar's greedy algorithm for finding the densest subgraph |
March 13 | Community detection | Notes on community detection |
March 15 | Community detection (cont.) | |
March 20 | Community detection (cont.), node embeddings | Chapters 3.0–3.1 of Graph Representation Learning by Hamilton |
March 22 | Classification, node embeddings based on random walks, introduction to neural networks | Chapter 3.3.0 of Graph Representation Learning by Hamilton, papers on DeepWalk and node2vec; for NNs there are various sources online, for example, Chapter 6 of Deep Learning by Goodfellow, et al. |
March 27 | Graph neural networks | There are a lot of resources on GNNs. I like the chapter of The Science of Deep Learning by Drori; however, it is not available online. A starting point could be this video, although it does not discuss the concept of a computation graph. |
March 29 | Lab on graph neural networks | |
April 1–end | Online markets | Google classroom |
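As a concrete companion to the random-graph lectures (February 22–27), here is a minimal sketch of sampling an Erdős–Rényi G(n, p) graph; the function name and the adjacency-list representation are illustrative choices, not taken from the course notes.

```python
import random

def gnp(n, p, seed=None):
    """Sample an Erdos-Renyi G(n, p) graph as an adjacency list."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    # include each of the n*(n-1)/2 possible edges independently with probability p
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

# the expected average degree is (n - 1) * p
g = gnp(1000, 0.01, seed=42)
avg_deg = sum(len(neigh) for neigh in g.values()) / len(g)
```

A property discussed in class: for p above the connectivity threshold ln(n)/n, the sampled graph is connected with high probability.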
Homeworks
- Homework 1 (due: 16/6/2023, 23.59)
Collaboration policy (read carefully!): You can discuss the homeworks and the projects with other students of the course. However, you must understand your solutions well, and the final writeup must be yours and written in isolation. In addition, even though you may discuss how you could implement an algorithm, what type of libraries to use, and so on, the final code must be yours. You may also consult the internet for information, as long as it does not reveal the solution. If a question asks you to design and implement an algorithm for a problem, it is fine to look up, for example, how to resolve a character-encoding issue, but it is not fine to search for the code or the algorithm for the problem you are being asked to solve. For the homeworks and projects, you can talk with other students of the course about questions on the programming language, libraries, some API issue, and so on, but both the solutions and the programming must be yours. If we find out that you have violated the policy and have copied in any way, you will automatically fail. If you have any doubts about whether something is allowed, ask the instructor.
Algorithmic Methods of Data Mining (Sc.M. in Data Science), 2023
Algorithmic Methods of Data Mining (Sc.M. in Data Science)
Academic year 2023–2024
"The success of companies like Google, Facebook, Amazon, and Netflix, not to mention Wall Street firms and industries from manufacturing and retail to healthcare, is increasingly driven by better tools for extracting meaning from very large quantities of data. 'Data Scientist' is now the hottest job title in Silicon Valley." – Tim O'Reilly
- Data Scientist: The Sexiest Job of the 21st Century
- Find true love with data mining
The course will develop the basic algorithmic techniques for data analysis and mining, with emphasis on massive data sets such as large network data. It will cover the main theoretical and practical aspects behind data mining.
The goal of the course is twofold. First, it will present the main theory behind the analysis of data. Second, it will be hands-on and at the end students will become familiar with various state-of-the-art tools and techniques for analyzing data.
We will cover some very basic topics necessary for handling large data, such as hashing, sorting, graphs, data structures, and databases. We will then move to more advanced data mining topics: text mining, clustering, classification, mining of frequent itemsets, graph mining, visualization.
The theoretical part will be complemented by a laboratory where students will learn how to use tools for analyzing and mining large data, based on Amazon's AWS. After finishing the course, students will have much of the knowledge required to pursue (independently) an AWS certification.
Announcements
The fifth homework is out. It is due on January 7.
The deadline for the third homework has been extended to December 17.
The fourth homework is out. It is due on December 10.
The third homework is out. It is due on November 26.
The deadline for the second homework has been extended to November 5.
The second homework is out. It is due on October 29.
The first homework is out. It is due on October 15.
You need to register to:
- the class mailing list, to be able to do homeworks, be part of groups, receive announcements, etc.: send an email to Aris.
- Amazon Web Services (AWS): Ioannis will open an account for you and provide you with details.
- Slack: Send email to Daniel.
Instructors
Aris Anagnostopoulos, Sapienza University of Rome.
Ioannis Chatzigiannakis, Sapienza University of Rome.
Teaching Assistants (TA)
The best way to ask questions is through Slack. If you are registered in the course mailing list and have not received an invitation from Daniel, email Daniel.
Daniel Jiménez, Ph.D. candidate in Data Science, Sapienza University of Rome. (LinkedIn)
Leonardo Di Nino, Data Science student, Sapienza University of Rome. (LinkedIn)
Mehrdad Hassanzadeh, Data Science student, Sapienza University of Rome. (LinkedIn)
Sara Pepe, Data Science student, Sapienza University of Rome. (LinkedIn)
Eric Rubia Aguilera, Data Science student, Sapienza University of Rome. (LinkedIn)
When and where:
Monday 14.00–16.00, Via di Castro Laurenziano 7a (Building RM018), Room 3
Thursday 14.00–16.00, Via di Castro Laurenziano 7a (Building RM018), Room 6.
Lab: Tuesday 15.00–19.00, Via Tiburtina 205, Room 17.
Online lectures
There will not be lectures online, only in class.
Office hours
You can use the office hours for any question regarding the class material, past or current homeworks, general questions on data mining, the meaning of life, pretty much anything. Send an email to the TAs and, if needed, to the instructors for arrangement.
Textbook and references
We will use a variety of textbooks. Whenever we can, we will try to find books that are available online. As the course progresses, we will indicate what you should read. The main books that we will use are:
- (A) C. Aggarwal, "Data Mining: The Textbook," Springer (must be downloaded from Sapienza)
- (ZAL) R. Zafarani, M. A. Abbasi, and H. Liu, "Social Media Mining: An Introduction," Cambridge University Press
- (LRU) J. Leskovec, A. Rajaraman, and J. Ullman, "Mining of Massive Datasets," Cambridge University Press
- (MRS) C. D. Manning, P. Raghavan and H. Schütze, "Introduction to Information Retrieval," Cambridge University Press
- (J) J. Janssens, "Data Science at the Command Line", O'Reilly
In addition, we will cover material from various other sources, which we will post online as the course proceeds.
If you are interested in the topic of algorithms, we recommend the following books:
- T. Cormen, C. Leiserson, R. Rivest, and C. Stein, "Introduction to Algorithms" (4th ed.): A classic book, very detailed, sometimes too verbose
- S. Dasgupta, C. Papadimitriou, and U. Vazirani, "Algorithms": Very succinct but well written; probably the first book to check out. If you cannot follow it, try one of the other books first
- T. Roughgarden, "Algorithms Illuminated": Another, more recent introductory text. Tim writes very well.
- J. Kleinberg and E. Tardos, "Algorithm Design": A more advanced book, probably not recommended for your first contact with algorithms, but it will increase your knowledge a lot if you already know the basic concepts.
Python resources
The main programming language that we will use in the course is Python 3.
To learn the language you can find a lot of material online. You can start from Python's documentation site: https://www.python.org/doc/.
If you would like to buy some books, you can check the
- "Learning Python, 5th edition," by Mark Lutz. It is a bit verbose, but it presents well the features of the language.
- "Python Pocket Reference, 5th edition," by Mark Lutz. It is useful as a quick reference if you more or less know the language and are searching for some specific information.
We will use several libraries in the class. For Windows users, the Anaconda distribution packages all of them together, and you can download it for free. For Mac/Linux users, all packages can be installed using the pip3 tool.
We will also use Python (Jupyter) notebooks. You can find instructions for the installation at the Jupyter web site.
If you have problems with the Python installation, you can obtain an Ubuntu virtual machine with Python preinstalled. Contact the instructor for more information.
Syllabus
Chapters for which no book is mentioned refer to "Mining of Massive Datasets" (see above). For the other textbooks, we refer to them by the authors' initials: A, ZAL, MRS.
Date | Topic | Reading |
September 25 | Introduction to data science, Introduction to algorithmic data mining | Introduction to Sapienza's Data Science, Introduction to data mining |
September 28 | Basic data types, introduction to the analysis of algorithms | Notes, Section 5–5.3 |
October 9 | Introduction to the analysis of algorithms (Big O notation cont.) | Notes, Section 5.2–5.4 |
October 10 | Introduction to Data Pre-processing & Data Visualization | Laboratory 1: Data Pre-processing & Data Visualization The Pandas DataFrame: Make Working With Data Delightful |
October 12 | Introduction to the analysis of algorithms (recursion) | Notes, Section 5.5 |
October 16 | Complexity classes and NP-completeness | Notes, Section 6–6.1 |
October 17 | Introduction to cloud computing, Introduction to AWS, AWS Academy, S3 | Laboratory 2: Introductory course to AWS Cloud and S3 |
October 19 | NP-completeness (cont.), Introduction to distance measures | Notes, Section 6.2, (LRU) Chapter 3.5 |
October 23 | The vector-space model for document similarity | (MRS) Section 6.3 |
October 24 | Introduction to AWS EC2, Unix Shell Programming | Laboratory 3a: Introduction to AWS EC2 |
October 30 | Distance measures (cont.) | |
October 31 | Introduction to HTML, Introduction to Web Scraping | Laboratory 4: Web Scraping, Introduction to HTML5, Structuring the web with HTML, Introduction to BeautifulSoup, Selenium with Python |
November 2 | Dynamic programming, basic data representation | Notes, Sections 1, 4, 8.1 |
November 6 | Basic data representation (cont.), preprocessing for text mining | (MRS) Chapters 2.0–2.2 |
November 7 | tf-idf, inverted indexes | (MRS) Chapters 6.2, 6.3.1–6.3.2, 1.0–1.4, 6.3.3 |
November 9 | Sorting | Book chapters on mergesort and quicksort from the book "Introduction to Algorithms" by Cormen, Leiserson, Rivest, and Stein. |
November 13 | Data structures, heaps, heapsort | Notes, Section 7; Wikipedia page on data structures; book chapter on heaps and heapsort from the book "Introduction to Algorithms" by Cormen, Leiserson, Rivest, and Stein. |
November 14 | Introduction to NLP with Python | Natural Language Toolkit Natural Language Processing with spaCy Tutorial: How to Use the Apply Method in Pandas Python's collections: specialized data types Python's reduce: how to use folding Working With Text Data using scikit-learn |
November 16 | Data structures (cont.), hashing | Notes, Section 7, Wikipedia page on linked lists, book chapter on hashing from the book "Algorithms" by Dasgupta, Papadimitriou, and Vazirani |
November 20 | MapReduce | Most of the things we talked about can be found on this quick introduction |
November 21 | Introduction to Elastic Map Reduce | Laboratory 6: Elastic Map Reduce Big Data Analytics Options on AWS Bigtable: A Distributed Storage System for Structured Data Getting Started: Analyzing Big Data with Amazon EMR Using EMR Notebooks PySpark: the Python API for Spark TF-IDF using Map Reduce |
November 23 | Hierarchical clustering, the k-means problem | Chapters 7.0–7.2, Slides |
November 27 | The k-means algorithm, discussion about initialization | Chapters 7.3.0–7.3.2, Slides |
November 28 | Introduction to Clustering in Python | Clustering in Python K-Means in Python Elbow Method using SciKit |
November 30 | k-means++, choosing k | Chapter 7.3.3, Slides |
December 4 | Mathematical background for PCA | Notes sent by email |
December 5 | Singular Value Decomposition and Principal Component Analysis | Chapters 3.1–3.5, 3.8, 3.9.1, 3.9.2 (you can skip the proofs) from the book Foundations of Data Science by Blum, Hopcroft, and Kannan |
December 7 | Queues, stacks, introduction to graphs | Notes, Section 7; Notes on graphs; book chapter on graph representation from the book "Introduction to Algorithms" by Cormen, Leiserson, Rivest, and Stein |
December 11 | Shortest paths, breadth-first search, depth-first search, Dijkstra's algorithm | Book chapter on BFS and DFS from the book "Algorithms Illuminated" by Roughgarden; book chapter on Dijkstra's algorithm from the book "Algorithms" by Dasgupta, Papadimitriou, and Vazirani; book chapter on Dijkstra's algorithm from the book "Introduction to Algorithms" by Cormen, Leiserson, Rivest, and Stein |
December 12 | Minimum spanning trees, randomized minimum cut, centrality measures | Book chapter on MST from the book "Algorithms Illuminated" by Roughgarden; book chapter on MST from the book "Introduction to Algorithms" by Cormen, Leiserson, Rivest, and Stein; book chapter on the randomized algorithm for min-cut from the book "Probability and Computing" by Mitzenmacher and Upfal; Notes on graphs, Section 3 |
December 13 | PageRank | Chapter 5.1.0–5.1.2, 5.1.4, 5.1.5 |
December 18 | PageRank (cont.) | |
December 19 | Introduction to Graphs in Python, Twitter API | Laboratory 8: Graphs, NetworkX: Network Analysis in Python, Tweepy: An easy-to-use Python library for accessing the Twitter API, Twitter API Documentation |
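To make the MapReduce abstraction from the November 20 lecture concrete, the following is a minimal single-machine sketch of the word-count pattern; the three phases mirror what a real framework distributes across machines (the function names are illustrative, not from the course material).

```python
from collections import defaultdict
from itertools import chain

def map_phase(docs):
    # map: emit a (word, 1) pair for every word occurrence
    return chain.from_iterable(((w, 1) for w in doc.split()) for doc in docs)

def shuffle(pairs):
    # shuffle: group all emitted values by key
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # reduce: sum the counts for each word
    return {key: sum(values) for key, values in groups.items()}

counts = reduce_phase(shuffle(map_phase(["a b a", "b c"])))
# counts == {"a": 2, "b": 2, "c": 1}
```

In a real deployment (e.g., on EMR with PySpark, as in Laboratory 6), only the map and reduce functions are written by you; the framework performs the shuffle.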
Homeworks
- Homework 1 (due on 15/10, 23.59)
- Homework 2 (due on 5/11, 23.59; extended from 29/10)
- Homework 3 (due on 26/11)
- Homework 4 (due on 17/12, 23.59; extended from 10/12)
- Homework 5 (due on 1/1, 23.59)
Collaboration policy (read carefully!): You can discuss the projects with other students of the course. However, you must understand your solutions well, and the final writeup must be yours and written in isolation. In addition, even though you may discuss how you could implement an algorithm, what type of libraries to use, and so on, the final code must be yours. You may also consult the internet for information, as long as it does not reveal the solution. If a question asks you to design and implement an algorithm for a problem, it is fine to look up, for example, how to resolve a character-encoding issue, but it is not fine to search for the code or the algorithm for the problem you are being asked to solve. For the projects, you can talk with other students of the course about questions on the programming language, libraries, some API issue, and so on, but both the solutions and the programming must be yours. If we find out that you have violated the policy and have copied in any way, you will automatically fail. If you have any doubts about whether something is allowed, ask the instructor.
The same applies to generative AI tools, such as ChatGPT, Bard, Bing, and so on. These can be useful tools in your work, and there are some homework questions in which we ask you explicitly to use them. However, the use of such tools when it is not explicitly allowed will be treated as plagiarism and is strictly prohibited.
Social Networks and Online Markets, 2024
Social Networks and Online Markets
Academic year 2023–2024
We are surrounded by networks. The Internet, one of the most advanced artifacts ever created by humankind, is the paradigmatic example of a "network of networks," with unprecedented technological, economic, and social ramifications. Online social networks have become a major driving phenomenon on the web ever since the Internet expanded to include users and their social systems in its description and operation. Technological networks such as the cellular phone network and the energy grid support many aspects of our daily life. Moreover, a growing number of highly popular user-centric applications on the Internet rely on social networks for mining and filtering information, for providing recommendations, and for ranking documents and services.
In this course we will present the design principles, main structural properties, and theoretical models of online social networks and technological networks; algorithms for data mining in social networks; and basic network-economic issues, with an eye toward current research in the area.
Announcements
For the second part of the course, you need to register on Google Classroom. Email Stefano Leonardi for details.
Remember to register your email; email Aris for details.
Classes start on Monday, February 26.
Topics that we will cover
- Properties of social networks
- Models for social networks
- Community detection
- Spectral techniques for community detection
- Cascading behavior in social networks and epidemics
- Influence maximization and viral marketing
- Influence and homophily
- Opinion dynamics
- Machine learning on graphs
- Introduction to Game Theory and Computational issues
- Price of Anarchy and Selfish Routing
- Stable matching, Markets, Competitive equilibria
- Sponsored Search Auctions, VCG, Revenue Maximization
- Voting and Fair division
- Equilibria and Incentives in blockchains and cryptocurrencies
Instructors
Aris Anagnostopoulos, Sapienza University of Rome
Aristides Gionis, KTH Royal Institute of Technology
Stefano Leonardi, Sapienza University of Rome
When and where:
Monday 15.00–17.00, Via Ariosto 25, Room A5
Wednesday 12.00–16.00, Via Ariosto 25, Room A7
Office hours
You can use the office hours for any question regarding the class material, general questions on networks, the meaning of life, pretty much anything. Send an email to the instructors for arrangement.
Textbook and references
The main textbook for the first part is the book Networks, Crowds, and Markets: Reasoning About a Highly Connected World, by David Easley and Jon Kleinberg.
The main textbook for the second part is the book Twenty Lectures on Algorithmic Game Theory, by Tim Roughgarden.
In addition, we will cover material from various other sources, which we will post online as the course proceeds.
Evaluation format
You will be evaluated for the two parts of the course (social networks, online markets) independently from each other, and your grade will be the average of the two parts.
For the part of social networks, there will be (1) an individual homework (deadline in the first week of June), (2) a small project that can be done in groups of at most two people (deadline 4 working days before the date that you try the oral exam), and (3) a light oral exam.
For the part of online markets, Stefano Leonardi will provide more details.
Alternatively, there is also the option of a written exam for both parts.
Syllabus
Date | Topic | Reading |
February 26 | Introduction to social networks and online markets, Properties of complex networks. | Chapters 1, 2, slides |
February 28 | Properties of social networks, tie strength, homophily, triadic closure, affiliation networks | Chapters 3–3.1, 4–4.3, slides, notes |
March 4 | Modeling phenomena and social networks, the Erdős–Rényi random-graph model | Notes |
March 6 | The preferential attachment model, Epidemics and influence | Notes, slides |
March 11 | Models of influence, influence maximization in social networks | Paper by D. Kempe, J. Kleinberg, and E. Tardos, slides |
March 13 | Social influence vs. social correlation, the densest-subgraph problem | Chapter 8.4 of the book Social Media Mining by R. Zafarani, M. A. Abbasi, and H. Liu; slides on influence vs. correlation; notes on Charikar's greedy algorithm for the densest subgraph |
March 18 | Community detection and sparsest cut | Notes on spectral graph theory. |
March 20 | Community detection and sparsest cut (cont.) | |
March 25 | Node embeddings based on random walks, introduction to neural networks | Chapter 3.3.0 of Graph Representation Learning by Hamilton, papers on DeepWalk and node2vec; for NNs there are various sources online, for example, Chapter 6 of Deep Learning by Goodfellow, et al. |
March 27 | Graph neural networks | There are a lot of resources on GNNs. I like the chapter of The Science of Deep Learning by Drori; however, it is not available online. A starting point could be this video, although it does not discuss the concept of a computation graph. |
April 3 | Opinion formation in social networks | Slides on opinion formation. |
April 8 | Signed networks | Slides on signed networks. |
April 10 | Temporal networks | Slides on temporal networks. |
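For the opinion-formation lecture (April 3), one classic model is DeGroot averaging, in which every agent repeatedly replaces its opinion with a weighted average of its neighbors' opinions. The sketch below assumes a row-stochastic weight matrix; it is a standard textbook formulation, not necessarily the one in the course slides.

```python
def degroot_step(weights, opinions):
    """One round of DeGroot opinion dynamics.

    weights[i][j] is how much agent i trusts agent j; each row sums to 1.
    """
    n = len(opinions)
    return [sum(weights[i][j] * opinions[j] for j in range(n)) for i in range(n)]

# two agents who trust each other equally converge to the average opinion
w = [[0.5, 0.5], [0.5, 0.5]]
x = [0.0, 1.0]
for _ in range(10):
    x = degroot_step(w, x)
# x == [0.5, 0.5]
```

When the trust matrix is connected and aperiodic, repeated averaging converges to a consensus that is a weighted combination of the initial opinions.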
Homeworks
Collaboration policy (read carefully!): You can discuss the projects with other students of the course. However, you must understand your solutions well, and the final writeup must be yours and written in isolation. In addition, even though you may discuss how you could implement an algorithm, what type of libraries to use, and so on, the final code must be yours. You may also consult the internet for information, as long as it does not reveal the solution. If a question asks you to design and implement an algorithm for a problem, it is fine to look up, for example, how to resolve a character-encoding issue, but it is not fine to search for the code or the algorithm for the problem you are being asked to solve. For the projects, you can talk with other students of the course about questions on the programming language, libraries, some API issue, and so on, but both the solutions and the programming must be yours. If we find out that you have violated the policy and have copied in any way, you will automatically fail. If you have any doubts about whether something is allowed, ask the instructor.
The same applies to generative AI tools, such as ChatGPT. These can be useful tools in your work, and there are some homework questions in which we ask you explicitly to use them. However, the use of such tools when it is not explicitly allowed will be treated as plagiarism and is strictly prohibited.
Data Mining, 2023
Data Mining
Academic year 2023–2024
"The success of companies like Google, Facebook, Amazon, and Netflix, not to mention Wall Street firms and industries from manufacturing and retail to healthcare, is increasingly driven by better tools for extracting meaning from very large quantities of data. 'Data Scientist' is now the hottest job title in Silicon Valley." – Tim O'Reilly
- Data Scientist: The Sexiest Job of the 21st Century
- Find true love with data mining
The course will develop algorithms and statistical techniques for data analysis and mining, with emphasis on massive data sets such as large network data. It will cover the main theoretical and practical aspects behind data mining.
The goal of the course is twofold. First, it will present the main theory behind the analysis of data. Second, it will be hands-on and at the end students will become familiar with various state-of-the-art tools and techniques for analyzing data.
We will use Python for downloading data and implementing various algorithms with its rich libraries, as well as frameworks such as Spark, Storm, Giraph, and TensorFlow for mining large-scale data.
Prerequisites
Students who wish to take this course should be familiar with Python programming.
Announcements
The fourth homework is out. It is due on January 9.
The deadline for the second homework has been extended to December 10.
The third homework is out. It is due on December 3.
The second homework is out. It is due on November 12.
The first homework is out. It is due on October 15.
There is no class on October 2 and October 4.
Make sure to register in the class mailing list. Send an email to Aris for details.
Instructor
Aris Anagnostopoulos, Sapienza University of Rome.
Teaching Assistant (TA)
Gianluca, Sapienza University of Rome.
When and where:
Monday 16.00–20.00, Room A5-A6
Wednesday 14.00–16.00, Room A6
Office hours
You can use the office hours for any question regarding the class material, past or current homeworks, general questions on data mining, the meaning of life, pretty much anything. Send an email to Gianluca or, if needed, to Aris for arrangement.
Textbook and references
The main textbook is "Mining of Massive Datasets," by J. Leskovec, A. Rajaraman, and J. D. Ullman. The book has been updated since the printed version, and you can download the latest version (currently the third) from the book's web site.
In addition, we will use some chapters from other textbooks, all available online:
- C. Aggarwal, "Data Mining: The Textbook," Springer (must be downloaded from Sapienza)
- M. J. Zaki and W. Meira, Jr., "Data Mining and Analysis: Fundamental Concepts and Algorithms," Cambridge University Press
- R. Zafarani, M. A. Abbasi, and H. Liu, "Social Media Mining: An Introduction," Cambridge University Press
- C. D. Manning, P. Raghavan, and H. Schütze, "Introduction to Information Retrieval," Cambridge University Press
- A. Blum, J. Hopcroft, and R. Kannan, "Foundations of Data Science," Cambridge University Press
The following book is not obligatory for the class, but it is a very useful book on the topic of feature engineering:
- Pablo Duboue, "The Art of Feature Engineering," Cambridge University Press
For neural networks and GNNs, a nice book is
- Iddo Drori, "The Science of Deep Learning," Cambridge University Press
Finally, we will cover material from various sources, which we will post online as the course proceeds.
Python resources
The main programming language that we will use in the course is Python 3.
To learn the language you can find a lot of material online. You can start from Python's documentation site: https://www.python.org/doc/.
We will use several libraries in the class. The Anaconda distribution has packaged all of them together and you can download it for free.
If you have problems with the Python installation, you can obtain an Ubuntu virtual machine with Python preinstalled. Contact Gianluca for more information.
Examination format
The evaluation will consist of two parts:
- 4 sets of homeworks
- A final project. Details will be given during the course
Late policy: Every homework must be returned by its due date. Late homeworks lose 10% of the grade if they are up to 1 day (24 h) late, 20% if they are 2 days late, and 30% if they are 3 days late; they receive no credit if they are more than 3 days late. However, you have a bonus of 10 late days, which you can distribute as you wish among the homeworks. The homeworks will be discussed and graded at the end, during the final exam.
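For concreteness, the late policy can be written as a small grading helper; the function name, the integer-day granularity, and the rule of spending bonus days before any penalty applies are my own assumptions about how the policy is administered.

```python
def late_penalty(days_late, bonus_days_left):
    """Return (grade multiplier, bonus days spent) for one homework."""
    # spend bonus late days first; they carry no penalty
    spent = min(days_late, bonus_days_left)
    effective = days_late - spent
    if effective == 0:
        return 1.0, spent  # on time, possibly thanks to bonus days
    if effective > 3:
        return 0.0, spent  # more than 3 days late: no credit
    return 1.0 - 0.1 * effective, spent  # 10% per day, up to 3 days

# 2 days late with no bonus days left: the homework keeps 80% of its grade
multiplier, spent = late_penalty(2, 0)
```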
In addition, we will take into account participation during class.
Syllabus
Chapters for which no book is mentioned refer to "Mining of Massive Datasets" (LRU; see above). For the other textbooks, we refer to them by the authors' initials: A, ZM, ZAL, MRS, BHK.
Date | Topic | Reading |
September 25 | Introduction to data mining and data types, a crash course in probability | LRU Chapters 1.1, 1.3 Introduction to data mining, A crash course on discrete probability Check the background probability chapters below |
September 27 | A crash course in probability (cont.) | |
October 9 | Discussion on HW1, crash course in probability (cont.), similarity and distance measures | Book chapter on quicksort from the book "Probability and Computing" by Mitzenmacher and Upfal; Chapter 3.5 |
October 11 | Discussion about HW 1, similarity and distance measures | Chapter 3.5, A Chapter 3.4 |
October 16 | Similarity and distance measures (cont.), Preprocessing for text mining, the vector-space model, TF-IDF scoring | MRS Chapters 2.0–2.2, 6.2 |
October 18 | Inverted indexes | MRS Chapters 1.0–1.4, 7.1.0 |
October 23 | Shingles, minwise hashing | LRU Chapters 3.0–3.3 |
October 25 | Brief recap of Hadoop, MapReduce, and Spark; construction of large indexes. | LRU Chapters 3.4, 2.0–2.4, Quick introduction to MapReduce |
October 30 | Lab on Apache Spark | |
November 6 | Introduction to clustering, hierarchical clustering, k-means | Chapters 7.0–7.1.2, 7.2, 7.3.0–7.3.2. Chapter on k-means of the book of Christopher M. Bishop |
November 8 | k-means++, introduction to generative models | Paper by D. Arthur and S. Vassilvitskii, Notes |
November 13 | Soft clustering and expectation–maximization, projections in vector spaces. | |
November 15 | Introduction to principal component analysis | Notes sent by email |
November 20 | Principal component analysis (cont.) | BHK Chapters 3.1–3.5, 3.8, 3.9.1, 3.9.2 (you can skip the proofs) |
November 22 | Principal component analysis (cont.) | |
November 27 | Recommender systems | LRU Chapter 9. Instead of the UV decomposition of Chapter 9.4, we looked at the use of SVD and of nonnegative matrix factorization (NMF). For NMF, see A Chapter 6.8; note, however, that because we have missing values, we find the best matrices by stochastic gradient descent, minimizing the RMSE. |
November 29 | Discussion about HW3, Centrality measures | Chapter 2.2 from these notes |
December 4 | PageRank | LRU Chapter 5.1 |
December 6 | Personalized PageRank, embeddings, review of neural networks, Word2Vec | LRU Chapter 5.3, original paper on Word2Vec |
December 11 | Node embeddings, graph neural networks | Papers on DeepWalk and node2vec, material on GNNs sent by email |
December 13 | Lab on graph neural networks | |
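The factorization approach mentioned in the November 27 entry — fitting the latent factors by stochastic gradient descent on the observed ratings only, minimizing the RMSE — can be sketched roughly as follows. The function name, hyperparameters, and toy data below are illustrative, not from the course:

```python
import numpy as np

def factorize_sgd(ratings, n_users, n_items, k=2, lr=0.02, reg=0.05,
                  epochs=1000, seed=0):
    """Latent-factor model fit by stochastic gradient descent.
    `ratings` is a list of (user, item, value) triples; missing entries
    are simply absent, so only observed values drive the fit."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - U[u] @ V[i]        # error on one observed entry
            u_old = U[u].copy()          # use old values for both updates
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * u_old - reg * V[i])
    return U, V

# Toy example: 3 users, 3 items, 6 observed ratings
obs = [(0, 0, 5), (0, 1, 3), (1, 0, 4), (1, 2, 1), (2, 1, 2), (2, 2, 5)]
U, V = factorize_sgd(obs, 3, 3)
rmse = np.sqrt(np.mean([(r - U[u] @ V[i]) ** 2 for u, i, r in obs]))
```

Unknown entries can then be predicted as `U[u] @ V[i]`, which is the point of the method: unlike a full SVD, nothing is assumed about the missing values.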
Homeworks
Check the "Examination format" section above for information about collaborating, being late, and so on.
Handing in: You must hand in the homeworks by the due date and time by sending an email to the TA. Attach (do not link!) a .zip or .tar.gz file with all your answers, and use the subject
[Data Mining class] Homework #
where # is the homework number. After you submit, you will receive an acknowledgement email that your homework has been received and at what date and time. If you have not received an acknowledgement email within 2 days after the deadline then contact Gianluca.
The solutions for the theoretical exercises must contain your answers either typed up or hand written clearly and scanned.
The solutions for the programming assignments must contain the source code, instructions to run it, and the output generated (to the screen or to files).
We will not post the solutions online, but we will present them in class.
- Homework 1 (due: 15/10/2023, 23.59)
- Homework 2 (due: 12/11/2023, 23.59)
- Homework 3 (due: 10/12/2023, 23.59)
- Homework 4 (due: 9/1/2024, 23.59)
Notes, slides, and other material
Book chapters and notes:
Background reading on combinatorics, basic probability, random variables, and basic probability distributions.
Collaboration policy (read carefully!): You may discuss the projects with other students of the course. However, you must understand your solutions well, and the final writeup must be yours and written in isolation. Likewise, even though you may discuss how to implement an algorithm, what type of libraries to use, and so on, the final code must be yours. You may also consult the internet for information, as long as it does not reveal the solution: if a question asks you to design and implement an algorithm for a problem, it is fine to look up, for example, how to resolve a character-encoding issue, but it is not fine to search for the code or the algorithm for the problem you are being asked to solve. For the projects, you can talk with other students of the course about the programming language, libraries, some API issue, and so on, but both the solutions and the programming must be yours.
If we find out that you have violated the policy and have copied in any way, you will automatically fail. If you have any doubts about whether something is allowed, ask the instructor.
The same applies to generative AI tools, such as ChatGPT. These can be useful tools in your work, and some homework questions explicitly ask you to use them. However, using such tools when it is not explicitly allowed is strictly prohibited and will be treated as plagiarism.