Revision as of 10:41, 21 April 2022

Advanced Information Retrieval

  • Course name: Advanced Information Retrieval
  • Course number: N/A

Course Characteristics

What subject area does your course (discipline) belong to?

Computer systems organization; Information systems; Real-time systems; Information retrieval; World Wide Web

Key concepts of the class

  • Data indexing
  • Recommendations
  • Relevance and ranking

What is the purpose of this course?

The course is designed to prepare students to understand and use contemporary tools of information retrieval systems. Students who will later dedicate their engineering or scientific careers to the implementation of search engines, social networks, recommender systems, and other content services will obtain the knowledge and skills necessary to design and implement essential parts of such systems.

Prerequisites

  • CSE101 — Introduction to Programming I and CSE102 — Introduction to Programming II
  • CSE202 — Analytical Geometry and Linear Algebra I and CSE204 — Analytic Geometry And Linear Algebra II: matrix multiplication, matrix decomposition (SVD, ALS) and approximation (matrix norm), sparse matrix, stability of solution (decomposition), vector spaces, metric spaces, manifold, eigenvector and eigenvalue.
  • CSE113 — Philosophy I - (Discrete Math and Logic): graphs, trees, binary trees, balanced trees, metric (proximity) graphs, diameter, clique, path, shortest path.
  • CSE206 — Probability And Statistics: probability, likelihood, conditional probability, Bayesian rule, stochastic matrix and properties.
  • Analysis: DFT, [discrete] gradient.


How can students fill the gap?


Course objectives based on Bloom’s taxonomy

- What should a student remember at the end of the course?

By the end of the course, the students should be able to remember and recognize

  • Terms and definitions used in the area of information retrieval,
  • Search engine and recommender system essential parts,
  • Quality metrics of information retrieval systems,
  • Contemporary approaches to semantic data analysis,
  • Indexing strategies.

- What should a student be able to understand at the end of the course?

By the end of the course, the students should be able to describe and explain

  • How to design a recommender system from scratch,
  • How to evaluate quality of a particular information retrieval system,
  • Core ideas behind system implementation and maintenance,
  • How to identify and fix information retrieval system problems.

- What should a student be able to apply at the end of the course?

By the end of the course, the students should be able to

  • Build a recommender service from scratch,
  • Implement proper index for an unstructured dataset,
  • Plan quality measures for a new recommender service,
  • Run initial data analysis and problem evaluation for a business task related to information retrieval.

Course evaluation

Course grade breakdown (points / proposed points):

  • Labs/seminar classes: 20 / 30
  • Interim performance assessment: 30 / 0
  • Assessments (homework): 0 / 70
  • Exams: 50 / 0

Seven home tasks are worth up to 70 points in total (10 points each). Seven contest labs can bring you up to 5 points each. You may work in teams of up to 3; a team gets +2 points for each successful completion and +3 points for each submission that finishes in the top 3.

Exam and retake planning

Exam

No exam.

Retake 1

The first retake is conducted in the form of a project defense. The student is given a week to prepare. The student picks any technical paper published in the Information Retrieval Journal (https://www.springer.com/journal/10791) within the last 3 years and agrees on it with the professor by the next day to avoid collisions and misunderstandings. The student then implements the paper (this can be a technique, a metric, ...) in a search engine. On the retake day the student presents the paper. The presentation is followed by a Q&A session, after which the student presents the implementation of the paper. The grading criteria are as follows:

  • 30% – paper presentation is clear, discussion of results is full.
  • 30% – the search engine implementation is correct and clear, well-structured, and separated into a dedicated service.
  • 30% – paper implementation is correct.

Retake 2

The second retake is conducted in front of a committee. Four (4) questions are randomly selected for the student: two (2) theoretical questions from the "Test questions for final assessment" lists and two (2) practical questions from the "Typical questions for ongoing performance evaluation" lists. Each question costs 25% of the grade. The student is given 15 minutes to prepare the theoretical questions and then answers in front of the committee. After this, the student is given an additional 40 minutes to solve the practical questions.

Grades range

Course grading range (points / proposed range):

  • A. Excellent: 90-100 / 84-100
  • B. Good: 75-89 / 70-83
  • C. Satisfactory: 60-74 / 60-69
  • D. Poor: 0-59 / 0-59

Resources and reference material

Main textbook:

  • "An Introduction to Information Retrieval" by Christopher D. Manning, Prabhakar Raghavan and Hinrich Schütze, Cambridge University Press (any edition)

Other reference material:

  • “Natural Language Processing with Python – Analyzing Text with the Natural Language Toolkit,” Steven Bird, Ewan Klein, and Edward Loper. [link]

Course Sections

The main sections of the course and the approximate hour distribution between them are as follows:

Course Sections

  • Section 1: Introduction. Crawling and quality basics (16 teaching hours)
  • Section 2: Text indexing and language processing (20 teaching hours)
  • Section 3: Advanced index data structures (8 teaching hours)
  • Section 4: Advanced retrieval topics. Media retrieval (16 teaching hours)

Section 1

Section title:

Introduction. Crawling and quality basics

Topics covered in this section:

  • Introduction to information retrieval
  • Crawling
  • Quality assessment

What forms of evaluation were used to test students’ performance in this section?

  • Development of individual parts of software product code: yes
  • Homework and group projects: yes
  • Midterm evaluation: no
  • Testing (written or computer based): no
  • Reports: no
  • Essays: no
  • Oral polls: no
  • Discussions: no

Typical questions for ongoing performance evaluation within this section

  1. Enumerate the limitations of web crawling.
  2. Propose a strategy for A/B testing.
  3. Propose a recommender quality metric.
  4. Implement the DCG metric.
  5. Discuss a relevance metric.
  6. Crawl a website with respect to robots.txt.
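
Several of these questions revolve around ranking quality metrics such as DCG. A minimal Python sketch (function names are illustrative; the log2(position + 1) discount used here is one of several common conventions):

```python
import math

def dcg(relevances, k=None):
    """Discounted Cumulative Gain for a ranked list of graded relevances."""
    if k is not None:
        relevances = relevances[:k]
    # positions are counted from 1; discount is log2(position + 1)
    return sum(rel / math.log2(pos + 1)
               for pos, rel in enumerate(relevances, start=1))

def ndcg(relevances, k=None):
    """DCG normalized by the DCG of the ideally reordered list."""
    ideal = dcg(sorted(relevances, reverse=True), k)
    return dcg(relevances, k) / ideal if ideal > 0 else 0.0
```

Since NDCG divides by the ideal ordering's DCG, it always lies in [0, 1], which makes it comparable across queries.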

Typical questions for seminar classes (labs) within this section

  1. Show how to parse a dynamic web page.
  2. Provide a framework to accept/reject A/B testing results.
  3. Compute DCG for an example query for random search engine.
  4. Implement a metric for a recommender system.
  5. Implement pFound.
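
One of the labs above asks for pFound, the cascade browsing metric popularized by Yandex: the user scans results top-down, stops when satisfied, and may abandon the session at each step. A sketch, assuming per-result relevance probabilities are already estimated (0.15 is the commonly cited abandonment probability; treat it as a tunable parameter):

```python
def pfound(p_rel, p_break=0.15):
    """pFound cascade metric.
    p_rel[i] -- probability that result i satisfies the user.
    p_break  -- probability the user abandons the session at each step.
    """
    p_look = 1.0   # probability the user examines the current result
    total = 0.0
    for p in p_rel:
        total += p_look * p
        # the user continues only if not satisfied and not abandoned
        p_look *= (1.0 - p) * (1.0 - p_break)
    return total
```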

Test questions for final assessment in this section

  1. Implement text crawler for a news site.
  2. What is SBS (side-by-side) and how is it used in search engines?
  3. Compare pFound with CTR and with DCG.
  4. Explain how A/B testing works.

Section 2

Section title:

Text indexing and language processing

Topics covered in this section:

  • Building inverted index for text documents. Boolean retrieval model.
  • Language, tokenization, stemming, searching, scoring.
  • Spellchecking.
  • Language model. Topic model.
  • Vector model for texts.
  • ML for text embedding.

What forms of evaluation were used to test students’ performance in this section?

  • Development of individual parts of software product code: yes
  • Homework and group projects: yes
  • Midterm evaluation: no
  • Testing (written or computer based): no
  • Reports: no
  • Essays: no
  • Oral polls: no
  • Discussions: no

Typical questions for ongoing performance evaluation within this section

  1. Build inverted index for a text.
  2. Tokenize a text.
  3. Implement simple spellchecker.
  4. Embed the text with a model.
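
The core of this section, an inverted index with Boolean retrieval, fits in a short Python sketch (the tokenizer is deliberately crude; a real analyzer would also normalize, stem, and handle stop words):

```python
import re
from collections import defaultdict

def tokenize(text):
    """Lowercase word tokenizer -- a crude stand-in for a real analyzer."""
    return re.findall(r"[a-z0-9]+", text.lower())

def build_inverted_index(docs):
    """Map each term to the sorted list of ids of documents containing it."""
    index = defaultdict(set)
    for doc_id, text in enumerate(docs):
        for term in tokenize(text):
            index[term].add(doc_id)
    return {term: sorted(ids) for term, ids in index.items()}

def boolean_and(index, *terms):
    """Boolean retrieval: documents containing all query terms."""
    postings = [set(index.get(t, ())) for t in terms]
    return sorted(set.intersection(*postings)) if postings else []
```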

Typical questions for seminar classes (labs) within this section

  1. Build inverted index for a set of web pages.
  2. Build a distribution of stems/lexemes for a text.
  3. Choose and implement persistent index for a given text collection.
  4. Visualize a dataset for text classification.

Test questions for final assessment in this section

  1. Explain how (and why) KD-trees work.
  2. What are the weak points of an inverted index?
  3. Compare different text vectorization approaches.
  4. Compare tolerant retrieval to spellchecking.
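
When comparing text vectorization approaches, the classical baseline is TF-IDF with cosine similarity. A sketch of one common weighting variant (raw term frequency, log inverse document frequency; many other variants exist):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF with tf = raw count and idf = log(N / df), one common variant."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(docs)
    df = Counter(term for doc in tokenized for term in set(doc))
    idf = {term: math.log(n / count) for term, count in df.items()}
    # sparse vectors as dicts {term: weight}
    return [{t: c * idf[t] for t, c in Counter(doc).items()} for doc in tokenized]

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```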

Section 3

Section title:

Advanced index data structures

Topics covered in this section:

  • Vector-based tree data structures.
  • Graph-based data structures. Inverted index and multi-index.

What forms of evaluation were used to test students’ performance in this section?

  • Development of individual parts of software product code: yes
  • Homework and group projects: yes
  • Midterm evaluation: no
  • Testing (written or computer based): no
  • Reports: no
  • Essays: no
  • Oral polls: no
  • Discussions: no

Typical questions for ongoing performance evaluation within this section

  1. Build kd-tree index for a given dataset.
  2. Why do kd-trees perform badly in a 100-dimensional space?
  3. What is the difference between metric space and vector space?
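
A kd-tree with nearest-neighbour search can be sketched as follows (dict-based nodes for brevity). Note that the branch pruning that makes this fast in 2-3 dimensions is exactly what stops working in ~100 dimensions: almost no branch can be pruned, and the search degenerates toward a linear scan.

```python
def build_kdtree(points, depth=0):
    """Recursively build a kd-tree, splitting on coordinates round-robin."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def nearest(node, target, best=None):
    """Nearest-neighbour search; returns (squared distance, point)."""
    if node is None:
        return best
    dist = sum((a - b) ** 2 for a, b in zip(node["point"], target))
    if best is None or dist < best[0]:
        best = (dist, node["point"])
    axis = node["axis"]
    diff = target[axis] - node["point"][axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, target, best)
    # descend the far branch only if the splitting plane is closer than best
    if diff ** 2 < best[0]:
        best = nearest(far, target, best)
    return best
```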

Typical questions for seminar classes (labs) within this section

  1. Build (H)NSW index for a dataset.
  2. Compare HNSW to Annoy index.
  3. What metric space index structures do you know?

Test questions for final assessment in this section

  1. Compare an inverted index to HNSW in terms of speed and memory consumption.
  2. Choose the best index for a given dataset.
  3. Implement range search in KD-tree.

Section 4

Section title:

Advanced retrieval topics. Media retrieval

Topics covered in this section:

  • Image and video processing
  • Image understanding
  • Video understanding
  • Audio processing
  • Speech-to-text
  • Relevance feedback
  • PageRank
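
PageRank from the list above reduces to power iteration over the link graph. A minimal sketch, assuming the graph is given as an adjacency dict and using the conventional 0.85 damping factor:

```python
def pagerank(links, damping=0.85, iters=50):
    """Power iteration for PageRank on {node: [outgoing links]}."""
    nodes = list(links)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iters):
        new = {node: (1.0 - damping) / n for node in nodes}
        for node, outs in links.items():
            if outs:
                share = rank[node] / len(outs)
                for out in outs:
                    new[out] += damping * share
            else:
                # dangling node: spread its rank uniformly over all nodes
                for other in nodes:
                    new[other] += damping * rank[node] / n
        rank = new
    return rank
```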

What forms of evaluation were used to test students’ performance in this section?

  • Development of individual parts of software product code: yes
  • Homework and group projects: yes
  • Midterm evaluation: no
  • Testing (written or computer based): no
  • Reports: no
  • Essays: no
  • Oral polls: no
  • Discussions: no

Typical questions for ongoing performance evaluation within this section

  1. Extract semantic information from images.
  2. Build an image hash.
  3. Build a spectral representation of a song.
  4. What is relevance feedback?
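
Building an image hash is often done with a perceptual hash such as aHash: downsample, average, threshold. A self-contained sketch that takes the image as a 2-D list of grayscale values (a real pipeline would first resize the image with an image library; here we block-average and assume dimensions are multiples of hash_size):

```python
def average_hash(gray, hash_size=8):
    """aHash of a grayscale image given as a 2-D list of 0-255 values."""
    h, w = len(gray), len(gray[0])
    bh, bw = h // hash_size, w // hash_size
    # downsample by averaging hash_size x hash_size blocks
    cells = [
        sum(gray[i][j]
            for i in range(r * bh, (r + 1) * bh)
            for j in range(c * bw, (c + 1) * bw)) / (bh * bw)
        for r in range(hash_size) for c in range(hash_size)
    ]
    mean = sum(cells) / len(cells)
    # one bit per cell: brighter than the mean or not
    return tuple(1 if v > mean else 0 for v in cells)

def hamming(h1, h2):
    """Near-duplicate images yield hashes with a small Hamming distance."""
    return sum(a != b for a, b in zip(h1, h2))
```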

Typical questions for seminar classes (labs) within this section

  1. Build a "search by color" feature.
  2. Extract scenes from video.
  3. Write a voice-controlled search.
  4. Implement semantic search within an unlabelled image dataset.

Test questions for final assessment in this section

  1. What are the approaches to image understanding?
  2. How to cluster a video into scenes and shots?
  3. How does speech-to-text technology work?
  4. How to build audio fingerprints?
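
Audio fingerprints are typically built from peaks of a spectrogram, i.e. short-window Fourier transforms of the signal. As a building block, a naive DFT magnitude spectrum (real systems use an FFT; this O(n^2) version is for illustration only):

```python
import cmath

def dft_magnitudes(signal):
    """Naive discrete Fourier transform; returns the magnitude spectrum."""
    n = len(signal)
    return [
        abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)))
        for k in range(n)
    ]
```

Hashing the locations of the strongest time-frequency peaks of such spectra is the basic idea behind landmark-style audio fingerprinting.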