= Information Retrieval =

* '''Course name''': Information Retrieval
* '''Code discipline''': CSE306
* '''Subject area''': Data Science; Computer systems organization; Information systems; Real-time systems; Information retrieval; World Wide Web; Recommender systems

== Short Description ==

The course gives an introduction to the practical and theoretical aspects of information search and recommender systems. It covers the following concepts: Indexing; Search quality assessment; Relevance; Ranking; Information retrieval; Query; Recommendations; Multimedia retrieval.
== Prerequisites ==

=== Prerequisite subjects ===

* [https://eduwiki.innopolis.university/index.php/BSc:AnalyticGeometryAndLinearAlgebraII CSE204 — Analytic Geometry And Linear Algebra II]: matrix multiplication, matrix decomposition (SVD, ALS) and approximation (matrix norm), sparse matrices, stability of solutions (decompositions), vector spaces, metric spaces, manifolds, eigenvectors and eigenvalues.
* [https://eduwiki.innopolis.university/index.php/BSc:Logic_and_Discrete_Mathematics CSE113 — Philosophy I (Discrete Math and Logic)]: graphs, trees, binary trees, balanced trees, metric (proximity) graphs, diameter, clique, path, shortest path.
* [https://eduwiki.innopolis.university/index.php/BSc:ProbabilityAndStatistics CSE206 — Probability And Statistics]: probability, likelihood, conditional probability, Bayes' rule, stochastic matrices and their properties. Analysis: DFT, [discrete] gradient.

=== Prerequisite topics ===
== Course Topics ==

{| class="wikitable"
|+ Course Sections and Topics
|-
! Section !! Topics within the section
|-
| Information retrieval basics ||
# Introduction to IR, major concepts.
# Crawling and Web.
# Quality assessment.
|-
| Text processing and indexing ||
# Building inverted index for text documents. Boolean retrieval model.
# Language, tokenization, stemming, searching, scoring.
# Spellchecking and wildcard search.
# Suggest and query expansion.
# Language modelling. Topic modelling.
|-
| Vector model and vector indexing ||
# Vector model
# Machine learning for vector embedding
# Vector-based index structures
|-
| Advanced topics. Media processing ||
# Image and video processing, understanding and indexing
# Content-based image retrieval
# Audio retrieval
# Relevance feedback
|}
== Intended Learning Outcomes (ILOs) ==

=== What is the main purpose of this course? ===

The course is designed to prepare students to understand the background theories of information retrieval systems and to introduce several such systems. It focuses on how these systems are implemented, evaluated, and analysed. Throughout the course, students take part in discussions, readings, and assignments that expose them to real-world systems. The technologies and algorithms covered include machine learning, data mining, natural language processing, and data indexing.

=== ILOs defined at three levels ===

==== Level 1: What concepts should a student know/remember/explain? ====

By the end of the course, students should be able to recall and explain:

* Terms and definitions used in the area of information retrieval,
* The essential parts of search engines and recommender systems,
* Quality metrics of information retrieval systems,
* Contemporary approaches to semantic data analysis,
* Indexing strategies.

==== Level 2: What basic practical skills should a student be able to perform? ====

By the end of the course, students should understand:

* The background theories behind information retrieval systems,
* How to design a recommender system from scratch,
* How to evaluate the quality of a particular information retrieval system,
* The core ideas behind system implementation and maintenance,
* How to identify and fix problems in information retrieval systems.

==== Level 3: What complex comprehensive skills should a student be able to apply in real-life scenarios? ====

By the end of the course, students should be able to:

* Build a recommender service from scratch,
* Implement a proper index for an unstructured dataset,
* Plan quality measures for a new recommender service,
* Run initial data analysis and problem evaluation for a business task related to information retrieval.
== Grading ==

=== Course grading range ===

{| class="wikitable"
|-
! Grade !! Range !! Description of performance
|-
| A. Excellent || 90-100 || -
|-
| B. Good || 75-89 || -
|-
| C. Satisfactory || 60-74 || -
|-
| D. Poor || 0-59 || -
|}
   
=== Course activities and grading breakdown ===

{| class="wikitable"
|+ Course grade breakdown
|-
! Activity Type !! Percentage of the overall course grade
|-
| Assignments || 60
|-
| Quizzes || 40
|-
| Exams || 0
|}
   
=== Recommendations for students on how to succeed in the course ===

The simplest way to succeed is to participate in the labs and complete the coding assignments on time; this guarantees up to 60% of the grade. Participation in the lecture quizzes differentiates the final grade.
== Resources, literature and reference materials ==

=== Open access resources ===

* Manning, Raghavan, Schütze. ''An Introduction to Information Retrieval''. Cambridge University Press, 2008.
* Baeza-Yates, Ribeiro-Neto. ''Modern Information Retrieval''. Addison-Wesley, 2011.
* Büttcher, Clarke, Cormack. ''Information Retrieval: Implementing and Evaluating Search Engines''. MIT Press, 2010.
* [https://github.com/IUCVLab/information-retrieval Course repository on GitHub].

=== Closed access resources ===

=== Software and tools used within the course ===
= Teaching Methodology: Methods, techniques, & activities =

== Activities and Teaching Methods ==

{| class="wikitable"
|+ Activities within each section
|-
! Learning Activities !! Section 1 !! Section 2 !! Section 3 !! Section 4
|-
| Development of individual parts of software product code || 1 || 1 || 1 || 1
|-
| Homework and group projects || 1 || 1 || 1 || 1
|-
| Testing (written or computer based) || 1 || 1 || 1 || 1
|}
== Formative Assessment and Course Activities ==
=== Ongoing performance assessment ===
==== Section 1 ====
{| class="wikitable"
|-
! Activity Type !! Content !! Is Graded?
|-
| Question || Enumerate limitations for web crawling. || 1
|-
| Question || Propose a strategy for A/B testing. || 1
|-
| Question || Propose a recommender quality metric. || 1
|-
| Question || Implement the DCG metric. || 1
|-
| Question || Discuss a relevance metric. || 1
|-
| Question || Crawl a website with respect to robots.txt. || 1
|-
| Question || What is a typical IR system architecture? || 0
|-
| Question || Show how to parse a dynamic web page. || 0
|-
| Question || Provide a framework to accept/reject A/B testing results. || 0
|-
| Question || Compute DCG for an example query on a random search engine. || 0
|-
| Question || Implement a metric for a recommender system. || 0
|-
| Question || Implement pFound. || 0
|}
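For the DCG and pFound items above, the following sketch (not part of the official course materials) shows minimal Python implementations; the pFound part follows the commonly cited Yandex cascade formulation, and the break probability of 0.15 is an assumed illustrative value.

<syntaxhighlight lang="python">
from math import log2

def dcg(relevances):
    """Discounted Cumulative Gain for a ranked list of graded relevances."""
    # pos is 0-based, so the discount is log2(rank + 1) for 1-based ranks
    return sum(rel / log2(pos + 2) for pos, rel in enumerate(relevances))

def ndcg(relevances):
    """DCG normalised by the ideal (sorted) ranking."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

def pfound(p_rel, p_break=0.15):
    """Cascade metric: the user scans top-down and may stop at each step."""
    p_look, total = 1.0, 0.0
    for p in p_rel:
        total += p_look * p
        p_look *= (1 - p) * (1 - p_break)
    return total

if __name__ == "__main__":
    ranked = [3, 2, 3, 0, 1, 2]                 # graded relevance of one result page
    print(round(dcg(ranked), 3), round(ndcg(ranked), 3))
    print(round(pfound([0.4, 0.2, 0.1]), 3))    # relevance probabilities per position
</syntaxhighlight>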
==== Section 2 ====

{| class="wikitable"
|-
! Activity Type !! Content !! Is Graded?
|-
| Question || Build inverted index for a text. || 1
|-
| Question || Tokenize a text. || 1
|-
| Question || Implement simple spellchecker. || 1
|-
| Question || Implement wildcard search. || 1
|-
| Question || Build inverted index for a set of web pages. || 0
|-
| Question || Build a distribution of stems/lexemes for a text. || 0
|-
| Question || Choose and implement case-insensitive index for a given text collection. || 0
|-
| Question || Choose and implement semantic vector-based index for a given text collection. || 0
|}
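As a reference point for the indexing items above, here is a minimal sketch of an inverted index with naive tokenization and Boolean AND retrieval; the toy documents are made up for illustration.

<syntaxhighlight lang="python">
import re
from collections import defaultdict

def tokenize(text):
    """Naive tokenizer: lowercase word characters only (no stemming)."""
    return re.findall(r"\w+", text.lower())

def build_inverted_index(docs):
    """Map each term to the set of document IDs that contain it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in tokenize(text):
            index[term].add(doc_id)
    return index

def boolean_and(index, query):
    """Boolean retrieval: documents containing every query term."""
    postings = [index.get(term, set()) for term in tokenize(query)]
    return set.intersection(*postings) if postings else set()

docs = {1: "To be or not to be",
        2: "The Web crawler fetches pages",
        3: "Crawler politeness and robots.txt"}
idx = build_inverted_index(docs)
print(boolean_and(idx, "crawler pages"))   # -> {2}
</syntaxhighlight>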
==== Section 3 ====

{| class="wikitable"
|-
! Activity Type !! Content !! Is Graded?
|-
| Question || Embed the text with an ML model. || 1
|-
| Question || Build term-document matrix. || 1
|-
| Question || Build semantic index for a dataset using Annoy. || 1
|-
| Question || Build kd-tree index for a given dataset. || 1
|-
| Question || Why do kd-trees work badly in a 100-dimensional environment? || 1
|-
| Question || What is the difference between metric space and vector space? || 1
|-
| Question || Choose and implement persistent index for a given text collection. || 0
|-
| Question || Visualize a dataset for text classification. || 0
|-
| Question || Build (H)NSW index for a dataset. || 0
|-
| Question || Compare HNSW to Annoy index. || 0
|-
| Question || What metric space index structures do you know? || 0
|}
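For the Annoy item above, a minimal usage sketch is given below; the dimensionality, the angular metric, the random stand-in vectors, and the number of trees are illustrative assumptions, not course requirements.

<syntaxhighlight lang="python">
import random
from annoy import AnnoyIndex  # pip install annoy

DIM = 64                            # embedding dimensionality (arbitrary here)
index = AnnoyIndex(DIM, "angular")  # angular distance ~ cosine similarity

# In practice the vectors would come from an embedding model;
# random vectors stand in for document embeddings.
vectors = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(1000)]
for item_id, vec in enumerate(vectors):
    index.add_item(item_id, vec)

index.build(10)                     # 10 trees: more trees, better recall
query = vectors[0]
print(index.get_nns_by_vector(query, 5))  # 5 approximate nearest neighbours
</syntaxhighlight>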
 
==== Section 4 ====

{| class="wikitable"
|-
! Activity Type !! Content !! Is Graded?
|-
| Question || Extract semantic information from images. || 1
|-
| Question || Build an image hash. || 1
|-
| Question || Build a spectral representation of a song. || 1
|-
| Question || What is relevance feedback? || 1
|-
| Question || Build a "search by color" feature. || 0
|-
| Question || Extract scenes from video. || 0
|-
| Question || Write a voice-controlled search. || 0
|-
| Question || Semantic search within an unlabelled image dataset. || 0
|}
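For the image-hash item, a minimal average-hash (aHash) sketch with Pillow is shown below; the 8x8 size and Hamming-distance comparison are common choices assumed here for illustration.

<syntaxhighlight lang="python">
from PIL import Image  # pip install Pillow

def average_hash(path, hash_size=8):
    """aHash: downscale, convert to grayscale, threshold each pixel by the mean."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming(h1, h2):
    """Number of differing bits: a small distance suggests near-duplicate images."""
    return bin(h1 ^ h2).count("1")

# Hypothetical file names for illustration:
# print(hamming(average_hash("a.jpg"), average_hash("b.jpg")))
</syntaxhighlight>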
=== Final assessment ===

'''Section 1'''

# Implement a text crawler for a news site.
# What is SBS (side-by-side) testing and how is it used in search engines?
# Compare pFound with CTR and with DCG.
# Explain how A/B testing works.
# Describe the PageRank algorithm.
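A minimal sketch of a polite crawler for the first task, using only the Python standard library; the start URL and user agent are placeholders.

<syntaxhighlight lang="python">
import time
import urllib.robotparser
from urllib.parse import urlparse
from urllib.request import Request, urlopen

START = "https://example.com/news"  # placeholder start URL
AGENT = "IRCourseBot"               # hypothetical user agent

# Check robots.txt before fetching anything.
parts = urlparse(START)
rp = urllib.robotparser.RobotFileParser()
rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
rp.read()

if rp.can_fetch(AGENT, START):
    req = Request(START, headers={"User-Agent": AGENT})
    html = urlopen(req).read().decode("utf-8", errors="ignore")
    print(f"fetched {len(html)} characters")
    time.sleep(1)  # politeness delay before the next request
else:
    print("disallowed by robots.txt")
</syntaxhighlight>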
'''Section 2'''

# Explain how (and why) KD-trees work.
# What are the weak places of an inverted index?
# Compare different text vectorization approaches.
# Compare tolerant retrieval to spellchecking.
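For the spellchecking item, a minimal edit-distance corrector is sketched below: it returns the vocabulary word with the smallest Levenshtein distance to the query term. The toy vocabulary is illustrative.

<syntaxhighlight lang="python">
def levenshtein(a, b):
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def correct(word, vocabulary):
    """Return the closest vocabulary word by edit distance."""
    return min(vocabulary, key=lambda v: levenshtein(word, v))

vocab = ["retrieval", "ranking", "relevance", "query"]
print(correct("retreival", vocab))  # -> "retrieval"
</syntaxhighlight>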
'''Section 3'''

# Compare inverted index to HNSW in terms of speed and memory consumption.
# Choose the best index for a given dataset.
# Implement range search in a KD-tree.
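For the range-search task, a from-scratch 2-D KD-tree sketch that reports all points inside an axis-aligned box; it is an illustration, not the course's reference implementation.

<syntaxhighlight lang="python">
def build_kdtree(points, depth=0):
    """Recursively split points on alternating coordinates."""
    if not points:
        return None
    axis = depth % 2                      # 2-D: alternate between x and y
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def range_search(node, lo, hi, out):
    """Collect points p with lo[d] <= p[d] <= hi[d] in every dimension d."""
    if node is None:
        return
    p, axis = node["point"], node["axis"]
    if all(lo[d] <= p[d] <= hi[d] for d in range(2)):
        out.append(p)
    if lo[axis] <= p[axis]:               # box may extend into the left subtree
        range_search(node["left"], lo, hi, out)
    if p[axis] <= hi[axis]:               # box may extend into the right subtree
        range_search(node["right"], lo, hi, out)

tree = build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
found = []
range_search(tree, lo=(3, 1), hi=(8, 5), out=found)
print(found)  # points inside the box [3, 8] x [1, 5]
</syntaxhighlight>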
'''Section 4'''

# What are the approaches to image understanding?
# How to cluster a video into scenes and shots?
# How does speech-to-text technology work?
# How to build audio fingerprints?
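For the audio items, a minimal magnitude-spectrogram sketch built on NumPy's FFT; a fingerprinting pipeline (e.g. spectral peak hashing) would start from such a representation. The synthetic sine wave stands in for a real song.

<syntaxhighlight lang="python">
import numpy as np

def spectrogram(signal, frame=1024, hop=512):
    """Magnitude STFT: slice the signal into overlapping windows and FFT each."""
    window = np.hanning(frame)
    frames = [signal[i:i + frame] * window
              for i in range(0, len(signal) - frame, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

sr = 22050                                   # sampling rate, Hz
t = np.arange(sr * 2) / sr                   # two seconds of audio
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)                            # (num_frames, frame // 2 + 1)
peaks = spec.argmax(axis=1)                  # dominant bin per frame (crude fingerprint)
print(peaks[:5] * sr / 1024)                 # ~440 Hz, up to bin resolution
</syntaxhighlight>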
=== The retake exam ===

'''Section 1'''

# Solve a complex coding problem similar to one of the homework or lab problems.

'''Section 2'''

# Solve a complex coding problem similar to one of the homework or lab problems.

'''Section 3'''

# Solve a complex coding problem similar to one of the homework or lab problems.

'''Section 4'''

# Solve a complex coding problem similar to one of the homework or lab problems.
