
Information Retrieval

  • Course name: Information Retrieval
  • Course number: XYZ
  • Subject area: Data Science; Computer systems organization; Information systems; Real-time systems; Information retrieval; World Wide Web

Course Characteristics

Key concepts of the class

  • Indexing
  • Relevance
  • Ranking
  • Information retrieval
  • Query

What is the purpose of this course?

The course is designed to give students the theoretical background of information retrieval systems and to introduce different types of such systems. The course focuses on the evaluation and analysis of these systems, as well as on how they are implemented. Throughout the course, students take part in discussions, readings, and assignments to gain experience with real-world systems. The technologies and algorithms covered in this class include machine learning, data mining, natural language processing, data indexing, and so on.

Prerequisites

  • Analytic Geometry And Linear Algebra I
  • CSE204 — Analytic Geometry And Linear Algebra II: matrix multiplication, matrix decomposition (SVD, ALS) and approximation (matrix norm), sparse matrix, stability of solution (decomposition), vector spaces, metric spaces, manifold, eigenvector and eigenvalue.


Course Objectives Based on Bloom’s Taxonomy

What should a student remember at the end of the course?

  • Terms and definitions used in the area of information retrieval,
  • The essential parts of search engines and recommender systems,
  • Quality metrics of information retrieval systems,
  • Contemporary approaches to semantic data analysis,
  • Indexing strategies.

What should a student be able to understand at the end of the course?

  • Understand background theories behind information retrieval systems,
  • How to design a recommender system from scratch,
  • How to evaluate the quality of a particular information retrieval system,
  • Core ideas behind system implementation and maintenance,
  • How to identify and fix information retrieval system problems.

What should a student be able to apply at the end of the course?

  • Build a recommender service from scratch,
  • Implement a proper index for an unstructured dataset,
  • Plan quality measures for a new recommender service,
  • Run initial data analysis and problem evaluation for a business task related to information retrieval.

Course evaluation

Course grade breakdown
Grade component                    Default points   Proposed points
Labs/seminar classes                     50                35
Interim performance assessment           25                70
Exams                                    25                 0

There are two types of labs: tutorials and contests. Tutorial labs are followed by homework assignments. Homework covers 70 points of the 100-point grade. There are 7 homework assignments, worth up to 10 points each. A student can also earn extra points (+2 per homework) by submitting solutions to the extra problems. Part of the homework grade can be redistributed to Moodle-based quizzes conducted during the tutorial labs. Contest labs are time-boxed problems (1.5 h) with an element of competition. There are 7 of them, and students can work in groups of up to 2. If a group solves the problem by midnight, each participant gets 2 points for the lab. If the group reaches a top-3 standing during the lab, it gets 3 additional points for the lab (5 in total).
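As a rough worked example of the arithmetic above (a sketch only: the 100-point cap and the simple summation of bonus points are assumptions, not part of the official scheme):

```python
# Hypothetical illustration of the maximum attainable points under the scheme above.
# Assumptions (not stated in the syllabus): bonus points simply add up, and the
# final grade is capped at 100.
HOMEWORKS = 7        # up to 10 points each
BONUS_PER_HW = 2     # extra-problem bonus per homework
CONTESTS = 7         # up to 5 points each (2 for solving by midnight + 3 for top-3)

max_homework = HOMEWORKS * 10          # 70 points, the "interim performance" part
max_bonus = HOMEWORKS * BONUS_PER_HW   # up to +14 bonus points
max_contests = CONTESTS * 5            # 35 points from contest labs

print(max_homework, max_contests, max_bonus)               # 70 35 14
print(min(100, max_homework + max_contests + max_bonus))   # capped total: 100
```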

Exam and retake planning

Exam

There is no exam; the grade is an aggregation of homework, contests, and quizzes.

Retake 1

The first retake is conducted in the form of a project defense. The student is given a week to prepare. The student picks any technical paper from the Information Retrieval Journal published within the last 3 years and gets it approved by the professor by the next day, to avoid collisions and misunderstandings. The student then implements the paper in a search engine (this can be a technique, a metric, ...). On the retake day, the student presents the paper. The presentation is followed by a Q&A session. After the Q&A session, the student presents the implementation of the paper. The grading criteria are as follows:

  • 30% – the paper presentation is clear and the discussion of results is complete.
  • 30% – the search engine implementation is correct and clear, well structured, and implemented as a dedicated, separate service.
  • 30% – the paper implementation is correct.

Retake 2

The second retake is conducted in front of a committee. Four (4) questions are randomly selected for the student: two (2) theoretical questions from "Test questions for final assessment in this section" and two (2) practical questions from "Typical questions for ongoing performance evaluation". Each question is worth 25% of the grade. The student is given 15 minutes to prepare the theoretical questions and then answers in front of the committee. After this, the student is given an additional 40 minutes to solve the practical questions.

Grades range

Course grading range      Proposed range
A. Excellent              84-100
B. Good                   72-83.99
C. Satisfactory           60-71.99
D. Poor                   0-59.99

Resources and reference material

Textbook:

  • Manning, Raghavan, Schütze, Introduction to Information Retrieval, 2008, Cambridge University Press

Reference material:

  • Baeza-Yates, Ribeiro-Neto, Modern Information Retrieval, 2011, Addison-Wesley
  • Büttcher, Clarke, Cormack, Information Retrieval: Implementing and Evaluating Search Engines, 2010, MIT Press
  • Course repository on GitHub.

Course Sections

The main sections of the course and the approximate hour distribution among them are as follows:

Course Sections
Section   Section Title                        Lectures   Seminars   Self-study   Knowledge
Number                                         (hours)    (labs)                  evaluation
1         Information retrieval basics            10         10          20           0
2         Text processing and indexing             10         10          20           0
3         Vector model and vector indexing         12         12          12           0
4         Advanced topics. Media processing        12         12          12           0
          Final examination                                                             4

Section 1

Section title:

Information retrieval basics

Topics covered in this section:

  • Introduction to IR, major concepts.
  • Crawling and the Web (a minimal robots.txt-aware crawling sketch follows this list).
  • Quality assessment.
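For the crawling topic above, a minimal sketch of polite fetching that honours robots.txt, using only the Python standard library; the URL and user-agent string below are placeholders:

```python
# Minimal polite-crawling sketch: check robots.txt before fetching a page.
# The URL and user agent are placeholders for illustration only.
from urllib import robotparser, request

USER_AGENT = "IR-course-bot"           # hypothetical crawler name
page_url = "https://example.com/news"  # hypothetical target page

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()                              # download and parse robots.txt

if rp.can_fetch(USER_AGENT, page_url):
    req = request.Request(page_url, headers={"User-Agent": USER_AGENT})
    html = request.urlopen(req).read().decode("utf-8", errors="ignore")
    print(len(html), "bytes fetched")
else:
    print("robots.txt disallows fetching", page_url)
```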

What forms of evaluation were used to test students’ performance in this section?

  • Development of individual parts of software product code: Yes
  • Homework and group projects: Yes
  • Midterm evaluation: No
  • Testing (written or computer based): Yes
  • Reports: No
  • Essays: No
  • Oral polls: No
  • Discussions: No


Typical questions for ongoing performance evaluation within this section

  1. Enumerate the limitations of web crawling.
  2. Propose a strategy for A/B testing.
  3. Propose a recommender quality metric.
  4. Implement the DCG metric (see the sketch after this list).
  5. Discuss a relevance metric.
  6. Crawl a website respecting robots.txt.
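For item 4 above, a minimal sketch of the DCG metric in its graded-relevance form, together with its normalized variant; the sample relevance judgements are invented:

```python
# DCG (Discounted Cumulative Gain) for a ranked list of graded relevance labels.
import math

def dcg(relevances):
    """DCG = sum_i (2^rel_i - 1) / log2(i + 1), with positions i starting at 1."""
    return sum((2 ** rel - 1) / math.log2(i + 1)
               for i, rel in enumerate(relevances, start=1))

def ndcg(relevances):
    """Normalize by the DCG of the ideal (descending) ordering."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Made-up relevance judgements for the top-5 results of a query.
print(dcg([3, 2, 3, 0, 1]))   # raw DCG
print(ndcg([3, 2, 3, 0, 1]))  # normalized DCG in [0, 1]
```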

Typical questions for seminar classes (labs) within this section

  1. What is a typical IR system architecture?
  2. Show how to parse a dynamic web page.
  3. Provide a framework to accept/reject A/B testing results.
  4. Compute DCG for an example query on an arbitrary search engine.
  5. Implement a metric for a recommender system.
  6. Implement pFound.
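Item 6 asks to implement pFound. Below is a sketch of the metric as it is commonly stated, with the abandonment probability pBreak = 0.15 as a typical default; the per-document relevance probabilities are invented:

```python
# pFound session metric: probability that the user finds a relevant document,
# assuming a top-down scan where the user may abandon the session with pBreak.
def pfound(p_rel, p_break=0.15):
    p_look = 1.0          # probability the user examines the current position
    total = 0.0
    for p in p_rel:
        total += p_look * p
        p_look *= (1 - p) * (1 - p_break)  # continue only if not satisfied and not bored
    return total

# Invented per-document relevance probabilities for one query.
print(pfound([0.4, 0.1, 0.7, 0.0, 0.3]))
```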

Test questions for final assessment in this section

  1. Implement a text crawler for a news site.
  2. What is SBS (side-by-side) and how is it used in search engines?
  3. Compare pFound with CTR and with DCG.
  4. Explain how A/B testing works.
  5. Describe the PageRank algorithm.
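Item 5 asks to describe PageRank. A compact power-iteration sketch over a toy link graph; the graph and the damping factor 0.85 are illustrative:

```python
# PageRank by power iteration on a small toy link graph.
import numpy as np

def pagerank(adj, d=0.85, iters=100):
    """adj[i][j] = 1 if page i links to page j. Returns the PageRank vector."""
    n = len(adj)
    adj = np.asarray(adj, dtype=float)
    out = adj.sum(axis=1)
    # Row-stochastic transition matrix; dangling pages jump uniformly.
    trans = np.where(out[:, None] > 0, adj / np.maximum(out, 1)[:, None], 1.0 / n)
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - d) / n + d * trans.T @ rank
    return rank

# Toy graph: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0.
print(pagerank([[0, 1, 1], [0, 0, 1], [1, 0, 0]]))
```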

Section 2

Section title:

Text processing and indexing

Topics covered in this section:

  • Building an inverted index for text documents. Boolean retrieval model (a minimal sketch follows this list).
  • Language, tokenization, stemming, searching, scoring.
  • Spellchecking and wildcard search.
  • Search suggestions and query expansion.
  • Language modelling. Topic modelling.
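As flagged in the first topic above, a minimal sketch of building an inverted index and answering a Boolean AND query; the three-document collection is made up and the tokenization is deliberately naive:

```python
# Build an inverted index (term -> set of document ids) and answer a Boolean AND query.
from collections import defaultdict

docs = {  # tiny made-up collection
    1: "new home sales top forecasts",
    2: "home sales rise in july",
    3: "increase in home sales in july",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for token in text.lower().split():   # naive whitespace tokenization
        index[token].add(doc_id)

def boolean_and(*terms):
    """Intersect postings; starting from the smallest list keeps the work low."""
    postings = sorted((index.get(t, set()) for t in terms), key=len)
    return set.intersection(*postings) if postings else set()

print(boolean_and("home", "sales", "july"))  # {2, 3}
```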

What forms of evaluation were used to test students’ performance in this section?

  • Development of individual parts of software product code: Yes
  • Homework and group projects: Yes
  • Midterm evaluation: No
  • Testing (written or computer based): Yes
  • Reports: No
  • Essays: No
  • Oral polls: No
  • Discussions: No


Typical questions for ongoing performance evaluation within this section

  1. Build an inverted index for a text.
  2. Tokenize a text.
  3. Implement a simple spellchecker (an edit-distance sketch follows this list).
  4. Implement wildcard search.
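For item 3, a sketch of a dictionary-based spellchecker that suggests the vocabulary word with the smallest Levenshtein distance; the vocabulary is made up, and a real system would also use corpus frequencies as a tie-breaker:

```python
# Simple dictionary-based spellchecker: suggest the vocabulary word
# with the smallest Levenshtein (edit) distance to the query term.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def correct(word, vocabulary):
    return min(vocabulary, key=lambda v: edit_distance(word, v))

# Made-up vocabulary for illustration.
vocab = ["retrieval", "relevance", "ranking", "indexing", "query"]
print(correct("retreival", vocab))  # retrieval
```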

Typical questions for seminar classes (labs) within this section

  1. Build an inverted index for a set of web pages.
  2. Build a distribution of stems/lexemes for a text.
  3. Choose and implement a case-insensitive index for a given text collection.
  4. Choose and implement a semantic vector-based index for a given text collection.

Test questions for final assessment in this section

  1. Explain how (and why) KD-trees work.
  2. What are the weak points of an inverted index?
  3. Compare different text vectorization approaches.
  4. Compare tolerant retrieval to spellchecking.

Section 3

Section title:

Vector model and vector indexing

Topics covered in this section:

  • Vector model
  • Machine learning for vector embedding
  • Vector-based index structures

What forms of evaluation were used to test students’ performance in this section?

  • Development of individual parts of software product code: Yes
  • Homework and group projects: Yes
  • Midterm evaluation: No
  • Testing (written or computer based): Yes
  • Reports: No
  • Essays: No
  • Oral polls: No
  • Discussions: No


Typical questions for ongoing performance evaluation within this section

  1. Embed the text with an ML model.
  2. Build a term-document matrix (see the sketch after this list).
  3. Build a semantic index for a dataset using Annoy.
  4. Build a kd-tree index for a given dataset.
  5. Why do kd-trees work poorly in a 100-dimensional space?
  6. What is the difference between a metric space and a vector space?
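For item 2, a sketch of a TF-IDF weighted term-document matrix built from a tiny invented collection; rows are terms and columns are documents:

```python
# Build a TF-IDF weighted term-document matrix for a tiny invented collection.
import math
import numpy as np

docs = ["vector model for retrieval",
        "vector index structures",
        "machine learning for vector embedding"]

tokenized = [d.lower().split() for d in docs]
vocab = sorted({t for doc in tokenized for t in doc})
df = {t: sum(t in doc for doc in tokenized) for t in vocab}  # document frequency

n_docs = len(docs)
matrix = np.zeros((len(vocab), n_docs))
for j, doc in enumerate(tokenized):
    for i, term in enumerate(vocab):
        tf = doc.count(term)
        if tf:
            matrix[i, j] = tf * math.log(n_docs / df[term])  # tf-idf weight

print(vocab)
print(matrix.round(2))  # rows = terms, columns = documents
```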

Typical questions for seminar classes (labs) within this section

  1. Choose and implement a persistent index for a given text collection.
  2. Visualize a dataset for text classification.
  3. Build an (H)NSW index for a dataset.
  4. Compare HNSW to an Annoy index (an Annoy usage sketch follows this list).
  5. What metric-space index structures do you know?
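For item 4, a short sketch of building an approximate nearest-neighbour index with the annoy package (assumed to be installed, e.g. via pip install annoy); the random vectors stand in for real document embeddings:

```python
# Approximate nearest-neighbour search with Annoy over random stand-in vectors.
import numpy as np
from annoy import AnnoyIndex  # assumes the `annoy` package is installed

DIM = 64
index = AnnoyIndex(DIM, "angular")   # angular distance ~ cosine similarity

vectors = np.random.RandomState(0).rand(1000, DIM).astype("float32")
for i, v in enumerate(vectors):
    index.add_item(i, v.tolist())
index.build(10)                      # 10 trees; more trees -> better recall, bigger index

query = vectors[0].tolist()
print(index.get_nns_by_vector(query, 5, include_distances=True))
```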

Test questions for final assessment in this section

  1. Compare an inverted index to HNSW in terms of speed and memory consumption.
  2. Choose the best index for a given dataset.
  3. Implement range search in a KD-tree.
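For item 3, a minimal 2-dimensional kd-tree with axis-aligned range search; the points and the query box are made up:

```python
# Minimal kd-tree with axis-aligned range search (2-d points for illustration).
class Node:
    def __init__(self, point, axis, left=None, right=None):
        self.point, self.axis, self.left, self.right = point, axis, left, right

def build(points, depth=0):
    if not points:
        return None
    axis = depth % 2                       # alternate the splitting axis
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return Node(points[mid], axis,
                build(points[:mid], depth + 1),
                build(points[mid + 1:], depth + 1))

def range_search(node, lo, hi, out):
    """Collect points p with lo[d] <= p[d] <= hi[d] for every dimension d."""
    if node is None:
        return out
    if all(lo[d] <= node.point[d] <= hi[d] for d in range(2)):
        out.append(node.point)
    # Recurse only into subtrees that can intersect the query box.
    if lo[node.axis] <= node.point[node.axis]:
        range_search(node.left, lo, hi, out)
    if hi[node.axis] >= node.point[node.axis]:
        range_search(node.right, lo, hi, out)
    return out

tree = build([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
print(range_search(tree, lo=(3, 1), hi=(8, 5), out=[]))  # points inside the box
```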

Section 4

Section title:

Advanced topics. Media processing

Topics covered in this section:

  • Image and video processing, understanding and indexing
  • Content-based image retrieval
  • Audio retrieval
  • Hum to search
  • Relevance feedback
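For the relevance feedback topic above, a sketch of the classic Rocchio update; the weights alpha = 1.0, beta = 0.75, gamma = 0.15 are commonly quoted defaults, and the vectors are toy term weights:

```python
# Rocchio relevance feedback: move the query vector towards relevant documents
# and away from non-relevant ones.
import numpy as np

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    q = alpha * query
    if len(relevant):
        q = q + beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q = q - gamma * np.mean(nonrelevant, axis=0)
    return q

# Toy 4-dimensional term-weight vectors.
query = np.array([1.0, 0.0, 0.0, 1.0])
relevant = np.array([[1.0, 1.0, 0.0, 1.0], [0.5, 1.0, 0.0, 0.0]])
nonrelevant = np.array([[0.0, 0.0, 1.0, 0.0]])
print(rocchio(query, relevant, nonrelevant))
```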

What forms of evaluation were used to test students’ performance in this section?

  • Development of individual parts of software product code: Yes
  • Homework and group projects: Yes
  • Midterm evaluation: No
  • Testing (written or computer based): Yes
  • Reports: No
  • Essays: No
  • Oral polls: No
  • Discussions: No


Typical questions for ongoing performance evaluation within this section

  1. Extract semantic information from images.
  2. Build an image hash (an average-hash sketch follows this list).
  3. Build a spectral representation of a song.
  4. What is relevance feedback?
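For item 2, a sketch of a perceptual average hash (aHash) built with Pillow and NumPy (both assumed installed); the image file names are placeholders:

```python
# Average hash (aHash): shrink to 8x8 greyscale, threshold at the mean,
# and pack the bits into a 64-bit fingerprint. Near-duplicate images give
# hashes with a small Hamming distance.
import numpy as np
from PIL import Image  # assumes Pillow is installed

def average_hash(path, size=8):
    img = Image.open(path).convert("L").resize((size, size))
    pixels = np.asarray(img, dtype=np.float32)
    bits = (pixels > pixels.mean()).flatten()
    return sum(int(b) << i for i, b in enumerate(bits))

def hamming(a, b):
    return bin(a ^ b).count("1")

# "cat.jpg" and "cat_resized.jpg" are placeholder file names.
h1 = average_hash("cat.jpg")
h2 = average_hash("cat_resized.jpg")
print(hamming(h1, h2))  # small distance -> likely near-duplicates
```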

Typical questions for seminar classes (labs) within this section

  1. Build a "search by color" feature.
  2. Extract scenes from a video.
  3. Write a voice-controlled search.
  4. Implement semantic search within an unlabelled image dataset.

Test questions for final assessment in this section

  1. What are the approaches to image understanding?
  2. How to cluster a video into scenes and shots?
  3. How does speech-to-text technology work?
  4. How to build audio fingerprints?