Revision as of 15:28, 8 April 2022
Advanced Information Retrieval
- Course name: Advanced Information Retrieval
- Course number: N/A
Course Characteristics
What subject area does your course (discipline) belong to?
Computer systems organization; Information systems; Real-time systems; Information retrieval; World Wide Web
Key concepts of the class
- Data indexing
- Recommendations
- Relevance and ranking
What is the purpose of this course?
The course is designed to prepare students to understand and use contemporary tools of information retrieval systems. Students who will later dedicate their engineering or scientific careers to the implementation of search engines, social networks, recommender systems, and other content services will obtain the knowledge and skills needed to design and implement essential parts of such systems.
Course objectives based on Bloom’s taxonomy
- What should a student remember at the end of the course?
By the end of the course, the students should be able to remember and recognize
- Terms and definitions used in the area of information retrieval,
- The essential components of search engines and recommender systems,
- Quality metrics of information retrieval systems,
- Contemporary approaches to semantic data analysis,
- Indexing strategies.
- What should a student be able to understand at the end of the course?
By the end of the course, the students should be able to describe and explain
- How to design a recommender system from scratch,
- How to evaluate the quality of a particular information retrieval system,
- The core ideas behind system implementation and maintenance,
- How to identify and fix problems in information retrieval systems.
- What should a student be able to apply at the end of the course?
By the end of the course, the students should be able to
- Build a recommender service from scratch,
- Implement a proper index for an unstructured dataset,
- Plan quality measures for a new recommender service,
- Run initial data analysis and problem evaluation for a business task related to information retrieval.
Course evaluation
Activity | Proposed points | Points in this course |
---|---|---|
Labs/seminar classes | 20 | 30 |
Interim performance assessment | 30 | 0 |
Assessments (homework) | 0 | 70 |
Exams | 50 | 0 |
7 home tasks are worth up to 70 points in total (10 points each). 7 contest labs can bring you up to 5 points each: working in teams of up to 3, you get +2 points for each successful completion and +3 points for each submission that places in the top 3.
Exam and retake planning
Exam
No exam.
Retake 1
The first retake is conducted in the form of a project defense. The student is given a week to prepare. The student takes any technical paper published in the Information Retrieval Journal (https://www.springer.com/journal/10791) during the last 3 years and agrees on it with the professor by the next day, to avoid collisions and misunderstandings. The student then implements the paper's contribution (this can be a technique, a metric, ...) in a search engine. On the retake day, the student presents the paper. The presentation is followed by a Q&A session, after which the student presents the implementation of the paper. Grading criteria are as follows:
- 30% – paper presentation is clear, discussion of results is full.
- 30% – search engine implementation is correct and clear, well-structured, and implemented as a separate service.
- 30% – paper implementation is correct.
Retake 2
The second retake is conducted in front of a committee. Four (4) questions are randomly selected for the student: two (2) theoretical questions from "Test questions for final assessment in this section" and two (2) practical questions from "Typical questions for ongoing performance evaluation". Each question is worth 25% of the grade. The student is given 15 minutes to prepare the theoretical questions and then answers in front of the committee. After this, the student is given an additional 40 minutes to solve the practical questions.
Grades range
Grade | Proposed range | Range in this course |
---|---|---|
A. Excellent | 90-100 | 84-100 |
B. Good | 75-89 | 70-83 |
C. Satisfactory | 60-74 | 60-69 |
D. Poor | 0-59 | 0-59 |
Resources and reference material
Main textbook:
- "An Introduction to Information Retrieval" by Christopher D. Manning, Prabhakar Raghavan and Hinrich Schütze, Cambridge University Press (any edition)
Other reference material:
- "Natural Language Processing with Python – Analyzing Text with the Natural Language Toolkit" by Steven Bird, Ewan Klein, and Edward Loper. [link]
Course Sections
The main sections of the course and the approximate distribution of hours between them are as follows:
Section | Section Title | Teaching Hours |
---|---|---|
1 | Introduction. Crawling and quality basics | 16 |
2 | Text indexing and language processing | 20 |
3 | Advanced index data structures | 8 |
4 | Advanced retrieval topics. Media retrieval | 16 |
Section 1
Section title:
Introduction. Crawling and quality basics
Topics covered in this section:
- Introduction to information retrieval
- Crawling
- Quality assessment
What forms of evaluation were used to test students’ performance in this section?
Form of evaluation | Yes/No |
---|---|
Development of individual parts of software product code | Yes |
Homework and group projects | Yes |
Midterm evaluation | No |
Testing (written or computer based) | No |
Reports | No |
Essays | No |
Oral polls | No |
Discussions | No |
Typical questions for ongoing performance evaluation within this section
- Enumerate the limitations of web crawling.
- Propose a strategy for A/B testing.
- Propose a recommender quality metric.
- Implement the DCG metric.
- Discuss a relevance metric.
- Crawl a website with respect to robots.txt.
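The DCG question above can be answered with a few lines of code. A minimal sketch of Discounted Cumulative Gain and its normalized variant, using the common formulation DCG@k = Σ rel_i / log2(i + 1) over 1-based ranks (function names are illustrative):

```python
import math

def dcg(relevances, k=None):
    """Discounted Cumulative Gain: sum of rel_i / log2(i + 1), ranks are 1-based."""
    if k is not None:
        relevances = relevances[:k]
    return sum(rel / math.log2(i + 1) for i, rel in enumerate(relevances, start=1))

def ndcg(relevances, k=None):
    """Normalized DCG: DCG divided by the DCG of the ideal (sorted) ranking."""
    ideal = dcg(sorted(relevances, reverse=True), k)
    return dcg(relevances, k) / ideal if ideal > 0 else 0.0
```

Note that some systems use the alternative gain 2^rel − 1 in the numerator; the logarithmic discount is the same in both variants.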
Typical questions for seminar classes (labs) within this section
- Show how to parse a dynamic web page.
- Provide a framework to accept/reject A/B testing results.
- Compute DCG for an example query on a search engine of your choice.
- Implement a metric for a recommender system.
- Implement pFound.
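For the pFound lab, a minimal sketch of the cascade click model behind the metric, assuming the standard formulation (the user scans results top-down, stops when satisfied with probability pRel, or abandons the session with probability pBreak, commonly 0.15):

```python
def pfound(p_rels, p_break=0.15):
    """pFound cascade model: p_look starts at 1 for the top result and
    decays as the user either finds an answer (p_rel) or gives up (p_break)."""
    p_look = 1.0
    total = 0.0
    for p_rel in p_rels:
        total += p_look * p_rel           # probability the answer is found here
        p_look *= (1 - p_rel) * (1 - p_break)  # probability the user keeps scanning
    return total
```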
Test questions for final assessment in this section
- Implement a text crawler for a news site.
- What is SBS (side-by-side) and how is it used in search engines?
- Compare pFound with CTR and with DCG.
- Explain how A/B testing works.
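One common way to accept or reject A/B testing results is a two-proportion z-test on click-through rates. A minimal sketch, assuming CTR is the target metric and the normal approximation holds (the function name and signature are illustrative):

```python
import math

def ab_test_ztest(clicks_a, views_a, clicks_b, views_b, alpha=0.05):
    """Two-proportion z-test on CTR; returns (z, p_value, significant)."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value, p_value < alpha
```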
Section 2
Section title:
Text indexing and language processing
Topics covered in this section:
- Building inverted index for text documents. Boolean retrieval model.
- Language, tokenization, stemming, searching, scoring.
- Spellchecking.
- Language model. Topic model.
- Vector model for texts.
- ML for text embedding.
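The first two topics above (inverted index, Boolean retrieval) can be sketched in a few lines. A minimal in-memory version, assuming whitespace-and-punctuation tokenization with lowercasing (function names are illustrative):

```python
import re
from collections import defaultdict

def tokenize(text):
    """Lowercase and split on non-alphanumeric characters."""
    return re.findall(r"[a-z0-9]+", text.lower())

def build_inverted_index(docs):
    """Map each term to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in enumerate(docs):
        for term in tokenize(text):
            index[term].add(doc_id)
    return index

def boolean_and(index, *terms):
    """Boolean AND query: documents containing all of the terms."""
    postings = [index.get(t, set()) for t in terms]
    return set.intersection(*postings) if postings else set()
```

A production index would keep sorted postings lists (to intersect in linear time) and store them on disk; this sketch only illustrates the data structure.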
What forms of evaluation were used to test students’ performance in this section?
Form of evaluation | Yes/No |
---|---|
Development of individual parts of software product code | Yes |
Homework and group projects | Yes |
Midterm evaluation | No |
Testing (written or computer based) | No |
Reports | No |
Essays | No |
Oral polls | No |
Discussions | No |
Typical questions for ongoing performance evaluation within this section
- Build an inverted index for a text.
- Tokenize a text.
- Implement a simple spellchecker.
- Embed the text with a model.
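The spellchecker question above is commonly approached through edit distance. A minimal sketch: Levenshtein distance by dynamic programming, plus a corrector that picks the closest vocabulary word (a toy baseline; real spellcheckers also use word frequencies and candidate generation):

```python
def levenshtein(a, b):
    """Edit distance (insertions, deletions, substitutions) via DP over one row."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def correct(word, vocabulary):
    """Return the vocabulary word with the smallest edit distance to `word`."""
    return min(vocabulary, key=lambda w: levenshtein(word, w))
```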
Typical questions for seminar classes (labs) within this section
- Build an inverted index for a set of web pages.
- Build a distribution of stems/lexemes for a text.
- Choose and implement a persistent index for a given text collection.
- Visualize a dataset for text classification.
Test questions for final assessment in this section
- Explain how (and why) KD-trees work.
- What are the weak points of an inverted index?
- Compare different text vectorization approaches.
- Compare tolerant retrieval to spellchecking.
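For the vectorization question above, the classic baseline to compare neural embeddings against is TF-IDF with cosine similarity. A minimal sketch using sparse dicts and raw term frequency (one of several common TF-IDF weighting schemes):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Term frequency weighted by inverse document frequency, idf = log(N / df)."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    df = Counter(term for doc in tokenized for term in set(doc))
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: c * idf[t] for t, c in Counter(doc).items()} for doc in tokenized]

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0
```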
Section 3
Section title:
Advanced index data structures
Topics covered in this section:
- Vector-based tree data structures.
- Graph-based data structures. Inverted index and multi-index.
What forms of evaluation were used to test students’ performance in this section?
Form of evaluation | Yes/No |
---|---|
Development of individual parts of software product code | Yes |
Homework and group projects | Yes |
Midterm evaluation | No |
Testing (written or computer based) | No |
Reports | No |
Essays | No |
Oral polls | No |
Discussions | No |
Typical questions for ongoing performance evaluation within this section
- Build a kd-tree index for a given dataset.
- Why do kd-trees perform poorly in a 100-dimensional space?
- What is the difference between a metric space and a vector space?
Typical questions for seminar classes (labs) within this section
- Build an (H)NSW index for a dataset.
- Compare HNSW to the Annoy index.
- Which metric-space index structures do you know?
Test questions for final assessment in this section
- Compare an inverted index to HNSW in terms of speed and memory consumption.
- Choose the best index for a given dataset.
- Implement range search in a KD-tree.
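The KD-tree range-search question above can be sketched as follows: build the tree with median splits on alternating axes, then prune subtrees whose side of the splitting plane cannot intersect the query box (a didactic version; a real implementation would avoid re-sorting at every level):

```python
def build_kdtree(points, depth=0):
    """Recursively split points on alternating axes using the median."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def range_search(node, lo, hi, out=None):
    """Collect all points p with lo[d] <= p[d] <= hi[d] in every dimension d."""
    if out is None:
        out = []
    if node is None:
        return out
    p, axis = node["point"], node["axis"]
    if all(lo[d] <= p[d] <= hi[d] for d in range(len(p))):
        out.append(p)
    if lo[axis] <= p[axis]:   # the box may overlap the left subtree
        range_search(node["left"], lo, hi, out)
    if p[axis] <= hi[axis]:   # ... or the right subtree
        range_search(node["right"], lo, hi, out)
    return out
```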
Section 4
Section title:
Advanced retrieval topics. Media retrieval
Topics covered in this section:
- Image and video processing
- Image understanding
- Video understanding
- Audio processing
- Speech-to-text
- Relevance feedback
- PageRank
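The PageRank topic above boils down to a power iteration on PR(p) = (1 − d)/N + d · Σ PR(q)/outdeg(q) over pages q linking to p. A minimal sketch over an adjacency-dict graph, with dangling nodes spreading their rank uniformly:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power iteration on the PageRank equation; `links` maps node -> outlinks."""
    nodes = list(links)
    n = len(nodes)
    rank = {p: 1.0 / n for p in nodes}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in nodes}
        for q, outs in links.items():
            if outs:
                share = damping * rank[q] / len(outs)
                for p in outs:
                    new[p] += share
            else:  # dangling node: spread its rank over all nodes
                for p in nodes:
                    new[p] += damping * rank[q] / n
        rank = new
    return rank
```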
What forms of evaluation were used to test students’ performance in this section?
Form of evaluation | Yes/No |
---|---|
Development of individual parts of software product code | Yes |
Homework and group projects | Yes |
Midterm evaluation | No |
Testing (written or computer based) | No |
Reports | No |
Essays | No |
Oral polls | No |
Discussions | No |
Typical questions for ongoing performance evaluation within this section
- Extract semantic information from images.
- Build an image hash.
- Build a spectral representation of a song.
- What is relevance feedback?
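The image-hash question above is often answered with an average hash (aHash): downscale the image, then emit one bit per cell depending on whether it is brighter than the mean; near-duplicate images then have a small Hamming distance between hashes. A minimal sketch operating on a 2-D list of grayscale values, assuming the image is at least hash_size pixels on each side:

```python
def average_hash(gray, hash_size=8):
    """aHash: block-average down to hash_size x hash_size, then threshold
    each cell against the overall mean brightness (1 = brighter than mean)."""
    h, w = len(gray), len(gray[0])
    cells = []
    for i in range(hash_size):
        for j in range(hash_size):
            rows = range(i * h // hash_size, (i + 1) * h // hash_size)
            cols = range(j * w // hash_size, (j + 1) * w // hash_size)
            block = [gray[r][c] for r in rows for c in cols]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return [1 if v > mean else 0 for v in cells]

def hamming(h1, h2):
    """Similar images yield hashes with a small Hamming distance."""
    return sum(a != b for a, b in zip(h1, h2))
```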
Typical questions for seminar classes (labs) within this section
- Build a "search by color" feature.
- Extract scenes from a video.
- Write a voice-controlled search.
- Implement semantic search within an unlabelled image dataset.
Test questions for final assessment in this section
- What are the approaches to image understanding?
- How can a video be clustered into scenes and shots?
- How does speech-to-text technology work?
- How are audio fingerprints built?