Latest revision as of 12:46, 16 September 2023
Information Retrieval
- Course name: Information Retrieval
- Code discipline: CSE306
- Subject area: Data Science; Computer systems organization; Information systems; Real-time systems; Information retrieval; World Wide Web; Recommender systems
Short Description
The course gives an introduction to practical and theoretical aspects of information search and recommender systems. This course covers the following concepts: Indexing; Search quality assessment; Relevance; Ranking; Information retrieval; Query; Recommendations; Multimedia retrieval.
Prerequisites
Prerequisite subjects
- CSE204 — Analytic Geometry And Linear Algebra II: matrix multiplication, matrix decomposition (SVD, ALS) and approximation (matrix norm), sparse matrix, stability of solution (decomposition), vector spaces, metric spaces, manifold, eigenvector and eigenvalue.
- CSE113 — Philosophy I - (Discrete Math and Logic): graphs, trees, binary trees, balanced trees, metric (proximity) graphs, diameter, clique, path, shortest path.
- CSE206 — Probability And Statistics: probability, likelihood, conditional probability, Bayesian rule, stochastic matrix and properties. Analysis: DFT, [discrete] gradient.
Prerequisite topics
Course Topics
Section | Topics within the section |
---|---|
Information retrieval basics | 1. Introduction to IR, major concepts. 2. Crawling and Web. 3. Quality assessment. |
Text processing and indexing | 1. Building inverted index for text documents. Boolean retrieval model. 2. Language, tokenization, stemming, searching, scoring. 3. Spellchecking and wildcard search. 4. Suggest and query expansion. 5. Language modelling. Topic modelling. |
Vector model and vector indexing | 1. Vector model. 2. Machine learning for vector embedding. 3. Vector-based index structures. |
Advanced topics. Media processing | 1. Image and video processing, understanding and indexing. 2. Content-based image retrieval. 3. Audio retrieval. 4. Relevance feedback. |
Intended Learning Outcomes (ILOs)
What is the main purpose of this course?
The course is designed to prepare students to understand the background theories of information retrieval systems and to introduce different kinds of such systems. The course focuses on the evaluation and analysis of these systems as well as on how they are implemented. Throughout the course, students will be involved in discussions, readings, and assignments that expose them to real-world systems. The technologies and algorithms covered in this class include machine learning, data mining, natural language processing, and data indexing.
ILOs defined at three levels
Level 1: What concepts should a student know/remember/explain?
By the end of the course, the students should know and be able to explain ...
- Terms and definitions used in area of information retrieval,
- Search engine and recommender system essential parts,
- Quality metrics of information retrieval systems,
- Contemporary approaches to semantic data analysis,
- Indexing strategies.
Level 2: What basic practical skills should a student be able to perform?
By the end of the course, the students should be able to ...
- Understand background theories behind information retrieval systems,
- Design a recommender system from scratch,
- Evaluate the quality of a particular information retrieval system,
- Explain core ideas of system implementation and maintenance,
- Identify and fix information retrieval system problems.
Level 3: What complex comprehensive skills should a student be able to apply in real-life scenarios?
By the end of the course, the students should be able to ...
- Build a recommender service from scratch,
- Implement a proper index for an unstructured dataset,
- Plan quality measures for a new recommender service,
- Run initial data analysis and problem evaluation for a business task related to information retrieval.
Grading
Course grading range
Grade | Range | Description of performance |
---|---|---|
A. Excellent | 90-100 | - |
B. Good | 75-89 | - |
C. Satisfactory | 60-74 | - |
D. Poor | 0-59 | - |
Course activities and grading breakdown
Activity Type | Percentage of the overall course grade |
---|---|
Assignments | 60 |
Quizzes | 40 |
Exams | 0 |
Recommendations for students on how to succeed in the course
The simplest way to succeed is to participate in the labs and submit the coding assignments on time; this guarantees up to 60% of the grade. Participation in lecture quizzes provides the remaining 40% and differentiates the final grade.
Resources, literature and reference materials
Open access resources
- Manning, Raghavan, Schütze, An Introduction to Information Retrieval, 2008, Cambridge University Press
- Baeza-Yates, Ribeiro-Neto, Modern Information Retrieval, 2011, Addison-Wesley
- Buttcher, Clarke, Cormack, Information Retrieval: Implementing and Evaluating Search Engines, 2010, MIT Press
- Course repository on GitHub.
Closed access resources
Software and tools used within the course
Teaching Methodology: Methods, techniques, & activities
Activities and Teaching Methods
Learning Activities | Section 1 | Section 2 | Section 3 | Section 4 |
---|---|---|---|---|
Development of individual parts of software product code | 1 | 1 | 1 | 1 |
Homework and group projects | 1 | 1 | 1 | 1 |
Testing (written or computer based) | 1 | 1 | 1 | 1 |
Formative Assessment and Course Activities
Ongoing performance assessment
Section 1
Activity Type | Content | Is Graded? |
---|---|---|
Question | Enumerate limitations for web crawling. | 1 |
Question | Propose a strategy for A/B testing. | 1 |
Question | Propose recommender quality metric. | 1 |
Question | Implement DCG metric. | 1 |
Question | Discuss relevance metric. | 1 |
Question | Crawl website with respect to robots.txt. | 1 |
Question | What is typical IR system architecture? | 0 |
Question | Show how to parse a dynamic web page. | 0 |
Question | Provide a framework to accept/reject A/B testing results. | 0 |
Question | Compute DCG for an example query for random search engine. | 0 |
Question | Implement a metric for a recommender system. | 0 |
Question | Implement pFound. | 0 |
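Several of the questions above ask students to implement ranking-quality metrics such as DCG. A minimal sketch in Python (the relevance grades in the example are made up):

```python
import math

def dcg(relevances):
    """Discounted Cumulative Gain: the rank-1 item is undiscounted,
    deeper items are divided by log2(rank + 1)."""
    return sum(rel / math.log2(rank + 1)
               for rank, rel in enumerate(relevances, start=1))

def ndcg(relevances):
    """DCG normalized by the ideal (descending) ordering."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Example: graded relevance of the top-4 results for one query.
print(round(dcg([3, 2, 3, 0]), 3))   # 5.762
print(round(ndcg([3, 2, 3, 0]), 3))  # 0.978
```

Swapping results 2 and 3 in this example would raise both values, which is exactly the behavior the metric is meant to reward.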
Section 2
Activity Type | Content | Is Graded? |
---|---|---|
Question | Build inverted index for a text. | 1 |
Question | Tokenize a text. | 1 |
Question | Implement simple spellchecker. | 1 |
Question | Implement wildcard search. | 1 |
Question | Build inverted index for a set of web pages. | 0 |
Question | Build a distribution of stems/lexemes for a text. | 0 |
Question | Choose and implement case-insensitive index for a given text collection. | 0 |
Question | Choose and implement semantic vector-based index for a given text collection. | 0 |
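The indexing questions above can be sketched in a few lines of Python; the tokenizer here is a deliberately naive lowercase/alphanumeric split, used only for illustration:

```python
import re
from collections import defaultdict

def tokenize(text):
    """Naive tokenizer: lowercase, keep alphanumeric runs."""
    return re.findall(r"[a-z0-9]+", text.lower())

def build_inverted_index(docs):
    """Map each term to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in enumerate(docs):
        for term in tokenize(text):
            index[term].add(doc_id)
    return index

def boolean_and(index, query):
    """Boolean retrieval: documents containing ALL query terms."""
    postings = [index.get(term, set()) for term in tokenize(query)]
    return sorted(set.intersection(*postings)) if postings else []

docs = ["Information retrieval is search",
        "Recommender systems rank items",
        "Search engines use an inverted index"]
index = build_inverted_index(docs)
print(boolean_and(index, "search"))          # [0, 2]
print(boolean_and(index, "inverted index"))  # [2]
```

Production systems keep postings sorted and intersect them with merge or skip-pointer algorithms instead of Python sets; the set intersection above is a stand-in for that step.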
Section 3
Activity Type | Content | Is Graded? |
---|---|---|
Question | Embed the text with an ML model. | 1 |
Question | Build term-document matrix. | 1 |
Question | Build semantic index for a dataset using Annoy. | 1 |
Question | Build kd-tree index for a given dataset. | 1 |
Question | Why do kd-trees work badly in a 100-dimensional environment? | 1 |
Question | What is the difference between metric space and vector space? | 1 |
Question | Choose and implement persistent index for a given text collection. | 0 |
Question | Visualize a dataset for text classification. | 0 |
Question | Build (H)NSW index for a dataset. | 0 |
Question | Compare HNSW to Annoy index. | 0 |
Question | What are metric space index structures you know? | 0 |
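The term-document-matrix and vector-model questions above can be illustrated with raw term counts and cosine similarity (plain TF weighting with no IDF, a simplifying assumption):

```python
import math
from collections import Counter

def term_document_matrix(docs):
    """Rows = terms (sorted vocabulary), columns = documents, raw counts."""
    counts = [Counter(doc.lower().split()) for doc in docs]
    vocab = sorted(set().union(*counts))
    return vocab, [[c[t] for c in counts] for t in vocab]

def cosine(u, v):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

docs = ["web search engine", "vector search index", "cooking recipes"]
vocab, matrix = term_document_matrix(docs)
cols = list(zip(*matrix))          # document vectors are matrix columns
print(cosine(cols[0], cols[1]))    # share the term "search", so > 0
print(cosine(cols[0], cols[2]))    # no shared terms: 0.0
```

Semantic indexes such as Annoy or HNSW, mentioned in the questions, store dense learned embeddings instead of these sparse count vectors, but the similarity computation is the same idea.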
Section 4
Activity Type | Content | Is Graded? |
---|---|---|
Question | Extract semantic information from images. | 1 |
Question | Build an image hash. | 1 |
Question | Build a spectral representation of a song. | 1 |
Question | What is relevance feedback? | 1 |
Question | Build a "search by color" feature. | 0 |
Question | Extract scenes from video. | 0 |
Question | Write a voice-controlled search. | 0 |
Question | Semantic search within unlabelled image dataset. | 0 |
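The "build an image hash" task can be illustrated with a pure-Python average hash (aHash). Real pipelines first downscale the image to something like 8x8 grayscale, e.g. with Pillow; here tiny hand-written matrices stand in for that step:

```python
def average_hash(gray):
    """aHash: emit a 1 bit where the pixel is above the mean intensity.
    `gray` is a small grayscale matrix with values in 0..255."""
    pixels = [p for row in gray for p in row]
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

def hamming(h1, h2):
    """Number of differing bits; small distance = near-duplicate images."""
    return sum(a != b for a, b in zip(h1, h2))

bright_top = [[200, 200], [10, 10]]
almost_same = [[190, 210], [5, 20]]
inverted = [[10, 10], [200, 200]]
h1, h2, h3 = (average_hash(m) for m in (bright_top, almost_same, inverted))
print(hamming(h1, h2))  # 0: same structure survives small pixel changes
print(hamming(h1, h3))  # 4: inverted image, every bit differs
```

This robustness to small pixel-level changes is what makes such hashes useful for near-duplicate detection in content-based image retrieval.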
Final assessment
Section 1
- Implement text crawler for a news site.
- What is SBS (side-by-side) and how is it used in search engines?
- Compare pFound with CTR and with DCG.
- Explain how A/B testing works.
- Describe PageRank algorithm.
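The PageRank question can be answered with a short power-iteration sketch; the toy link graph and the damping factor d = 0.85 (the conventional choice) are illustrative assumptions:

```python
def pagerank(links, d=0.85, iters=50):
    """Power iteration on a link graph {page: [pages it links to]}."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}
        for p, outs in links.items():
            if not outs:  # dangling page: spread its rank everywhere
                for q in pages:
                    new[q] += d * rank[p] / n
            else:
                for q in outs:
                    new[q] += d * rank[p] / len(outs)
        rank = new
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # "c": it collects the most inbound weight
```

Page "c" wins because it receives all of "b"'s rank plus half of "a"'s, while the ranks always sum to 1.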
Section 2
- Explain how (and why) KD-trees work.
- What are the weak points of an inverted index?
- Compare different text vectorization approaches.
- Compare tolerant retrieval to spellchecking.
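Spellchecking, compared above with tolerant retrieval, is commonly built on edit distance. A minimal dynamic-programming sketch with a made-up vocabulary:

```python
def levenshtein(a, b):
    """Minimum number of insertions, deletions and substitutions
    turning string a into string b (two-row dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def correct(word, vocabulary):
    """Suggest the closest dictionary word."""
    return min(vocabulary, key=lambda w: levenshtein(word, w))

vocab = ["retrieval", "relevance", "ranking", "indexing"]
print(correct("retreival", vocab))        # retrieval
print(levenshtein("kitten", "sitting"))   # 3
```

Tolerant retrieval, by contrast, builds the fuzziness into the index itself (k-gram or permuterm indexes), so candidates are found without scanning the whole vocabulary as this brute-force `correct` does.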
Section 3
- Compare inverted index to HNSW in terms of speed and memory consumption.
- Choose the best index for a given dataset.
- Implement range search in KD-tree.
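Range search in a KD-tree, one of the tasks above, can be sketched for 2-D points as follows (the point set is illustrative):

```python
def build_kdtree(points, depth=0):
    """Recursively split points on alternating axes (2-D)."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def range_search(node, lo, hi, out=None):
    """Collect all points inside the rectangle [lo, hi] (inclusive),
    pruning subtrees that cannot intersect it."""
    if out is None:
        out = []
    if node is None:
        return out
    p, ax = node["point"], node["axis"]
    if all(lo[i] <= p[i] <= hi[i] for i in range(2)):
        out.append(p)
    if lo[ax] <= p[ax]:   # rectangle reaches into the left half-plane
        range_search(node["left"], lo, hi, out)
    if p[ax] <= hi[ax]:   # rectangle reaches into the right half-plane
        range_search(node["right"], lo, hi, out)
    return out

pts = [(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)]
tree = build_kdtree(pts)
print(sorted(range_search(tree, (3, 0), (8, 5))))  # [(5, 4), (7, 2), (8, 1)]
```

The pruning tests are what make the structure useful: whole subtrees are skipped whenever the splitting coordinate already places them outside the query rectangle.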
Section 4
- What are the approaches to image understanding?
- How to cluster a video into scenes and shots?
- How does speech-to-text technology work?
- How to build audio fingerprints?
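Audio fingerprinting can be illustrated with a toy pure-Python sketch: take the dominant DFT bin of each fixed-size frame. Real systems (Shazam-style matchers) hash constellations of spectral peaks instead; the frame size and the test tone here are assumptions made for the example:

```python
import cmath
import math

def dft_magnitudes(frame):
    """Naive DFT magnitude spectrum of one audio frame (O(n^2))."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def fingerprint(samples, frame_size=8):
    """Toy fingerprint: index of the loudest frequency bin per frame."""
    fp = []
    for start in range(0, len(samples) - frame_size + 1, frame_size):
        mags = dft_magnitudes(samples[start:start + frame_size])
        fp.append(max(range(len(mags)), key=mags.__getitem__))
    return fp

# A pure tone at 2 cycles per frame: its dominant bin repeats in every frame.
tone = [math.sin(2 * math.pi * 2 * t / 8) for t in range(32)]
print(fingerprint(tone))  # [2, 2, 2, 2]
```

Because the fingerprint depends only on where the spectral energy sits, it stays stable under volume changes and mild noise, which is the property audio search relies on.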
The retake exam
Section 1
- Solve a complex coding problem similar to one of the homework or lab problems.
Section 2
- Solve a complex coding problem similar to one of the homework or lab problems.
Section 3
- Solve a complex coding problem similar to one of the homework or lab problems.
Section 4
- Solve a complex coding problem similar to one of the homework or lab problems.