Difference between revisions of "BSc: Information Retrieval"
R.sirgalina (talk | contribs)
Revision as of 12:19, 12 July 2022
Information Retrieval
- Course name: Information Retrieval
- Code discipline: XYZ
- Subject area: Data Science; Computer systems organization; Information systems; Real-time systems; Information retrieval; World Wide Web
Short Description
This course covers the following concepts: Indexing; Relevance; Ranking; Information retrieval; Query.
Prerequisites
Prerequisite subjects
- CSE204 — Analytic Geometry And Linear Algebra II: matrix multiplication, matrix decomposition (SVD, ALS) and approximation (matrix norm), sparse matrix, stability of solution (decomposition), vector spaces, metric spaces, manifold, eigenvector and eigenvalue.
- CSE113 — Philosophy I - (Discrete Math and Logic): graphs, trees, binary trees, balanced trees, metric (proximity) graphs, diameter, clique, path, shortest path.
- CSE206 — Probability And Statistics: probability, likelihood, conditional probability, Bayesian rule, stochastic matrix and properties. Analysis: DFT, [discrete] gradient.
Prerequisite topics
Course Topics
Section | Topics within the section |
---|---|
Information retrieval basics | Introduction to IR, major concepts; Crawling and Web; Quality assessment. |
Text processing and indexing | Building inverted index for text documents, Boolean retrieval model; Language, tokenization, stemming, searching, scoring; Spellchecking and wildcard search; Suggest and query expansion; Language modelling, Topic modelling. |
Vector model and vector indexing | Vector model; Machine learning for vector embedding; Vector-based index structures. |
Advanced topics. Media processing | Image and video processing, understanding and indexing; Content-based image retrieval; Audio retrieval; Hum to search; Relevance feedback. |
Intended Learning Outcomes (ILOs)
What is the main purpose of this course?
The course is designed to prepare students to understand background theories of information retrieval systems and introduce different information retrieval systems. The course will focus on the evaluation and analysis of such systems as well as how they are implemented. Throughout the course, students will be involved in discussions, readings, and assignments to experience real world systems. The technologies and algorithms covered in this class include machine learning, data mining, natural language processing, data indexing, and so on.
ILOs defined at three levels
Level 1: What concepts should a student know/remember/explain?
By the end of the course, the students should know and be able to explain ...
- Terms and definitions used in the area of information retrieval,
- Search engine and recommender system essential parts,
- Quality metrics of information retrieval systems,
- Contemporary approaches to semantic data analysis,
- Indexing strategies.
Level 2: What basic practical skills should a student be able to perform?
By the end of the course, the students should be able to ...
- Understand background theories behind information retrieval systems,
- Design a recommender system from scratch,
- Evaluate the quality of a particular information retrieval system,
- Explain core ideas of system implementation and maintenance,
- Identify and fix information retrieval system problems.
Level 3: What complex comprehensive skills should a student be able to apply in real-life scenarios?
By the end of the course, the students should be able to ...
- Build a recommender service from scratch,
- Implement a proper index for an unstructured dataset,
- Plan quality measures for a new recommender service,
- Run initial data analysis and problem evaluation for a business task related to information retrieval.
Grading
Course grading range
Grade | Range | Description of performance |
---|---|---|
A. Excellent | 84-100 | - |
B. Good | 72-83 | - |
C. Satisfactory | 60-71 | - |
D. Poor | 0-59 | - |
Course activities and grading breakdown
Activity Type | Percentage of the overall course grade |
---|---|
Labs/seminar classes | 35 |
Interim performance assessment | 70 |
Exams | 0 |
Recommendations for students on how to succeed in the course
Resources, literature and reference materials
Open access resources
- Manning, Raghavan, Schütze, An Introduction to Information Retrieval, 2008, Cambridge University Press
- Baeza-Yates, Ribeiro-Neto, Modern Information Retrieval, 2011, Addison-Wesley
- Buttcher, Clarke, Cormack, Information Retrieval: Implementing and Evaluating Search Engines, 2010, MIT Press
- Course repository on GitHub.
Closed access resources
Software and tools used within the course
Teaching Methodology: Methods, techniques, & activities
Activities and Teaching Methods
Learning Activities | Section 1 | Section 2 | Section 3 | Section 4 |
---|---|---|---|---|
Development of individual parts of software product code | 1 | 1 | 1 | 1 |
Homework and group projects | 1 | 1 | 1 | 1 |
Testing (written or computer based) | 1 | 1 | 1 | 1 |
Formative Assessment and Course Activities
Ongoing performance assessment
Section 1
Activity Type | Content | Is Graded? |
---|---|---|
Question | Enumerate limitations for web crawling. | 1 |
Question | Propose a strategy for A/B testing. | 1 |
Question | Propose recommender quality metric. | 1 |
Question | Implement DCG metric. | 1 |
Question | Discuss relevance metric. | 1 |
Question | Crawl website with respect to robots.txt. | 1 |
Question | What is typical IR system architecture? | 0 |
Question | Show how to parse a dynamic web page. | 0 |
Question | Provide a framework to accept/reject A/B testing results. | 0 |
Question | Compute DCG for an example query for random search engine. | 0 |
Question | Implement a metric for a recommender system. | 0 |
Question | Implement pFound. | 0 |
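Several of the questions above ("Implement DCG metric", "Compute DCG for an example query") reduce to a few lines of code. A minimal sketch of DCG and its normalized variant, assuming relevance grades are given as a list ordered by rank position (note that the log base and the 0- vs 1-based rank convention vary between textbooks):

```python
import math

def dcg(relevances):
    """Discounted Cumulative Gain: graded relevance discounted by log2 of rank."""
    return sum(rel / math.log2(rank + 2)       # rank is 0-based here, hence +2
               for rank, rel in enumerate(relevances))

def ndcg(relevances):
    """DCG normalized by the ideal (descending) ordering; 1.0 = perfect ranking."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0
```

nDCG of a perfectly ordered list is 1.0, which makes it convenient for averaging over queries with different numbers of judged documents.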
Section 2
Activity Type | Content | Is Graded? |
---|---|---|
Question | Build inverted index for a text. | 1 |
Question | Tokenize a text. | 1 |
Question | Implement simple spellchecker. | 1 |
Question | Implement wildcard search. | 1 |
Question | Build inverted index for a set of web pages. | 0 |
Question | Build a distribution of stems/lexemes for a text. | 0 |
Question | Choose and implement case-insensitive index for a given text collection. | 0 |
Question | Choose and implement semantic vector-based index for a given text collection. | 0 |
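The inverted-index and Boolean-retrieval questions above can be illustrated with a toy example. A minimal sketch over a hypothetical three-document corpus (a real index would also normalize tokens, remove stop words, and keep sorted postings lists for fast merging):

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the set of document ids that contain it."""
    index = defaultdict(set)
    for doc_id, text in enumerate(docs):
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def boolean_and(index, *terms):
    """Documents containing all query terms (Boolean retrieval, AND semantics)."""
    postings = [index.get(t, set()) for t in terms]
    return sorted(set.intersection(*postings)) if postings else []

docs = ["the quick brown fox", "the lazy dog", "quick dog"]
index = build_inverted_index(docs)
```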
Section 3
Activity Type | Content | Is Graded? |
---|---|---|
Question | Embed the text with an ML model. | 1 |
Question | Build term-document matrix. | 1 |
Question | Build semantic index for a dataset using Annoy. | 1 |
Question | Build kd-tree index for a given dataset. | 1 |
Question | Why do kd-trees work badly in a 100-dimensional environment? | 1 |
Question | What is the difference between metric space and vector space? | 1 |
Question | Choose and implement persistent index for a given text collection. | 0 |
Question | Visualize a dataset for text classification. | 0 |
Question | Build (H)NSW index for a dataset. | 0 |
Question | Compare HNSW to Annoy index. | 0 |
Question | What metric space index structures do you know? | 0 |
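The "Build term-document matrix" question above needs no libraries at all. A toy sketch of a count-based term-document matrix plus cosine similarity between document columns (the two-document corpus is made up; a real system would use TF-IDF weighting and sparse storage):

```python
import math

def term_document_matrix(docs):
    """Rows are terms (sorted vocabulary), columns are documents; entries are raw counts."""
    vocab = sorted({t for d in docs for t in d.lower().split()})
    matrix = [[d.lower().split().count(term) for d in docs] for term in vocab]
    return vocab, matrix

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

docs = ["apple banana apple", "banana cherry"]
vocab, m = term_document_matrix(docs)
doc_vectors = list(zip(*m))  # column j is the vector for document j
```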
Section 4
Activity Type | Content | Is Graded? |
---|---|---|
Question | Extract semantic information from images. | 1 |
Question | Build an image hash. | 1 |
Question | Build a spectral representation of a song. | 1 |
Question | What is relevance feedback? | 1 |
Question | Build a "search by color" feature. | 0 |
Question | Extract scenes from video. | 0 |
Question | Write a voice-controlled search. | 0 |
Question | Semantic search within unlabelled image dataset. | 0 |
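"Build an image hash" above usually refers to a perceptual hash. A dependency-free sketch of the average-hash (aHash) idea, assuming the image is already downscaled to a small grayscale grid given as a 2D list (real implementations resize to 8×8 first, e.g. with Pillow):

```python
def average_hash(pixels):
    """Bit string: 1 where a pixel is above mean brightness, else 0.

    `pixels` is a 2D list of grayscale values, assumed already downscaled.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(h1, h2):
    """Number of differing bits; a small distance means perceptually similar images."""
    return sum(a != b for a, b in zip(h1, h2))
```

Near-duplicate images then map to hashes with a small Hamming distance, which is what makes such hashes usable as index keys for content-based retrieval.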
Final assessment
Section 1
- Implement a text crawler for a news site.
- What is SBS (side-by-side) and how is it used in search engines?
- Compare pFound with CTR and with DCG.
- Explain how A/B testing works.
- Describe the PageRank algorithm.
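For the PageRank question above, the core is a power iteration over the link graph. A minimal sketch on a made-up three-page graph (production implementations use sparse matrices and a convergence test instead of a fixed iteration count):

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power iteration for PageRank.

    `links` maps each node to the list of nodes it links to.
    Returns a dict of scores summing to ~1.
    """
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iterations):
        new = {u: (1.0 - damping) / n for u in nodes}
        for u, outs in links.items():
            if not outs:  # dangling node: spread its rank uniformly
                for v in nodes:
                    new[v] += damping * rank[u] / n
            else:
                for v in outs:
                    new[v] += damping * rank[u] / len(outs)
        rank = new
    return rank

graph = {"a": ["b"], "b": ["a", "c"], "c": ["a"]}
scores = pagerank(graph)
```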
Section 2
- Explain how (and why) KD-trees work.
- What are the weak spots of an inverted index?
- Compare different text vectorization approaches.
- Compare tolerant retrieval to spellchecking.
Section 3
- Compare inverted index to HNSW in terms of speed and memory consumption.
- Choose the best index for a given dataset.
- Implement range search in KD-tree.
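"Implement range search in KD-tree" fits in a short sketch. A minimal 2D kd-tree with axis-aligned range search, points given as tuples (a production index would avoid re-sorting at every level and support arbitrary dimensionality):

```python
def build_kdtree(points, depth=0):
    """Recursively split points on alternating axes (x, y, x, ...) at the median."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def range_search(node, lo, hi, depth=0):
    """All points p with lo[k] <= p[k] <= hi[k] on both axes."""
    if node is None:
        return []
    axis = depth % 2
    p = node["point"]
    found = [p] if all(lo[k] <= p[k] <= hi[k] for k in (0, 1)) else []
    # Only descend into subtrees that can intersect the query box.
    if lo[axis] <= p[axis]:
        found += range_search(node["left"], lo, hi, depth + 1)
    if p[axis] <= hi[axis]:
        found += range_search(node["right"], lo, hi, depth + 1)
    return found

tree = build_kdtree([(1, 2), (3, 4), (5, 1), (2, 7)])
```

The splitting-plane test is also why kd-trees degrade in high dimensions: almost every plane intersects the query region, so both subtrees must be visited.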
Section 4
- What are the approaches to image understanding?
- How to cluster a video into scenes and shots?
- How does speech-to-text technology work?
- How to build audio fingerprints?
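For "How to build audio fingerprints?", the classic approach (e.g. Shazam-style) hashes prominent spectrogram peaks. A toy sketch that fingerprints a signal by the dominant DFT bin of each frame (pure-Python O(n²) DFT for clarity; real systems use FFTs, overlapping windows, and pairs of peaks):

```python
import cmath

def dominant_bin(frame):
    """Index of the strongest DFT frequency bin (ignoring the DC bin)."""
    n = len(frame)
    mags = [abs(sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i, x in enumerate(frame)))
            for k in range(n // 2)]
    return max(range(1, len(mags)), key=mags.__getitem__)

def fingerprint(signal, frame_size=8):
    """Sequence of per-frame dominant bins: a crude, compact signature."""
    return tuple(dominant_bin(signal[i:i + frame_size])
                 for i in range(0, len(signal) - frame_size + 1, frame_size))
```

Two recordings of the same tune then share long runs of equal bins, which an index can match even when absolute volume differs.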
The retake exam
Section 1
Section 2
Section 3
Section 4