Practical Machine Learning and Deep Learning
- Course name: Practical Machine Learning and Deep Learning
- Code discipline: PMLDL-04
- Subject area:
Short Description
This course covers the following concepts: practical aspects of deep learning (DL); practical applications of DL in Natural Language Processing, Computer Vision, and generation.
Prerequisites
Prerequisite subjects
- CSE202 — Analytical Geometry and Linear Algebra I: manifolds (linear algebra/calculus)
- CSE203 — Mathematical Analysis II: basics of optimisation
- CSE201 — Mathematical Analysis I: integration and differentiation
- CSE103 — Theoretical Computer Science: graph theory basics, spectral decomposition
- CSE206 — Probability And Statistics: multivariate normal distribution
- CSE504 — Digital Signal Processing: convolution, cross-correlation
Prerequisite topics
Course Topics
Section | Topics within the section |
---|---|
Review. CNNs and RNNs | Image processing, FFNs, CNNs; Training Deep NNs; RNNs, LSTM, GRU, Embeddings; Bidirectional RNNs; Seq2seq; Encoder-Decoder Networks; Attention; Memory Networks |
Team Data Science Processes | Team Data Science Processes; Team Data Science Roles; Team Data Science Tools (MLFlow, KubeFlow); CRISP-DM; Productionizing ML systems |
VAEs, GANs | Autoencoders; Variational Autoencoders; GANs, DCGAN |
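As a purely illustrative aside (not part of the syllabus), minimal PyTorch definitions of the two model families reviewed in the first section might look as follows; all layer sizes and input shapes are arbitrary assumptions.

```python
# Illustrative only: a small CNN for 1x28x28 images and a GRU-based sequence encoder.
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                      # 16 x 14 x 14
    nn.Flatten(),
    nn.Linear(16 * 14 * 14, 10),          # 10-class logits
)

class SeqEncoder(nn.Module):
    """Embeds integer tokens and encodes them with a bidirectional GRU."""
    def __init__(self, vocab=1000, emb=32, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.rnn = nn.GRU(emb, hidden, batch_first=True, bidirectional=True)

    def forward(self, tokens):            # tokens: (batch, seq_len) int64
        out, _ = self.rnn(self.emb(tokens))
        return out                        # (batch, seq_len, 2 * hidden)

print(cnn(torch.randn(4, 1, 28, 28)).shape)                 # torch.Size([4, 10])
print(SeqEncoder()(torch.randint(0, 1000, (4, 7))).shape)   # torch.Size([4, 7, 128])
```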
Intended Learning Outcomes (ILOs)
What is the main purpose of this course?
The course is about the practical aspects of deep learning. In addition to frontal lectures, flipped classes and student project presentations are organized. During lab sessions the working language is Python. The primary framework for deep learning is PyTorch; usage of TensorFlow and Keras is possible, and usage of Docker is highly appreciated.
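As a rough illustration of the kind of lab code this implies (a minimal sketch with hypothetical toy data, not actual course material), a basic PyTorch training loop might look like this:

```python
# Minimal sketch of a PyTorch training loop on synthetic data (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 20)              # hypothetical toy data: 256 samples, 20 features
y = torch.randint(0, 3, (256,))       # 3 classes

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)       # forward pass
    loss.backward()                   # backpropagation
    optimizer.step()                  # parameter update
    print(f"epoch {epoch}: loss = {loss.item():.4f}")
```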
ILOs defined at three levels
Level 1: What concepts should a student know/remember/explain?
By the end of the course, the students should be able to ...
- apply deep learning methods to effectively solve practical (real-world) problems;
- work in a data science team;
- understand the principles and lifecycle of data science projects.
Level 2: What basic practical skills should a student be able to perform?
By the end of the course, the students should be able to ...
- understand modern deep NN architectures;
- compare modern deep NN architectures;
- create a prototype of a data-driven product.
Level 3: What complex comprehensive skills should a student be able to apply in real-life scenarios?
By the end of the course, the students should be able to ...
- apply techniques for efficient training of deep NNs;
- apply methods for data science team organisation;
- apply deep NNs in NLP and computer vision.
Grading
Course grading range
Grade | Range | Description of performance |
---|---|---|
A. Excellent | 90-100 | - |
B. Good | 75-89 | - |
C. Satisfactory | 60-74 | - |
D. Poor | 0-59 | - |
Course activities and grading breakdown
Activity Type | Percentage of the overall course grade |
---|---|
Labs/seminar classes | 20 |
Interim performance assessment | 30 |
Exams | 50 |
Recommendations for students on how to succeed in the course
Resources, literature and reference materials
Open access resources
- Goodfellow et al. Deep Learning, MIT Press. 2017
- Géron, Aurélien. Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems. 2017.
- Osinga, Douwe. Deep Learning Cookbook: Practical Recipes to Get Started Quickly. O’Reilly Media, 2018.
Closed access resources
Software and tools used within the course
Teaching Methodology: Methods, techniques, & activities
Activities and Teaching Methods
Learning Activities | Section 1 | Section 2 | Section 3 |
---|---|---|---|
Development of individual parts of software product code | 1 | 1 | 1 |
Homework and group projects | 1 | 1 | 1 |
Midterm evaluation | 1 | 1 | 1 |
Testing (written or computer based) | 1 | 1 | 1 |
Discussions | 1 | 1 | 1 |
Formative Assessment and Course Activities
Ongoing performance assessment
Section 1
Activity Type | Content | Is Graded? |
---|---|---|
Question | Suppose you use Batch Gradient Descent and you plot the validation error at every epoch. If you notice that the validation error consistently goes up, what is likely going on? How can you fix this? | 1 |
Question | Is it a good idea to stop Mini-batch Gradient Descent immediately when the validation error goes up? | 1 |
Question | List the optimizers that you know (except SGD) and explain one of them | 1 |
Question | Describe Xavier (or Glorot) initialization. Why do you need it? | 1 |
Question | Name advantages of the ELU activation function over ReLU. | 0 |
Question | Can you name the main innovations in AlexNet, compared to LeNet-5? What about the main innovations in GoogLeNet and ResNet? | 0 |
Question | What is the difference between LSTM and GRU cells? | 0 |
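As a hedged illustration of the Xavier/Glorot initialization asked about above (layer sizes are arbitrary assumptions), it can be applied to the linear layers of a PyTorch model like this:

```python
# Sketch: apply Xavier/Glorot initialization to all Linear layers of a model.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.Tanh(), nn.Linear(64, 10))

def init_xavier(module):
    # Xavier/Glorot scaling keeps activation and gradient variance roughly
    # constant across layers, which helps avoid vanishing/exploding signals.
    if isinstance(module, nn.Linear):
        nn.init.xavier_uniform_(module.weight)
        nn.init.zeros_(module.bias)

model.apply(init_xavier)
print(model[0].weight.std())  # spread is determined by fan_in and fan_out
```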
Section 2
Activity Type | Content | Is Graded? |
---|---|---|
Question | What is CRISP-DM? | 1 |
Question | What is TDSP? | 1 |
Question | How to use MLflow? | 1 |
Question | What is TensorBoard? | 1 |
Question | How to apply Kubeflow in practice? | 1 |
Question | Explain issues in distributed learning of deep NNs. | 0 |
Question | How do you organize your data science project? | 0 |
Question | Recall a checklist for organization of a typical data science project. | 0 |
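A minimal sketch of the MLflow tracking workflow mentioned above, assuming the mlflow package is installed; the experiment name, parameters, and metric values are hypothetical:

```python
# Sketch: log hyperparameters and a per-epoch metric for one MLflow run.
import mlflow

mlflow.set_experiment("pmldl-demo")          # hypothetical experiment name

with mlflow.start_run():
    mlflow.log_param("lr", 1e-3)
    mlflow.log_param("batch_size", 64)
    for epoch, val_loss in enumerate([0.91, 0.64, 0.52]):  # placeholder values
        mlflow.log_metric("val_loss", val_loss, step=epoch)
```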
Section 3
Activity Type | Content | Is Graded? |
---|---|---|
Question | What is an Autoencoder? Can you list the structure and types of Autoencoders? | 1 |
Question | Can you describe ways to train Stacked AEs? | 1 |
Question | What is a Denoising AE? Can you describe what sparsity loss is and why it can be useful? | 1 |
Question | Can you make a distinction between AE and VAE? | 1 |
Question | If an autoencoder perfectly reconstructs the inputs, is it necessarily a good autoencoder? How can you evaluate the performance of an autoencoder? | 0 |
Question | How do you tie weights in a stacked autoencoder? What is the point of doing so? | 0 |
Question | What is the main risk of an overcomplete autoencoder? | 0 |
Question | How is the loss function for a VAE defined? What is the ELBO? | 0 |
Question | Can you list the structure and types of a GAN? | 0 |
Question | How would you train a GAN? | 0 |
Question | How would you estimate the quality of a GAN? | 0 |
Question | Can you describe the cost function of the Discriminator? | 0 |
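To ground the autoencoder questions above, here is a minimal sketch of a plain (deterministic) autoencoder in PyTorch; the 28x28 input shape and layer sizes are assumptions, not course requirements:

```python
# Sketch: a plain autoencoder with an MSE reconstruction loss (illustrative only).
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self, latent=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(),
                                     nn.Linear(128, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(),
                                     nn.Linear(128, 784), nn.Sigmoid())

    def forward(self, x):
        z = self.encoder(x)                         # compressed code
        return self.decoder(z).view(-1, 1, 28, 28)  # reconstruction

x = torch.rand(8, 1, 28, 28)
recon = AE()(x)
print(nn.functional.mse_loss(recon, x).item())      # reconstruction loss
```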
Final assessment
Section 1
- Explain what Teacher Forcing is.
- Why do people use encoder–decoder RNNs rather than plain sequence-to-sequence RNNs for automatic translation?
- How could you combine a convolutional neural network with an RNN to classify videos?
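One common answer to the last question is sketched below under arbitrary assumptions about sizes (and with no training shown): per-frame CNN features are fed into an LSTM, whose final hidden state is classified.

```python
# Sketch: classify videos by running a CNN over frames and an LSTM over time.
import torch
import torch.nn as nn

class VideoClassifier(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.frame_cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())        # one 16-dim vector per frame
        self.rnn = nn.LSTM(16, 32, batch_first=True)
        self.head = nn.Linear(32, n_classes)

    def forward(self, video):                             # video: (batch, time, 3, H, W)
        b, t = video.shape[:2]
        feats = self.frame_cnn(video.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.rnn(feats)                       # h: (1, batch, 32)
        return self.head(h[-1])                           # class logits

print(VideoClassifier()(torch.randn(2, 8, 3, 64, 64)).shape)  # torch.Size([2, 5])
```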
Section 2
- Can you explain what it means for a company to be ML-ready?
- What can a company do to become ML-ready / data-driven?
- Can you list approaches to structuring DS teams? Discuss their advantages and disadvantages.
- Can you list and define typical roles in a DS team?
- What do you think about practical aspects of processes and roles in Data Science projects/teams?
Section 3
- Can you make a distinction between Variational approximation of density and MCMC methods for density estimation?
- What is DCGAN? What is its purpose? What are the main features of DCGAN?
- What is your opinion about Word Embeddings? What types do you know? Why are they useful?
- How would you classify different CNN architectures?
- How would you classify different RNN architectures?
- Explain the attention mechanism. What is self-attention?
- Explain the Transformer architecture. What is BERT?
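As a compact reference for the attention questions above, here is a sketch of scaled dot-product (self-)attention; the tensor sizes are illustrative assumptions:

```python
# Sketch: scaled dot-product attention; self-attention uses q = k = v = x.
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, seq_len, d); scores are scaled by sqrt(d) before softmax.
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5     # (batch, seq_len, seq_len)
    weights = F.softmax(scores, dim=-1)
    return weights @ v                              # weighted sum of values

x = torch.randn(2, 7, 16)
print(scaled_dot_product_attention(x, x, x).shape)  # torch.Size([2, 7, 16])
```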
The retake exam
Section 1
Section 2
Section 3