Latest revision as of 12:58, 12 July 2022

Practical Machine Learning and Deep Learning

  • Course name: Practical Machine Learning and Deep Learning
  • Code discipline: PMLDL-04
  • Subject area:

Short Description

This course covers the following concepts: practical aspects of deep learning (DL); practical applications of DL in Natural Language Processing, Computer Vision, and generative modelling.

Prerequisites

Prerequisite subjects

  • CSE202 — Analytical Geometry and Linear Algebra I: linear algebra/calculus, manifolds
  • CSE203 — Mathematical Analysis II: basics of optimisation
  • CSE201 — Mathematical Analysis I: integration and differentiation
  • CSE103 — Theoretical Computer Science: graph theory basics, spectral decomposition
  • CSE206 — Probability And Statistics: multivariate normal distribution
  • CSE504 — Digital Signal Processing: convolution, cross-correlation

Prerequisite topics

Course Topics

Course Sections and Topics

Section 1 — Review. CNNs and RNNs
  1. Image processing, FFNs, CNNs
  2. Training Deep NNs
  3. RNNs, LSTM, GRU, Embeddings
  4. Bidirectional RNNs
  5. Seq2seq
  6. Encoder-Decoder Networks
  7. Attention
  8. Memory Networks

Section 2 — Team Data Science Processes
  1. Team Data Science Processes
  2. Team Data Science Roles
  3. Team Data Science Tools (MLflow, Kubeflow)
  4. CRISP-DM
  5. Productionizing ML systems

Section 3 — VAEs, GANs
  1. Autoencoders
  2. Variational Autoencoders
  3. GANs, DCGAN

Intended Learning Outcomes (ILOs)

What is the main purpose of this course?

The course is about the practical aspects of deep learning. In addition to frontal lectures, flipped classes and student project presentations will be organized. During lab sessions the working language is Python, and the primary deep learning framework is PyTorch; TensorFlow and Keras may also be used, and the use of Docker is highly encouraged.

ILOs defined at three levels

Level 1: What concepts should a student know/remember/explain?

By the end of the course, the students should be able to:

  • apply deep learning methods to effectively solve practical (real-world) problems;
  • work in a data science team;
  • understand the principles and the lifecycle of data science projects.

Level 2: What basic practical skills should a student be able to perform?

By the end of the course, the students should be able to:

  • understand modern deep NN architectures;
  • compare modern deep NN architectures;
  • create a prototype of a data-driven product.

Level 3: What complex comprehensive skills should a student be able to apply in real-life scenarios?

By the end of the course, the students should be able to:

  • apply techniques for efficient training of deep NNs;
  • apply methods for data science team organisation;
  • apply deep NNs in NLP and computer vision.

Grading

Course grading range

Grade            Range    Description of performance
A. Excellent     90-100   -
B. Good          75-89    -
C. Satisfactory  60-74    -
D. Poor          0-59     -

Course activities and grading breakdown

Activity Type                    Percentage of the overall course grade
Labs/seminar classes             20
Interim performance assessment   30
Exams                            50

Recommendations for students on how to succeed in the course

Resources, literature and reference materials

Open access resources

  • Goodfellow, I., Bengio, Y., and Courville, A. Deep Learning. MIT Press, 2016.
  • Géron, A. Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems. O’Reilly Media, 2017.
  • Osinga, D. Deep Learning Cookbook: Practical Recipes to Get Started Quickly. O’Reilly Media, 2018.

Closed access resources

Software and tools used within the course

Teaching Methodology: Methods, techniques, & activities

Activities and Teaching Methods

Activities within each section
Learning Activities                                        Section 1   Section 2   Section 3
Development of individual parts of software product code       1           1           1
Homework and group projects                                    1           1           1
Midterm evaluation                                             1           1           1
Testing (written or computer based)                            1           1           1
Discussions                                                    1           1           1

(1 = the activity is used within the section)

Formative Assessment and Course Activities

Ongoing performance assessment

Section 1

Activity type: Question.

Graded:
  1. Suppose you use Batch Gradient Descent and you plot the validation error at every epoch. If you notice that the validation error consistently goes up, what is likely going on? How can you fix this?
  2. Is it a good idea to stop Mini-batch Gradient Descent immediately when the validation error goes up?
  3. List the optimizers that you know (except SGD) and explain one of them.
  4. Describe Xavier (or Glorot) initialization. Why do you need it?

Not graded:
  1. Name advantages of the ELU activation function over ReLU.
  2. Can you name the main innovations in AlexNet, compared to LeNet-5? What about the main innovations in GoogLeNet and ResNet?
  3. What is the difference between LSTM and GRU cells?
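As a reference point for the Xavier (Glorot) initialization question, the scheme fits in a few lines of NumPy; the function name, seed, and layer sizes below are illustrative, not part of the course materials:

```python
import numpy as np

def xavier_uniform(fan_in, fan_out, seed=0):
    """Glorot/Xavier uniform initialization: draw weights from
    U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out)),
    which keeps activation variance roughly constant across layers."""
    rng = np.random.default_rng(seed)
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

W = xavier_uniform(256, 128)
# The empirical variance approaches limit**2 / 3 = 2 / (fan_in + fan_out)
print(W.shape, np.isclose(W.var(), 2.0 / (256 + 128), rtol=0.05))  # (256, 128) True
```

In PyTorch the same idea is available out of the box (torch.nn.init.xavier_uniform_).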

Section 2

Activity type: Question.

Graded:
  1. What is CRISP-DM?
  2. What is TDSP?
  3. How do you use MLflow?
  4. What is TensorBoard?
  5. How do you apply Kubeflow in practice?

Not graded:
  1. Explain issues in distributed learning of deep NNs.
  2. How do you organize your data science project?
  3. Recall a checklist for the organization of a typical data science project.

Section 3

Activity type: Question.

Graded:
  1. What is an Autoencoder? Can you list the structure and types of Autoencoders?
  2. Can you describe ways to train Stacked AEs?
  3. What is a Denoising AE? Can you describe what sparsity loss is and why it can be useful?
  4. Can you make a distinction between an AE and a VAE?

Not graded:
  1. If an autoencoder perfectly reconstructs the inputs, is it necessarily a good autoencoder? How can you evaluate the performance of an autoencoder?
  2. How do you tie weights in a stacked autoencoder? What is the point of doing so?
  3. What is the main risk of an overcomplete autoencoder?
  4. How is the loss function for a VAE defined? What is the ELBO?
  5. Can you list the structure and types of a GAN?
  6. How would you train a GAN?
  7. How would you estimate the quality of a GAN?
  8. Can you describe the cost function of the discriminator?
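As a reference point for the VAE loss questions, the KL term of the negative ELBO has a closed form when the approximate posterior is a diagonal Gaussian and the prior is N(0, I); a minimal NumPy sketch (the function name and shapes are illustrative):

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian q with
    mean mu and log-variance log_var, per dimension:
    0.5 * (sigma^2 + mu^2 - 1 - log sigma^2), summed over the latent dims.
    The VAE loss (negative ELBO) = reconstruction loss + this KL term."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

# The KL term vanishes exactly when q already equals the standard normal prior
print(gaussian_kl(np.zeros(4), np.zeros(4)))  # prints 0.0
```

Minimizing this term pulls the encoder's posterior toward the prior, which is what makes sampling from N(0, I) at generation time meaningful.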

Final assessment

Section 1

  1. Explain what Teacher Forcing is.
  2. Why do people use encoder–decoder RNNs rather than plain sequence-to-sequence RNNs for automatic translation?
  3. How could you combine a convolutional neural network with an RNN to classify videos?

Section 2

  1. Can you explain what it means for a company to be ML-ready?
  2. What can a company do to become ML-ready / data-driven?
  3. Can you list approaches to structure DS-teams? Discuss their advantages and disadvantages.
  4. Can you list and define typical roles in a DS team?
  5. What do you think about practical aspects of processes and roles in Data Science projects/teams?

Section 3

  1. Can you make a distinction between Variational approximation of density and MCMC methods for density estimation?
  2. What is DCGAN? What is its purpose? What are the main features of DCGAN?
  3. What is your opinion about Word Embeddings? What types do you know? Why are they useful?
  4. How would you classify different CNN architectures?
  5. How would you classify different RNN architectures?
  6. Explain the attention mechanism. What is self-attention?
  7. Explain the Transformer architecture. What is BERT?
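As a reference point for the attention questions above, scaled dot-product attention fits in a few lines of NumPy, and passing the same matrix as queries, keys, and values gives self-attention (the names and sizes here are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity of each query to each key
    weights = softmax(scores, axis=-1)   # each query's weights sum to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))                      # 5 tokens, model dimension 8
out, w = scaled_dot_product_attention(x, x, x)   # self-attention: Q = K = V
print(out.shape, np.allclose(w.sum(axis=-1), 1.0))  # (5, 8) True
```

The Transformer stacks this operation (in multi-head form, with learned projections of Q, K, and V) instead of recurrence.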

The retake exam

Section 1

Section 2

Section 3