assignments/2017/assignment3.md
6 additions & 7 deletions
@@ -4,13 +4,13 @@ mathjax: true
permalink: /assignments2017/assignment2/
---

-In this assignment you will implement recurrent networks, and apply them to image captioning on Microsoft COCO. You will also explore methods for visualizing the features of a pretrained model on ImageNet, and use this model to implement Style Transfer. Finally, you will train a generative adversarial network to generate images that look like a training dataset!
+In this assignment you will implement recurrent networks, and apply them to image captioning on Microsoft COCO. You will also explore methods for visualizing the features of a pretrained model on ImageNet, and also use this model to implement Style Transfer. Finally, you will train a generative adversarial network to generate images that look like a training dataset!

The goals of this assignment are as follows:

- Understand the architecture of *recurrent neural networks (RNNs)* and how they operate on sequences by sharing weights over time
-- Understand the difference between vanilla RNNs and Long-Short Term Memory (LSTM) RNNs
-- Understand how to sample from an RNN at test-time
+- Understand and implement both Vanilla RNNs and Long-Short Term Memory (LSTM) RNNs
+- Understand how to sample from an RNN language model at test-time
- Understand how to combine convolutional neural nets and recurrent nets to implement an image captioning system
- Understand how a trained convolutional network can be used to compute gradients with respect to the input image
- Implement different applications of image gradients, including saliency maps, fooling images, and class visualizations.
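To make the RNN goals above concrete, here is a minimal NumPy sketch of the vanilla recurrence (one set of weights reused at every time step) and of greedy test-time sampling from an RNN language model. This is not the assignment's starter code; all array names and sizes are illustrative, and a real captioning model would condition the initial hidden state on CNN image features and stop at an end token.

```python
import numpy as np

def rnn_step(x, h_prev, Wx, Wh, b):
    """Single vanilla RNN step: the same Wx, Wh, b are reused at every time step."""
    return np.tanh(x @ Wx + h_prev @ Wh + b)

def sample_greedy(start_token, h0, Wx, Wh, b, W_vocab, b_vocab, embed, max_len=20):
    """Greedily sample a sequence: feed each predicted word back in as the next input."""
    h, word, out = h0, start_token, []
    for _ in range(max_len):
        h = rnn_step(embed[word], h, Wx, Wh, b)   # recurrence with shared weights
        scores = h @ W_vocab + b_vocab            # project hidden state to vocabulary scores
        word = int(np.argmax(scores))             # pick the most likely next word
        out.append(word)
    return out

# Tiny illustrative dimensions: vocabulary of 10 words, 4-d embeddings, 8-d hidden state.
rng = np.random.default_rng(0)
V, D, H = 10, 4, 8
embed = rng.standard_normal((V, D))
Wx, Wh, b = rng.standard_normal((D, H)), rng.standard_normal((H, H)), np.zeros(H)
W_vocab, b_vocab = rng.standard_normal((H, V)), np.zeros(V)
print(sample_greedy(start_token=0, h0=np.zeros(H), Wx=Wx, Wh=Wh, b=b,
                    W_vocab=W_vocab, b_vocab=b_vocab, embed=embed))
```

An LSTM replaces `rnn_step` with a gated update (input, forget, and output gates plus a cell state) but keeps the same weight-sharing structure over time.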
@@ -31,7 +31,7 @@ GPUs are **not required** for this assignment, but will help to speed up trainin

Once you've got the cloud instance running, make sure to run the following line to enter the virtual environment that we prepared for you (you do **not** need to make your own virtual environment):

-```
+```bash
source /home/cs231n/myVE35/bin/activate
```
@@ -113,8 +113,7 @@ with respect to images, and use them to produce saliency maps and fooling
images. Please complete only one of the notebooks (TensorFlow or PyTorch). No extra credit will be awarded if you complete both notebooks.
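As a rough sketch of the Q3 idea (this is not the notebook's code; the SqueezeNet choice, the random input batch, and the class indices are placeholders), a saliency map is just the absolute gradient of the correct-class score with respect to the input pixels:

```python
import torch
import torchvision.models as models

# Illustrative setup: a pretrained ImageNet classifier (downloads weights) and a fake image batch.
model = models.squeezenet1_1(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)          # we only want gradients w.r.t. the input image

images = torch.randn(2, 3, 224, 224, requires_grad=True)   # stand-in for real, preprocessed images
labels = torch.tensor([243, 281])                           # stand-in ground-truth class indices

scores = model(images)                                      # (N, 1000) class scores
correct_scores = scores.gather(1, labels.view(-1, 1)).sum() # sum of correct-class scores
correct_scores.backward()                                   # d(score) / d(pixel)

# Saliency: max absolute gradient over the 3 color channels, one value per pixel.
saliency = images.grad.abs().max(dim=1).values              # (N, 224, 224)
print(saliency.shape)
```

Fooling images follow the same pattern in reverse: instead of reading off the input gradient, you take gradient ascent steps on the image to raise the score of an incorrect target class.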

### Q4: Style Transfer (20 points)
-In the Jupyter notebooks `StyleTransfer-TensorFlow.ipynb`/`StyleTransfer-PyTorch.ipynb` you will learn how to create images with the content of one image but the style of another. . Please complete only one of the notebooks (TensorFlow or PyTorch). No extra credit will be awardeded if you complete both notebooks.
+In the Jupyter notebooks `StyleTransfer-TensorFlow.ipynb`/`StyleTransfer-PyTorch.ipynb` you will learn how to create images with the content of one image but the style of another. Please complete only one of the notebooks (TensorFlow or PyTorch). No extra credit will be awarded if you complete both notebooks.
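For orientation on Q4, style transfer optimizes an image so that its deep features match the content image while the Gram matrices of its features match the style image. The sketch below shows one possible form of those two losses; the function names and normalization are illustrative, not necessarily the notebook's exact definitions:

```python
import torch
import torch.nn.functional as F

def gram_matrix(feats):
    """Gram matrix of conv features: (N, C, H, W) -> (N, C, C), normalized by C*H*W."""
    N, C, H, W = feats.shape
    f = feats.view(N, C, H * W)
    return f @ f.transpose(1, 2) / (C * H * W)

def content_loss(gen_feats, content_feats, weight=1.0):
    """Squared error between generated-image and content-image features at one layer."""
    return weight * F.mse_loss(gen_feats, content_feats, reduction='sum')

def style_loss(gen_feats, style_feats, weight=1.0):
    """Squared error between Gram matrices of generated-image and style-image features."""
    return weight * F.mse_loss(gram_matrix(gen_feats), gram_matrix(style_feats), reduction='sum')

# Illustrative check with random "features" from some conv layer; in practice these come
# from a pretrained CNN, the style loss is summed over several layers, and the generated
# image itself is optimized (often with a total-variation regularizer added).
g = torch.randn(1, 64, 32, 32)
c = torch.randn(1, 64, 32, 32)
s = torch.randn(1, 64, 16, 16)
print(content_loss(g, c).item(), style_loss(g, s).item())
```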
-In the Jupyter notebooks `GANs-TensorFlow.ipynb`/`GANs-PyTorch.ipynb` you will learn how to generate images that match a training dataset, and use these models to improve classifier performance when training on a large amount of unlabeled data and a small amount of labeled data. Please complete only one of the notebooks (TensorFlow or PyTorch). No extra credit will be awardeded if you complete both notebooks.
+In the Jupyter notebooks `GANs-TensorFlow.ipynb`/`GANs-PyTorch.ipynb` you will learn how to generate images that match a training dataset, and use these models to improve classifier performance when training on a large amount of unlabeled data and a small amount of labeled data. Please complete only one of the notebooks (TensorFlow or PyTorch). No extra credit will be awarded if you complete both notebooks.
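A GAN alternates discriminator and generator updates under the usual minimax objective. The sketch below uses tiny fully connected placeholder networks and random stand-in data purely for illustration; the notebook's architectures, losses, and data pipeline may differ:

```python
import torch
import torch.nn as nn

# Placeholder networks: a generator mapping noise -> fake samples, a discriminator scoring realness.
G = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

real_data = torch.rand(256, 784) * 2 - 1    # stand-in for real images scaled to [-1, 1]

for step in range(100):
    real = real_data[torch.randint(0, 256, (64,))]
    noise = torch.randn(64, 32)

    # Discriminator step: real images should score 1, generated images 0.
    fake = G(noise).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: fool the discriminator into scoring generated images as real.
    g_loss = bce(D(G(noise)), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(float(d_loss), float(g_loss))
```

Detaching the generator's output during the discriminator step keeps the generator's parameters from receiving gradients from the discriminator's loss; the generator is updated only through its own objective.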