### Exercises

Admission (Zulassung) to the exam requires completing at least 50% of the assignments, averaged over all handed-out tasks. These are exercises to do at home. Some are optional; others must be handed in. Exercises that must be handed in are specially marked.
• Exercises should be submitted to yousefi@iupr.com
• Generally the preferred way of submitting your results is handing in your iPython Notebook Worksheets (.ipynb).
• The email's subject line should be [NCSO13] Exercise # with # being replaced by the exercise number.
• Make sure that your submission (iPython Notebook Worksheets, PDFs) contains your name(s) and email address(es) at the top.
• If you need to resubmit, just reply to your original email and change the To field to yousefi@iupr.com.
• The homework has to be handed in before a given deadline, which is at least eight days after the task has been published.

## Lecture 1

#### Submission Deadline: 6 May 2013 at 15:00

(1) In an iPython notebook, generate a random 10x10 matrix and compute its inverse. Use the online documentation on SciPy and NumPy to figure out how to do this:
You generally need to start off with something like "from pylab import *" and "from scipy import linalg".
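A minimal sketch for this exercise (explicit imports are used instead of `from pylab import *`):

```python
# Generate a random 10x10 matrix and compute its inverse.
from numpy.random import rand
from numpy import dot, eye, allclose
from scipy import linalg

A = rand(10, 10)          # uniform random entries in [0, 1)
Ainv = linalg.inv(A)      # scipy.linalg.inv computes the inverse

# Sanity check: A times its inverse should be (numerically) the identity.
print(allclose(dot(A, Ainv), eye(10)))
```

A random matrix is invertible with probability 1, but for a robust notebook you may want to check the condition number (`linalg.norm(A) * linalg.norm(Ainv)`) as well.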

(2) (a) In an iPython notebook, compute 10000 values in the interval [-2,2] of the function:

and plot them using the "plot" function. Generate the x values using linspace(-2,2,10000).

(b) Approximate the first and second derivatives of f(x) using finite differences and plot them.
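A possible finite-difference sketch for part (b); the `f` below is only a placeholder, so substitute the actual f(x) from the task:

```python
# Approximate first and second derivatives by finite differences.
from numpy import linspace, sin, cos, diff

x = linspace(-2, 2, 10000)
f = sin(3 * x)                  # placeholder; use the f(x) from the task

h = x[1] - x[0]                 # uniform grid spacing
df = diff(f) / h                # first derivative, length 9999
ddf = diff(f, 2) / h ** 2       # second derivative, length 9998

# Plot against correspondingly shortened x arrays, e.g.:
# plot(x[:-1], df); plot(x[:-2], ddf)
```

Note that each application of `diff` shortens the array by one element, so the x axis must be truncated to match when plotting.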

## Lecture 2

#### Submission Deadline: 13 May 2013 at 15:00

In this exercise, you're going to work more with the numerical solution of differential equations. The model itself also describes many kinds of systems that are common in computer science, economics, business, and other areas.

The Lotka-Volterra equations are a simple model of prey and predator populations. Let $x$ be the prey population (say, the number of rabbits, scaled somehow) and $y$ the predator population (say, foxes). The frequency of a fox meeting a rabbit is proportional to $xy$. For the rabbits, we have:

$\dot{x} = \alpha x - \beta xy$

which means that, left alone, the rabbits would reproduce exponentially (as $e^{\alpha t}$), but the casualties are proportional to the frequency of meeting a fox. For the foxes, the equation goes similarly:

$\dot{y} = -\gamma y + \delta xy$

and that means that foxes are busy converting consumed rabbits into new foxes, but would die out exponentially if they cannot eat.

Simulate the Lotka-Volterra equations. How do solutions behave? Explore the parameter space. Do you need to explore the whole four-dimensional parameter space to see all possible behaviors?
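A minimal simulation sketch using `scipy.integrate.odeint`; the parameter values and initial populations below are arbitrary illustrations. Note the predator equation gains $+\delta xy$ from consumed prey:

```python
# Simulate the Lotka-Volterra predator-prey system.
from numpy import linspace
from scipy.integrate import odeint

alpha, beta, gamma, delta = 1.0, 0.5, 0.8, 0.3   # example parameters

def lotka_volterra(state, t):
    x, y = state                                  # prey, predators
    return [alpha * x - beta * x * y,
            -gamma * y + delta * x * y]

t = linspace(0, 50, 5000)
sol = odeint(lotka_volterra, [4.0, 2.0], t)       # columns: x(t), y(t)
# plot(t, sol[:, 0]); plot(t, sol[:, 1])
```

Plotting `sol[:, 0]` against `sol[:, 1]` (the phase plane) makes the characteristic closed orbits easy to see.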

Hand in a single worksheet with an analysis and discussion.

## Lecture 3

#### Submission Deadline: 20 May 2013 at 15:00

(1) To understand the concept of rate coding and the numerical exploration of complex systems, let us work with the Izhikevich model a bit more. Plot the correspondence between input and output frequency of an Izhikevich neuron. For every frequency, give it a train of discrete spikes with fixed amplitude and measure the output frequency by counting the number of spikes per unit time after letting the system settle. Explore different input parameters (amplitude, frequency) and different parameters of the Izhikevich model.
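One way to sketch the measurement loop, using simple Euler integration of the standard Izhikevich equations; all values here (the regular-spiking parameters a, b, c, d, the pulse amplitude, the simulation length and settling time) are example choices to vary in your exploration:

```python
# Measure the output spike rate of an Izhikevich neuron driven by a
# train of discrete input pulses with fixed amplitude and frequency.
from numpy import arange

def output_rate(in_freq, amp=10.0, a=0.02, b=0.2, c=-65.0, d=8.0,
                dt=0.5, T=2000.0, settle=500.0):
    """Output spikes per second for an input pulse train at in_freq Hz."""
    v, u = c, b * c                      # start at the reset point
    period = 1000.0 / in_freq            # input inter-pulse interval (ms)
    spikes = 0
    for t in arange(0.0, T, dt):
        I = amp if (t % period) < dt else 0.0    # one-step input pulse
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                    # output spike: reset
            v, u = c, u + d
            if t >= settle:              # count only after settling
                spikes += 1
    return spikes / ((T - settle) / 1000.0)

# Example scan over input frequencies:
# rates = [output_rate(f) for f in [10, 20, 50, 100, 200]]
```

Plotting `rates` against the input frequencies for several amplitudes and neuron parameter sets gives the input/output correspondence the exercise asks for.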

Hand in a single worksheet with an analysis and discussion.

(2) Optional: Analogous to the way we examined the response types of the FitzHugh-Nagumo model, find sets of parameters with different behaviors for the Izhikevich model (i.e. use simulation, Fourier transforms, and clustering). Use three different stimuli: no input, step-up input after an initial delay, and step-down input after an initial delay. Try to pick reasonable and interesting parameters and parameter ranges. (You probably can't do a complete job.)

(3) Optional: Write a Python function that generates a recurrent network to compute parity for inputs of size N. That is, write a function:

def make_parity(N):
    # ... your code here ...
    return M, m, n

def apply_network(x, M, m, n):
    v = zeros(len(M))
    v[:len(x)] = x
    for i in range(n):
        v = 1.0 * ((dot(M, v) + m) >= 0.0)
    return v

def parity(x):
    N = len(x)
    M, m, n = make_parity(N)
    v = apply_network(x, M, m, n)
    return v[0]
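To make the threshold-unit semantics of `apply_network` concrete, here is a hand-built three-unit network whose unit 0 computes AND of the two inputs after one update step. This only illustrates the weight/bias encoding; it is not a solution to `make_parity`:

```python
# Each update sets v = step(M v + m), where step(z) = 1 if z >= 0 else 0.
from numpy import array, zeros, dot

def apply_network(x, M, m, n):
    v = zeros(len(M))
    v[:len(x)] = x
    for i in range(n):
        v = 1.0 * ((dot(M, v) + m) >= 0.0)
    return v

M = array([[1.0, 1.0, 0.0],     # unit 0 sums both inputs
           [0.0, 0.0, 0.0],
           [0.0, 0.0, 0.0]])
m = array([-1.5, -1.0, -1.0])   # unit 0 fires iff x0 + x1 >= 1.5

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, apply_network(array(x, dtype=float), M, m, 1)[0])
# AND: only [1, 1] yields 1.0
```

Parity needs more than one such layer of thresholds, which is why `make_parity` also returns the iteration count `n`.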

Hand in a single worksheet with an analysis and discussion.

## Lecture 5

#### Submission Deadline: 4 June 2013 at 15:00

Train an MLP on the first 5000 digits of MNIST and test it on the next 1000 digits. Try different learning rates and numbers of hidden units.

train_images = mnist['train_images']
train_labels = mnist['train_labels']
test_images = mnist['test_images']
test_labels = mnist['test_labels']

The MLP input is the image, flattened to a 1D array (using reshape()). The target output of the MLP is a 10-dimensional vector with a 1 at the index of the correct digit and 0 everywhere else.
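A sketch of this input/target preparation, assuming `mnist` is the loaded archive with the keys shown above (the scaling by 255 is a common but optional choice):

```python
# Flatten images to vectors and build one-hot target vectors.
from numpy import zeros

def prepare(images, labels):
    n = len(images)
    X = images.reshape(n, -1) / 255.0        # each image as a 1D array in [0, 1]
    T = zeros((n, 10))
    T[range(n), labels.ravel().astype(int)] = 1.0   # one-hot targets
    return X, T

# X_train, T_train = prepare(train_images[:5000], train_labels[:5000])
# X_test,  T_test  = prepare(test_images[:1000],  test_labels[:1000])
```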

Hand in a single worksheet with an analysis and discussion.

## Lecture 8

#### Submission Deadline: 17 June 2013 at 15:00

Add a single convolutional layer as the input layer to the MLP of the previous exercise (exercise for Lecture 5), and perform the same experiments (you can use the code samples provided in the lecture slides).
Please note that several convolutions in parallel are needed; each learns one filter, so, for instance, at least two are needed to learn an x-gradient and a y-gradient (in practice you need more filters).
Analyse the result and report the performance changes with the addition of the convolutional layer.
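The parallel-filter data flow can be sketched as follows; two fixed Sobel filters stand in here for the learned ones, and the function names are illustrative:

```python
# One feature map per filter; all maps are flattened together as MLP input.
from numpy import array, stack
from scipy.signal import convolve2d

sobel_x = array([[-1, 0, 1],
                 [-2, 0, 2],
                 [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

def conv_layer(image, filters):
    return stack([convolve2d(image, f, mode='same') for f in filters])

# maps = conv_layer(img, [sobel_x, sobel_y])   # shape (2, 28, 28)
# mlp_input = maps.reshape(-1)                 # flattened feature maps
```

In the trainable version the filter entries become weights updated by backpropagation; the data flow stays the same.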

Hand in a single worksheet with an analysis and discussion.

## Lecture 9

#### Submission Deadline: 1 July 2013 at 15:00

This dataset contains 1442 speeches of German presidents (originally available at: http://perso.ens-lyon.fr/adrien.barbaresi/corpora/index.html)
The raw/ directory contains the speeches themselves and the stems/ directory contains stems of the words, excluding stopwords, one per line.

You can load the words like this:

import codecs
N = 1442
for i in range(N):
    words = [line.strip() for line in codecs.open('stems/%04d.txt' % i).readlines()]
    ...

You can also use the metadata file (it can be loaded with import json).

a) train a SOM on the texts using each word's number of occurrences as coordinates (that's a very high-dimensional space!). Do you get the texts grouped by topic?
b) (optional) plot the path on the SOM according to the year. How random is it?
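One way to build the occurrence-count vectors for part (a) is a bag-of-words matrix over all documents (an illustrative sketch; the file layout is as described above):

```python
# Build a documents-by-vocabulary count matrix from lists of word stems.
import codecs
from collections import Counter
from numpy import zeros

def count_matrix(docs):
    """docs: list of word lists -> (count matrix, sorted vocabulary)."""
    counts = [Counter(d) for d in docs]
    vocab = sorted(set(w for c in counts for w in c))
    index = {w: j for j, w in enumerate(vocab)}
    X = zeros((len(docs), len(vocab)))
    for i, c in enumerate(counts):
        for w, n in c.items():
            X[i, index[w]] = n
    return X, vocab

# docs = [[line.strip() for line in codecs.open('stems/%04d.txt' % i).readlines()]
#         for i in range(1442)]
# X, vocab = count_matrix(docs)   # rows of X are the SOM training vectors
```

Since the vocabulary is large, you may want to keep only the most frequent stems or normalize each row before training the SOM.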

## Lecture 11

#### Submission Deadline: 29 July 2013 at 15:00

Implement the first part of HMAX feature extraction with multiple scales (S1 and C1 layers). Apply it to MNIST and see what error rate you can reach with an MLP. For the S1 layer you can use Gabor filters or several Sobel filters. For the C1 layer you might want to use scipy.ndimage.zoom() to do approximately the same thing as finding maxima in a grid.
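A sketch of the S1/C1 pipeline; simple Sobel filters stand in for Gabor filters here, and the pooling size is an arbitrary example choice:

```python
# S1: oriented filter responses; C1: local max-pooling plus downsampling
# via scipy.ndimage.zoom, as suggested in the exercise.
from numpy import array, stack
from scipy.ndimage import maximum_filter, zoom
from scipy.signal import convolve2d

sobel_x = array([[-1, 0, 1],
                 [-2, 0, 2],
                 [-1, 0, 1]], dtype=float)
filters = [sobel_x, sobel_x.T]           # two orientations

def s1(image):
    # rectified responses of each oriented filter
    return stack([abs(convolve2d(image, f, mode='same')) for f in filters])

def c1(maps, pool=4):
    # local maxima, then shrink by the pooling factor
    pooled = [maximum_filter(m, size=pool) for m in maps]
    return stack([zoom(p, 1.0 / pool) for p in pooled])

# features = c1(s1(img)).reshape(-1)     # input vector for the MLP
```

For the multi-scale version, run the same pipeline on several `zoom`ed copies of the input image and concatenate the resulting feature vectors.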

Hand in a single worksheet with an analysis and discussion.
