Admission (Zulassung) to the exam is only possible by completing at least 50% of the assignments, averaged over all handed-out tasks. These are exercises to do at home. Some of them are optional; some need to be handed in. Exercises that need to be handed in are specially marked.
Lecture 1
Submission Deadline: 6 May 2013 at 15:00

(1) In an iPython notebook, generate a random 10x10 matrix and compute its inverse. Use the online documentation on SciPy and NumPy to figure out how to do this: you generally need to start off with something like "from pylab import *" and "from scipy import linalg".

(2) (a) In an iPython notebook, compute 10000 values of the function f(x) in the interval [-2,2] and plot them using the "plot" function. Generate the x values using linspace(-2,2,10000). (b) Approximate the first and second derivatives of f(x) using finite differences and plot them. (A sketch for both parts appears at the end of this listing.)

iPython worksheet discussed in Exercise Session 01 (22.04.2013): IPyNB

Lecture 2

In this exercise, you're going to work more with the numerical solution of differential equations. The model itself also describes many kinds of systems that are common in computer science, economics, business, and other areas.

The Lotka-Volterra equations are a simple model of prey and predator populations. Let

    dx/dt = α·x − β·x·y
    dy/dt = δ·x·y − γ·y

where x is the number of rabbits (prey), y the number of foxes (predators), and α, β, γ, δ > 0. The α·x term means that the rabbits would exponentially reproduce as e^(αt) in the absence of foxes, and the δ·x·y and −γ·y terms mean that the foxes are busy converting consumed rabbits into new foxes, but would die out exponentially if they cannot eat.

Simulate the Lotka-Volterra equations. How does a solution behave? Explore the parameter space. Do you need to explore the whole 4-dimensional parameter space to see all possible behaviors? Hand in a single worksheet with an analysis and discussion. (A simulation sketch appears at the end of this listing.)

iPython worksheet discussed in Exercise Session 02 (29.04.2013): Introduction to ODE: IPyNB
Solution to Exercise 2: IPyNB, PDF

Lecture 3

(1) To understand the concept of rate coding and the numerical exploration of complex systems, let us work with the Izhikevich model a bit more. Plot the correspondence between input and output frequency of an Izhikevich neuron. For every frequency, give it a train of discrete spikes with fixed amplitude and measure the output frequency by counting the number of spikes per unit time after letting the system settle. Explore different input parameters (amplitude, frequency) and different parameters of the Izhikevich model. Hand in a single worksheet with an analysis and discussion. (A measurement-loop sketch appears at the end of this listing.)

(2) Optional: Analogous to the way we examined the response types of the FitzHugh-Nagumo model, find sets of parameters with different behaviors for the Izhikevich model (i.e. use simulation, Fourier transforms, and clustering). Use three different stimuli: no input, step-up input after an initial delay, and step-down input after an initial delay. Try to pick reasonable and interesting parameters and parameter ranges (you probably can't do a complete job).

(3) Optional: Write a Python function that generates a recurrent network to compute parity for inputs of size N. That is, write a function:

    from pylab import *  # provides zeros() and dot()

    def make_parity(N):
        # ... your code here ...
        return M, m, n

    def apply_network(x, M, m, n):
        v = zeros(len(M))
        v[:len(x)] = x
        for i in range(n):
            v = 1.0 * ((dot(M, v) + m) >= 0.0)
        return v

    def parity(x):
        N = len(x)
        M, m, n = make_parity(N)
        v = apply_network(x, M, m, n)
        return v[0]

Hand in a single worksheet with an analysis and discussion.

Lecture 5
Submission Deadline: 4 June 2013 at 15:00

Train an MLP on the first 5000 digits of MNIST and test it on the next 1000 digits. Try different learning rates and numbers of hidden units. Download short-mnist.npz from this page. Load it with

    mnist = load('short-mnist.npz')

The MLP input is the image, flattened to a 1D array (reshape()). The target output of the MLP is a 10-dimensional vector with 1 at the right answer and 0 everywhere else. Hand in a single worksheet with an analysis and discussion. (A data-preparation sketch appears below.)
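A minimal sketch for the Lecture 1 exercise, using numpy/scipy as suggested above. The original definition of f(x) is not preserved on this page, so the f below is a hypothetical stand-in; substitute the function from the exercise sheet.

    from pylab import *          # rand, linspace, plot, diff, ...
    from scipy import linalg

    # (1) random 10x10 matrix and its inverse
    A = rand(10, 10)
    Ainv = linalg.inv(A)
    print(allclose(dot(A, Ainv), eye(10)))   # sanity check

    # (2a) sample and plot the function on [-2, 2]
    def f(x):
        return sin(3 * x)        # HYPOTHETICAL stand-in for the exercise's f(x)

    x = linspace(-2, 2, 10000)
    y = f(x)
    plot(x, y)

    # (2b) finite-difference approximations of f' and f''
    h = x[1] - x[0]
    dy = diff(y) / h             # forward differences, length 9999
    d2y = diff(y, 2) / h**2      # second differences, length 9998
    plot(x[:-1], dy)
    plot(x[1:-1], d2y)
    show()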
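For the Lecture 2 exercise, a minimal simulation sketch using scipy.integrate.odeint; the parameter values and initial populations are arbitrary assumptions, chosen only to produce a typical oscillating solution.

    from pylab import *
    from scipy.integrate import odeint

    alpha, beta, gamma, delta = 1.0, 0.5, 0.75, 0.25   # assumed example values

    def lotka_volterra(state, t):
        x, y = state                          # x: rabbits, y: foxes
        return [alpha * x - beta * x * y,     # dx/dt
                delta * x * y - gamma * y]    # dy/dt

    t = linspace(0, 50, 5000)
    sol = odeint(lotka_volterra, [10.0, 5.0], t)   # initial populations assumed
    plot(t, sol[:, 0], label='rabbits')
    plot(t, sol[:, 1], label='foxes')
    legend()
    show()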
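For exercise (1) of Lecture 3, one possible measurement loop. It assumes the standard Izhikevich model (v' = 0.04v^2 + 5v + 140 - u + I, u' = a(bv - u), with reset v <- c, u <- u + d when v >= 30) and simple Euler integration; the pulse width, amplitudes, and neuron parameters are assumptions meant to be varied.

    from pylab import *

    def output_rate(amp, freq_in, a=0.02, b=0.2, c=-65.0, d=8.0,
                    T=1000.0, dt=0.1, width=1.0, settle=200.0):
        """Output spike rate (Hz) of an Izhikevich neuron driven by a train
        of current pulses of amplitude `amp` at frequency `freq_in` (Hz)."""
        v, u = c, b * c
        period = 1000.0 / freq_in            # ms between input pulses
        spikes = 0
        for i in range(int(T / dt)):
            t = i * dt
            I = amp if (t % period) < width else 0.0   # pulse-train input
            v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
            u += dt * a * (b * v - u)
            if v >= 30.0:                    # spike detected: reset
                v, u = c, u + d
                if t > settle:               # count only after settling
                    spikes += 1
        return spikes * 1000.0 / (T - settle)

    freqs = linspace(10, 200, 20)
    plot(freqs, [output_rate(20.0, f) for f in freqs])
    show()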
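For the Lecture 5 exercise, a minimal data-preparation sketch. The array names inside short-mnist.npz are not documented on this page, so 'images' and 'labels' below are hypothetical; check mnist.files for the real keys.

    from pylab import *

    mnist = load('short-mnist.npz')
    print(mnist.files)                  # inspect the actual array names first
    images = mnist['images']            # HYPOTHETICAL key name
    labels = mnist['labels']            # HYPOTHETICAL key name

    # flatten each image into a 1D input vector (assuming 8-bit pixel values)
    X = images.reshape(len(images), -1) / 255.0

    # one-hot targets: 1 at the correct digit, 0 everywhere else
    T = zeros((len(labels), 10))
    T[arange(len(labels)), labels] = 1.0

    Xtrain, Ttrain = X[:5000], T[:5000]
    Xtest, Ttest = X[5000:6000], T[5000:6000]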
Solution to Exercise 5: IPyNB

Add a single convolutional layer as the input layer to the MLP of the previous exercise (the exercise for Lecture 5), and perform the same experiments (you can use the code samples provided in the lecture slides). Please note that several convolutional layers in parallel are needed: each one learns a single filter, so, for instance, at least two are needed to learn an x-gradient and a y-gradient (in practice you need to use more filters). Analyse the result and report how the performance changes with the addition of the convolutional layer. Hand in a single worksheet with an analysis and discussion. (A sketch of the parallel-filter idea appears at the end of this block.)

iPython worksheet discussed in Exercise Session on 10.06.2013: 1D Linear Convolutional Layer: IPyNB

This dataset contains 1442 speeches of German presidents (originally available at: http://perso.ens-lyon.fr/adrien.barbaresi/corpora/index.html). The raw/ directory contains the speeches themselves, and the stems/ directory contains the stems of the words, excluding stopwords, one per line. You can load the words like this:

    import codecs
    import json

    N = 1442
    ...
    m = json.load(codecs.open('metadata.json'))

Hand in a single worksheet with an analysis and discussion.
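For the convolutional-layer exercise above, a minimal sketch of the parallel-filter idea. The two filters here are fixed x- and y-gradient (Sobel) kernels rather than learned ones, so this only illustrates how several filter responses are stacked into one MLP input vector; in the exercise itself the filters are trained as in the lecture code.

    from pylab import *
    from scipy.ndimage.filters import convolve

    # two parallel 3x3 filters: x-gradient and y-gradient (Sobel)
    Kx = array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    Ky = Kx.T

    def conv_features(img):
        """Stack the responses of several filters into one feature vector."""
        maps = [convolve(img.astype(float), K) for K in (Kx, Ky)]
        return concatenate([m.ravel() for m in maps])

    # feed F to the MLP instead of the raw pixels:
    # F = array([conv_features(im) for im in images])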
iPython worksheet discussed in Exercise Session on 17.06.2013: 1D SOM: IPyNB

Read the following paper: Hierarchical Models of Object Recognition in Cortex

iPython worksheet discussed in Exercise Session on 01.07.2013: PCA and Emergent Fourier Basis: IPyNB
Slides discussed in Exercise Session on 15.07.2013: Visual Attention: PDF

Lecture 11

Implement the first part of HMAX feature extraction with multiple scales (the S1 and C1 layers). Apply it to MNIST and see what error rate you can reach with an MLP. For the S1 layer, you can use Gabor filters or several Sobel filters. For the C1 layer, you might want to use scipy.ndimage.filters.maximum_filter() and scipy.ndimage.zoom() to do approximately the same thing as finding maxima in a grid. Hand in a single worksheet with an analysis and discussion. (A minimal S1/C1 sketch follows.)
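A minimal S1/C1 sketch under stated assumptions: Sobel-style kernels at four orientations stand in for the S1 filters (Gabor filters would be closer to the original HMAX), and maximum_filter plus zoom implement the C1 pooling at two scales. The orientations, pooling sizes, and scales are all assumptions to experiment with.

    from pylab import *
    from scipy.ndimage.filters import convolve, maximum_filter
    from scipy.ndimage import zoom

    # S1 kernels: four Sobel-like orientations (0, 90, 45, 135 degrees)
    K0 = array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    K45 = array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], dtype=float)
    K135 = array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]], dtype=float)
    kernels = [K0, K0.T, K45, K135]

    def s1_c1(img, pools=(2, 4)):
        """S1: rectified oriented filter responses.
        C1: local max pooling followed by downsampling, at several scales."""
        feats = []
        for K in kernels:
            s1 = abs(convolve(img.astype(float), K))    # S1 response map
            for p in pools:                              # one C1 map per scale
                c1 = zoom(maximum_filter(s1, size=p), 1.0 / p)
                feats.append(c1.ravel())
        return concatenate(feats)

    # feature vectors for an MLP:
    # F = array([s1_c1(im) for im in images])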