I've just finished lecture 4, and so far MacKay has taken me on a tour of Shannon's source coding theorem and convinced me of what a marvel it really is. I don't really have time to do all the surrounding reading (although I did read the chapter on Information Theory in Theoretical Neuroscience), but the lectures have been excellent at building an intuition for how the foundations of the field were constructed. The exercises he takes the class through connect neatly together once you start to think about them, and it becomes really satisfying when you begin to anticipate what is coming next, which happens more often than I expected.
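As a quick reminder to myself of the theorem's punchline: the entropy of a source is the limit on lossless compression, in bits per symbol. Here's a minimal sketch, assuming a memoryless source with known symbol probabilities (the toy distribution below is just an illustration, not from the lectures):

```python
import math

def entropy(p):
    """Shannon entropy in bits. The source coding theorem says this is
    the minimum achievable average code length per symbol for lossless
    compression of a memoryless source with these symbol probabilities."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# Toy source with symbol probabilities 1/2, 1/4, 1/4:
# an optimal prefix code assigns codeword lengths 1, 2, 2,
# so the average length is 0.5*1 + 0.25*2 + 0.25*2 = 1.5 bits,
# which exactly matches the entropy.
p = [0.5, 0.25, 0.25]
print(entropy(p))  # → 1.5
```

For dyadic probabilities like these the bound is met exactly; in general an optimal code's average length sits within one bit of the entropy.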
Even though my MSc thesis topic appears to have changed (I'll now be looking at developing a Wiener kernel to predict variability in spike trains), it still looks as if this material will be useful for understanding neural encoding. So I'll stick with it for now.