Tuesday 19 August 2014

Up Goer 5 style project rotation summary

On a whim, I wrote up a summary of my project with Jack Mellor in the style of the Up Goer 5 (http://xkcd.com/1133/) using this text editor. I looked at feedforward inhibition at the hippocampal mossy fibre synapse and how it competes with excitatory mossy fibre transmission. The full PhD would look at how this inhibition changes, and how it allows the mossy fibre to compete with other inputs, e.g. those from the perforant pathway. Anyway, here it is! I might try to improve it at some point.

Your brain is a cool thing that lets you think and move and see and all sorts of other things. I look at one part that lets you remember things by changing the way the little bits of it join up. One of these joins is really weird because it is so big next to all of the others! We don't know why it is so big, but it is really strong. These joins let the little bits talk to each other, so the join that I look at must be really loud! It doesn't talk much though. Maybe it is like a phone, that is quiet most of the time and so when people are all talking together they can talk as much as they want. When the phone rings, it rings loud and everyone goes quiet so someone can answer the phone and listen to what the person on the phone is saying. We think this is what happens in the weird join and I want to know how. We think it might be because there is some stuff that usually makes joins shut up, but sometimes makes them louder. Usually people ignore this stuff, so I want to look at it and see where it goes so I can decide if we are right or not. It will probably be quite hard since I haven't done it before, and it can be hard to see where the stuff is going. I'll use my computer to work out what it should look like if the stuff is going to different parts of the join and see if I can see that happening when I try it out. I hope it works!

Thursday 14 August 2014

Jellybeans

My partner asked me to estimate the probability that a packet of jelly beans would be missing a particular flavour. I enjoy messing around with problems like these, so I thought I'd give it a shot. If there is anything wrong, or if there is a better method of solving the problem, I'd love to hear it. Also, I'm not really a mathematician at all, so I won't attempt to sketch out a formal proof. I'll describe my thinking, and the empirical observations I made along the way.

So, first off, we had to estimate the number of jelly beans in the packet. The packaging informed us that the total weight was 225g. We found that 24 jelly beans weighed 24.5g, so extrapolating from that we estimated there were 230 beans in the packet.

The packet also claimed it contained 9 flavours of beans. We assumed that beans were sampled randomly from 9 independent sources (imagine a robot that fills the packet by picking 230 beans from 9 jars, where each jar contains a different flavour, and each choice does not depend on any of the previous choices).

The problem could be interpreted in two ways: either that the packet does not contain all 9 flavours, or that it contains exactly 8 flavours. The following method allows us to solve either case.

From the assumptions outlined, we treated every combination of counts of beans that sums to 230 as equally likely, including those with zero elements. As such, we could separate out the strings of numbers by the count k of non-zero elements. By counting the possible strings that sum to 230 for each k, we could work out the probability of a string with k non-zero elements occurring.

Due to the combinatorial nature of this problem, I wasn't going to sit down and try to count each possible string. I noticed, however, that the counts for each k followed a particular pattern. I would take a string with a starting point of (230 - k + 1) followed by (k - 1) ones, e.g. 228 + 1 + 1 for k = 3. Starting from the left, I would subtract 1 from the first column, add 1 to the next, and leave all others the same. This would continue until the first column had reached 1 and the second column had reached (230 - k + 1). For k > 2, I would then add 1 to the third column, subtract 1 from the second, and return to adding and subtracting from the first and second columns.

I noticed that this procedure followed a pattern of polyhedral numbers, giving the number of strings Ck with k non-zero elements as:

Ck = prod_i=1:(k-1) (230 - k + i) / (k-1)!, for k <= 9 (since there are 9 possible jelly bean flavours). This works out to be the binomial coefficient 229!/((k-1)!(230-k)!), i.e. 229 choose (k-1).

(Generalising this: replace 230 with N, the total number of jelly beans, and let k run up to r, the number of possible jelly bean flavours; obviously r < N.)
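Rather than trust my pattern-spotting, here is a quick brute-force sanity check in MATLAB (the values N = 12 and k up to 4 are just toy numbers for illustration; the real packet is far too big to enumerate this way):

% Toy-sized check of the product formula for Ck: count the ordered strings of
% k positive whole numbers that sum to N by brute force, and compare against
% the formula.
N = 12;
for k = 1:4
    bruteCount = 0;
    for idx = 0:N^k - 1
        tuple = mod(floor(idx ./ N.^(0:k-1)), N) + 1;   % decode idx as a k-tuple of values 1..N
        bruteCount = bruteCount + (sum(tuple) == N);
    end
    formulaCount = prod((N - k + 1):(N - 1)) / factorial(k - 1);
    fprintf('k = %d: brute force %d, formula %d\n', k, bruteCount, formulaCount);
end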

Now, considering the placement of zeros within the string, we need the number of ways of choosing which k of the 9 elements of the string are non-zero. These can be calculated as binomial coefficients of the form

Bk = 9!/(k!(9-k)!)

So, to work out the probability of there not being 9 flavours of jelly beans we must compute

P(not all 9 flavours of jelly beans) = 1 - B9*C9/sum_k=1:9(Bk*Ck) = 0.269 (noting that B9 = 1)

and to work out the probability of there being exactly 8 flavours of jelly beans we must compute

P(exactly 8 flavours of jelly beans from a possible 9) = B8*C8/sum_k=1:9(Bk*Ck) = 0.237 (noting that B8 = 9)
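Putting the pieces together, something like the following MATLAB sketch reproduces these numbers from the formulas above (the variable names are mine, chosen for illustration):

% Sketch of the final calculation: Ck and Bk as defined above, for N = 230
% beans and 9 possible flavours.
N = 230; flavours = 9;
C = zeros(1, flavours); B = zeros(1, flavours);
for k = 1:flavours
    C(k) = prod((N - k + 1):(N - 1)) / factorial(k - 1);  % strings with k non-zero elements
    B(k) = nchoosek(flavours, k);                         % ways to place those k elements
end
total = sum(B .* C);                                      % all strings of 9 counts summing to N

pNotAll9  = 1 - B(9)*C(9)/total    % ~0.269
pExactly8 = B(8)*C(8)/total        % ~0.237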

On first reflection, this seems unintuitively high. However, with 9 jars to choose from, no pressure in this model to spread the beans evenly across flavours, and a relatively small packet, it is perhaps not so surprising that there is a fairly high chance of never visiting one of the jars.

I guess that Jelly Bean Inc. don't use this particular method to sort their beans.

Tuesday 20 May 2014

Trivial or black-boxing?

I noticed today that I have a particularly unhelpful habit. It has probably cost me hundreds of hours, and I feel they could have been saved with a simple question:

"Is this trivial, or should I black-box it?"

Let me explain.

As part of learning new skills, I will often come across things I don't understand. At this point, I am reaching the limits of my own knowledge and need some assistance in stretching it further to encompass something new. For example, at the moment I am learning how to take data from patch-clamp recordings and use them to estimate parameters for ion channel gating mechanisms using the Hodgkin-Huxley formalism. What I had difficulty understanding was why certain protocols were appropriate for parameter estimation. I have sought out further knowledge (patch-clamp recording protocols), consolidated past knowledge (gating mechanisms in the Hodgkin-Huxley formalism), and for the most part all is well.

A lot of time could have been saved if I had appropriately interpreted the message my supervisor had given me. She said

"You just fit it"

I interpreted this to mean that it was a trivial exercise: that I should be able to work it out in a back-of-the-envelope way, or with a simple algorithm. Instead, I should really have interpreted it as a black box. It was not trivial, but I don't really need to concern myself with the details. In this case, once you have the equations, you feed them and your data through a (mostly*) black-boxed optimisation algorithm until you get a good fit. It seems obvious in hindsight.
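To make that concrete, here's roughly what the black box looks like in MATLAB. The Boltzmann curve and fminsearch are standard, but the "data", starting values, and variable names below are entirely made up for illustration:

% A minimal sketch of "just fit it": fit a Boltzmann steady-state activation
% curve to some hypothetical normalised conductance measurements.
V = (-80:10:40)';                                           % command voltages (mV)
gNorm = 1 ./ (1 + exp(-(V + 30)/8)) + 0.02*randn(size(V));  % fake "data" for the example

boltzmann = @(p, V) 1 ./ (1 + exp(-(V - p(1))/p(2)));       % p = [Vhalf, slope]
sse = @(p) sum((gNorm - boltzmann(p, V)).^2);               % objective: sum of squared errors

p0 = [-20, 10];                  % rough initial guess
pFit = fminsearch(sse, p0);      % the (mostly) black-boxed optimiser
fprintf('Vhalf = %.1f mV, slope = %.1f mV\n', pFit(1), pFit(2));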

However, my personal tendency is to believe that I must be doing something wrong, or that everyone else finds it easy and therefore I must be an idiot for not getting it. My personal coping mechanisms for dealing with my own ignorance.

This is what I could avoid with that simple question. When something is presented in such a way that it is mostly skimmed over, it could either be trivial or black-boxed. So I should just ask, and save myself the hassle.

* I actually quite like looking over optimisation algorithms. Even while I was wasting time looking up things that didn't help me at all, I was half concocting some Bayesian method for investigating whether there were differences in the uncertainty of parameters for persistent and transient currents with the standard protocols used for collecting these kinds of data.

Sunday 18 May 2014

Red-eared turtles

For the first time since a short lecture series on motor control in my second year of undergraduate (and a few throwaway comments by machine learning enthusiasts on biological substrates for supervised learning), my attention is turning to the cerebellum. For the next four months I'll be modelling tonic inhibition in cerebellar granule cells, but since I knew so little about the cerebellum as a whole, I felt I should start with a broad review of its general anatomy, network structure, functional roles, experimental paradigms, etc.

One of the first things that struck me was the high degree of conservation across species: practically every modern vertebrate has some form of cerebellum. It differentiates from the embryonic hindbrain at quite an early stage, making it perhaps the oldest distinct brain structure that we still have today. Whatever it is, it is certainly a bit heavier than a few nematode ganglia emerging from some barely recognisable proto-spine.

Nevertheless, this high degree of conservation means there is a wide array of species that can be used as model organisms for the cerebellum. It was the red-eared turtle that first caught my attention, though, since a highly influential model of cerebellar granule cells by Gabbiani et al. (1994) comprises a mixture of ion channels whose properties were experimentally derived from the aforementioned turtle species and from rats. It seemed odd that a model would be based on multiple species. Little did I know that this was barely the half of it. Ultimately, I found other models to be based on a mess of data collected from rats, guinea pigs, turtles, frogs, and goldfish, being applied to data gathered from knockout strains of mice. Our basic understanding of the anatomy even comes from Ramón y Cajal's work with pigeons. I have no idea how I would justify this to an even remotely sceptical biologist.

Anyway, this piqued my curiosity and I wanted to look further into why in particular the red-eared turtle had been used as a model organism. Apparently, it was for four reasons:

1. Conservation of anatomy (as already discussed)
2. The shell could easily be clamped to afford high mechanical stability
3. Resilience to hypoxia
4. Lissencephalic brain

The resilience to hypoxia is rather remarkable. At low temperatures, the red-eared turtle can survive for weeks without oxygen, and even at high temperatures it can survive for a number of hours. This extremely efficient anaerobic metabolism and protection from low-oxygen conditions mean that the brain is very well preserved and rather stable after dissection and slicing, making electrophysiological experiments far easier.

Lissencephaly, or 'smooth brain', simplifies the anatomy of the cerebellum. In mammals the cerebellum is highly folded, so localising implanted electrodes for in vivo study was difficult (especially in the 1970s, when it seems much of this work with turtles was carried out). The smooth turtle cerebellum allowed greater confidence in the reliability of the experiments, since the researcher could be more certain that their recordings came from where they intended.

As such, the red-eared turtle was for a time a highly favoured biological model of the cerebellum. This seems to have died away in the last 20 years or so, especially since the rise of genetically modifiable mouse strains. Even so, I'm still astounded at how such a distantly related species could provide such insight into a whole brain structure. It also raises the question of why we still know so little about the cerebellum, when we have such a diverse set of models to work with.

Thursday 1 August 2013

The Neuroscientist's tale

I've been reading Brad Voytek's blog since my second year of undergraduate, and his writing has certainly had an influence on my interest in pursuing Computational Neuroscience. Recently he acquired a tenure-track position at UCSD, and has written about his journey there. It ends with some pretty decent advice that I'll try not to forget over the next few years.

http://blog.ketyov.com/2013/07/the-neuroscience-tenure-track.html

Wednesday 17 July 2013

LIUF 2013 Semi-Final


This was the semi-final of a fencing competition I took part in recently. I was 12-8 up at one point, then somehow managed to concede 7 hits in a row without even scoring a double hit.



Hopefully I'll learn from the experience...

Saturday 6 July 2013

Spike Triggered Average

A spike-triggered average is a method used in neuroscience to determine the stimulus that a cell is selective for, by averaging the stimuli presented over a given time window before each spike. An efficient representation of this is the Wiener kernel formulation, which works when the stimulus can be characterised as Gaussian white noise, and works even better when it is possible to get noisy spike trains synchronised across many trials, so that averaging smooths out some of the intrinsic noise in the responses. This basically cross-correlates the spiking activity averaged over many trials (not necessarily averaged across time, though some smoothing can be useful) with the stimulus, and normalises by the variance of the stimulus to extract a filter that describes the system relating stimulus to response. In other words, it's a model of a neuron that says the neuron is just a filter that turns the stimulus into a spiking response.

One of the properties of Gaussian white noise is that it has equal power at all frequencies, i.e., its samples are uncorrelated in time. However, as with any finite random sample, there is likely to be some spurious autocorrelation present, so it's worth accounting for it. In this case, the correction is quite easy to implement: divide out the stimulus autocorrelation rather than just its variance. It also reduces some of the need for regularisation.

Again, in MATLAB:

function g = STA(r, s, window, lambda)

% Finds the first-order Wiener kernel g that corresponds to the spike-triggered average of a neuron. Assumes the neural response r and the stimulus s are row vectors of equal length, window is the length (in samples) of the filter to be extracted, and lambda is a regularisation constant.

% Cross-correlate response and stimulus, keeping the lags at which the stimulus precedes the spike (lags 0 to window).
xrs = xcorr(r - mean(r), s, window, 'unbiased');
xrs = xrs(window+1:end);

% Correct for autocorrelation in the stimulus: build the Toeplitz matrix of stimulus autocorrelations at lags 0 to window.
xss = xcorr(s, window, 'unbiased');
xss = xss(window+1:end);
XSS = toeplitz(xss);

% Regularise by adding lambda to the diagonal.
S = XSS + lambda*eye(size(XSS));

% Extract the filter by solving the regularised Wiener-Hopf equations.
g = S\xrs';
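
For what it's worth, here's the kind of synthetic sanity check I'd want to run on it. The "true" filter, the spike-probability rule, and every constant below are made up purely for illustration:

% Hypothetical synthetic test: make up a filter, drive a noisy "neuron" with
% white noise through it, and check that STA recovers the filter shape.
T = 1e5; window = 50; lambda = 1e-3;
s = randn(1, T);                                      % Gaussian white noise stimulus
g_true = exp(-(0:window)/10) .* sin((0:window)/3);    % an arbitrary "true" filter
drive = filter(g_true, 1, s);                         % stimulus passed through the filter
r = double(rand(1, T) < 0.1 ./ (1 + exp(-drive)));    % noisy spiking response (0s and 1s)

g_est = STA(r, s, window, lambda);

% Compare shapes; both are normalised since the recovery is only up to a scale factor.
plot(0:window, g_true/norm(g_true), 0:window, g_est'/norm(g_est));
legend('true filter', 'estimated filter');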