I've been reading Brad Voytek's blog since my second year as an undergraduate, and his writing has certainly influenced my interest in pursuing Computational Neuroscience. Recently he acquired a tenure-track position at UCSD, and has written about his journey there. It ends with some pretty decent advice that I'll try not to forget over the next few years.
http://blog.ketyov.com/2013/07/the-neuroscience-tenure-track.html
A foray into electrophysiology and computational neuroscience. Lashings of whatever comes into mind on the way.
Thursday, 1 August 2013
Wednesday, 17 July 2013
LIUF 2013 Semi-Final
This was the semi-final of a fencing competition I took part in recently. I was 12-8 up at one point, but somehow managed to concede 7 hits in a row without even scoring a double hit.
Hopefully I'll learn from the experience...
Saturday, 6 July 2013
Spike Triggered Average
A spike-triggered average is a method used in Neuroscience to determine the stimulus that a cell is selective for, by averaging the stimuli presented over a given time window before each spike. An efficient representation of this is the Wiener kernel formulation, which works when the stimulus can be characterised as Gaussian white noise, and works even better when it is possible to get synchronised noisy spike trains across many trials, so that some of the intrinsic noise in the responses can be smoothed out. This approach cross-correlates the average spiking activity over many trials (not necessarily averaged across time, though some smoothing can be useful) with the stimulus, and normalises by the variance of the stimulus to extract a filter describing the system that relates stimulus to response. In other words, it's a model that says the neuron is just a filter turning the stimulus into a spiking response.
One of the properties of Gaussian white noise is that all frequencies are present in the data, i.e., samples are uncorrelated in time. However, as with any random sample, it's likely there is some spurious autocorrelation present, so it's worth accounting for. In this case, the correction is quite easy to implement, and it also reduces the need for regularisation.
Again, in MATLAB:
function g = STA(r, s, window, lambda)
% Finds the first-order Wiener kernel g that corresponds to the
% spike-triggered average of a neuron. Assumes the neural response r and
% stimulus s are row vectors; lambda is the regularisation constant;
% window is the length of the filter to be extracted.
% Cross-correlate and extract filter.
xrs = xcorr(r - mean(r),s,window, 'unbiased');
xrs = xrs(1:window+1);
% Correct for autocorrelation
xss = xcorr(s, 2*window, 'unbiased');
xss = circshift(xss, [0, -2*window]);
xss = xss(1:window+1);
XSS = toeplitz(xss);
% Regularise
S = XSS + lambda*eye(size(XSS));
% Extract filter
g = S\xrs';
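The post doesn't include a usage example, so here is a rough Python/NumPy translation of the same computation plus a sanity check (the function name `sta`, the test filter, and all parameter values are my own, not from the original): generate Gaussian white noise, filter it with a known kernel, and check that the recovered Wiener kernel matches.

```python
import numpy as np

def sta(r, s, window, lam):
    """Rough NumPy sketch of the Wiener-kernel STA above.
    r, s: 1-D arrays (response, stimulus); window: max lag; lam: regulariser."""
    T = len(s)
    r = r - r.mean()
    # cross-correlation <r(t) s(t-k)> for lags k = 0..window ('unbiased' scaling)
    xrs = np.array([np.dot(r[k:], s[:T - k]) / (T - k) for k in range(window + 1)])
    # stimulus autocorrelation at lags 0..window
    xss = np.array([np.dot(s[k:], s[:T - k]) / (T - k) for k in range(window + 1)])
    # symmetric Toeplitz autocorrelation matrix, regularised
    idx = np.abs(np.subtract.outer(np.arange(window + 1), np.arange(window + 1)))
    return np.linalg.solve(xss[idx] + lam * np.eye(window + 1), xrs)

# Sanity check: recover a known filter from white noise (illustrative values).
rng = np.random.default_rng(0)
s = rng.standard_normal(50000)
g_true = np.array([0.5, 1.0, -0.3])
r = np.convolve(s, g_true)[:len(s)]   # r(t) = sum_k g_true[k] * s(t-k)
g = sta(r, s, window=4, lam=1e-3)
```

With unit-variance white noise the autocorrelation matrix is close to the identity, so the recovered `g` should sit close to `g_true` in its first three entries and near zero beyond.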
Modular Robotics
I showed this to my family a few months ago...
They didn't get it. I was unbelievably excited when I saw this video. All ten minutes of it. They just sat there looking a little bored and started complaining very quickly. If anything I thought my younger brothers might be impressed by the implications for real life Transformers.
To me this is what the future of robotics will be: small uniform cell-like components that rearrange to perform a task. Of course, these modules are still fairly large and will need a few re-designs to get them on the same scale as biological cells. But as far as I can tell this is the right direction to be going in. A lot of people will possibly disagree, but I think the major advances that will be seen in robotics will be biologically inspired.
The main argument I see for this is that robotics research has been developing slowly over the last century or so, with some great stumbling blocks in creating any naturalistic movement, e.g., walking. However, the process of evolution has been tackling these problems waaaay longer, so we are likely to find 'designs' already optimised for these problems in Biology. This does of course require actually understanding the biophysics of each solution, but implementing these solutions in robots could actually be used to validate that understanding. In the case of the above robot, it isn't really addressing anything like biophysically plausible motion (it's more an exercise in logic programming as far as I can tell), but I can see something like this being used as a model of skeletal cells, which could be applied to motion.
One argument against this I've heard is that by studying Biophysics we are only studying one implementation of a solution, rather than creating an appropriate one; e.g., we didn't need to study birds to build the aeroplane. However, I would argue that the computational constraints we see in robotic controllers are in many cases the same as in neural controllers, so studying the optimal solutions found in neural controllers and implementing them in robotic controllers would actually solve the problem at hand.
So far, I've seen some of the work in Edinburgh looking at insect motion (I turned down the opportunity to work on this, with a slight hint of regret), which isn't using anything modular like this. So it may even be the case that modular components are unnecessary for creating decent biologically inspired designs. Ignoring issues of complexity (I can hear the dismissal of my argument already), tackling these problems with more modular designs seems like a worthy pursuit.
Friday, 21 June 2013
Manipulating spike count variance
I've been using a leaky integrate-and-fire neuron to generate spiking from fluctuating stimuli, but I wanted to generate data with different spike count variances. The problem came when the two sources of variance worked to change other aspects of the spike counts that I didn't want changed. Increasing trial-by-trial noise promoted noise-driven activity, i.e., the rate went up, but the variance didn't increase that much. Increasing stimulus noise (which in truth shouldn't really be called noise, since it is these fluctuations that drive spiking) rapidly increased the firing rate and actually decreased spike count variance.
These two forces need to be balanced, then. For future reference: muck about by decreasing stimulus noise and increasing trial-by-trial noise.
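The post has no code, but the setup can be sketched. Here is a minimal leaky integrate-and-fire simulation in Python/NumPy with the two knobs made explicit: a frozen fluctuating stimulus shared across trials, and independent trial-by-trial noise. All function names and parameter values are illustrative, not the author's.

```python
import numpy as np

def lif_trial(stim, noise_sd, dt=1e-3, tau=0.02, v_th=1.0, seed=None):
    """One trial of a leaky integrate-and-fire neuron: the membrane voltage is
    driven by the (frozen) stimulus plus independent per-trial Gaussian noise.
    Returns a binary spike train the same length as `stim`."""
    rng = np.random.default_rng(seed)
    noise = noise_sd * np.sqrt(dt) * rng.standard_normal(len(stim))
    v = 0.0
    spikes = np.zeros(len(stim), dtype=int)
    for t in range(len(stim)):
        v += dt * (stim[t] - v) / tau + noise[t]
        if v >= v_th:
            spikes[t] = 1
            v = 0.0  # reset after a spike
    return spikes

# Spike-count variance across trials: same frozen stimulus every trial,
# only the trial-by-trial noise changes.
rng = np.random.default_rng(0)
stim = 1.2 + 0.5 * rng.standard_normal(2000)
counts = np.array([lif_trial(stim, noise_sd=1.0, seed=k).sum() for k in range(50)])
```

Raising `noise_sd` versus scaling the fluctuations in `stim` then lets you trade off rate against trial-to-trial spike count variance, which is the balancing act described above.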
Time-Correlated Gaussian Noise
Short MATLAB script for generating time-correlated Gaussian noise, thanks to my supervisor. It takes the previous sample and generates a new sample at sampling interval dt, based on a Gaussian process with time constant tau, mean mu, and standard deviation sig.
function y = tcorr_randn(y_prev,mu, sig, tau, dt)
y = mu + (y_prev - mu)*exp(-dt/tau) + sig*sqrt(1-exp(-2*dt/tau))*randn;
end
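To sanity-check the update rule, here is a Python/NumPy sketch (function name and parameter values are my own) that iterates the same step to generate a whole trace. The stationary mean and standard deviation should come out as mu and sig, and the lag-1 autocorrelation as exp(-dt/tau).

```python
import numpy as np

def tcorr_randn_trace(n, mu, sig, tau, dt, seed=None):
    """Generate n samples of time-correlated Gaussian noise by iterating
    the one-step update above (an exact Ornstein-Uhlenbeck discretisation)."""
    rng = np.random.default_rng(seed)
    a = np.exp(-dt / tau)               # one-step decay factor
    b = sig * np.sqrt(1 - a * a)        # innovation scale keeps stationary sd = sig
    y = np.empty(n)
    y[0] = mu
    for t in range(1, n):
        y[t] = mu + (y[t - 1] - mu) * a + b * rng.standard_normal()
    return y

y = tcorr_randn_trace(100000, mu=0.0, sig=1.0, tau=0.05, dt=0.001, seed=1)
```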
Monday, 10 June 2013
Interspike intervals: The fast way?
A reminder for myself more than anything
For a set of spike data distributed across time, i.e., an N x T matrix, where N is the number of cells and T is the number of time steps of size dt:
function [ints, spt] = isi(spikes, dt)
% Uses spike data to output interspike intervals and accompanying spike times.
[N, T] = size(spikes);
ints = []; % intervals
spt = []; % spike times
% Find ISIs for each cell
for n = 1:N
    spiketime = find(spikes(n,:) == 1);
    if isempty(spiketime)
        continue % this cell fired no spikes
    end
    ints = [ints, spiketime(1), diff(spiketime)];
    spt = [spt, spiketime];
end
if isempty(ints)
    ints = inf; % no spikes anywhere
    return
end
% Convert from time steps to units of time
ints = ints*dt;
spt = spt*dt;
[spt, col] = sort(spt, 'ascend'); % spike times in ascending order
ints = ints(col); % intervals sorted by spike time
end
I would very much like to know if there is a faster way, particularly if it means I could find the intervals for all cells at once. Currently it isn't particularly efficient MATLAB.
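As one possible answer to that question, here's a partially vectorised sketch in Python/NumPy (not MATLAB, and not the author's code) that does a single `nonzero` over the whole matrix rather than one `find` per cell; only the per-cell `diff` remains in a loop.

```python
import numpy as np

def isi(spikes, dt):
    """Interspike intervals and spike times for an N x T binary spike matrix,
    sorted by spike time, in units of time (steps * dt)."""
    n_idx, t_idx = np.nonzero(spikes)       # one pass over the whole matrix
    if t_idx.size == 0:
        return np.array([np.inf]), np.array([])
    # interval preceding each spike, computed per cell
    ints = np.empty(t_idx.size)
    for n in np.unique(n_idx):
        mask = n_idx == n
        times = t_idx[mask]
        ints[mask] = np.concatenate(([times[0]], np.diff(times)))
    order = np.argsort(t_idx, kind="stable")  # sort all spikes by time
    return ints[order] * dt, t_idx[order] * dt

spikes = np.zeros((2, 10))
spikes[0, [2, 5]] = 1
spikes[1, 3] = 1
ints, spt = isi(spikes, dt=0.5)
```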
Wednesday, 10 April 2013
Reading papers
I've noticed recently that the focus I have when reading articles has changed quite dramatically. As an undergraduate, I really didn't care how researchers did anything or how they validated their claims; I just wanted to see the interpretation of their results so I could paraphrase them and be done with it. Now I'll spend days poring over the methods and results sections to see if there is absolutely anything I dislike. I'll spend time trying to simulate some small part of it to get some understanding, and make notes on anything that seemed absurdly difficult or just redundant.
I do tend to notice now, though, that people's moments of genius are always in these sections, although the language has to be so plain that you really can't express the joy of making some realisation that made your method work. It seems a shame to paraphrase this now.
Sunday, 7 April 2013
Computational Neuroscience
My interest in Computational Neuroscience and AI started in some of the dissatisfaction I had with Psychology while studying for my BSc. I felt that a lot of the time there wasn't any coherent theory to actually understand the results of the experiments being done and it often seemed like an exercise in collecting more data. Sometimes there would be a 'theory' proposed, but this was just a qualitative expression that could often be twisted to fit the data. The formalism introduced in Computational Neuroscience and AI gave some concrete predictions that could actually be evaluated.
However, there have been times while doing my MSc that I feel I'm just playing with properties of matrices, and that there are people out there better suited to doing it, e.g., physicists, mathematicians, etc. Perhaps I'm just making excuses.
Saturday, 6 April 2013
Musings on Alien Races
I've noticed in sci-fi that alien races are often depicted as completely homogeneous: same religion, same language, same dress sense, same ideals. This holds unless it is a specific plot point, and even then they are at best limited to two warring factions.
One notable exception to this is the Covenant in the 'Halo' series. Admittedly this is an amalgamation of different species held together by a unifying religion, but off the top of my head it is one example where an alien 'race' isn't treated as homogeneous: there are several belief systems, several cultures, etc. In fact, another game seems to poke fun at this sci-fi trope: in 'Mass Effect 2', Mordin notes that the human race makes good test subjects due to its relative genetic diversity, implicitly explaining why all the other races are so homogeneous.
So, assuming that alien races are not so homogeneous, I wonder whether the reason we have yet to be invaded by aliens is that they have yet to hold a referendum on invasion.
Tuesday, 29 January 2013
Information Theory
Recently I've started watching David MacKay's lecture series on Information Theory on videolectures.net (http://videolectures.net/course_information_theory_pattern_recognition/). Around September, when choosing courses for the upcoming semester, I had very little idea what Information Theory actually was, and since there were other more obvious courses to take, it barely registered a second thought. Nevertheless, having had some facet of it pop up in every course I took that semester, even if just a passing mention of Shannon entropy, I thought perhaps I had missed something very worthwhile.
I've just finished lecture 4, and so far MacKay has taken me on a tour of Shannon's source coding theorem and convinced me of what a marvel it really is. I don't really have time to do all the surrounding reading (although I did read the chapter on Information Theory in Theoretical Neuroscience), but the lectures have been excellent at giving an intuition for how the basis of the field was constructed. The exercises he takes the class through connect neatly together once you start to think about them, and it becomes really satisfying when you start to know what is coming next, which happens more often than I expected.
Even though my MSc thesis topic appears to have changed (I'll now be looking at developing a Wiener kernel to predict variability in spike trains), it still looks as if this will be useful in understanding neural encoding. So I'll stick with it for now.
Monday, 21 January 2013
A second attempt
I tried starting a blog a few years ago, but was ultimately unsuccessful. I think I may have been drunk at the first attempt, and the ramblings in the many drafts saved from that time support this theory. So here begins attempt #2.