Dylan Richard Muir
Doctoral Student
Institute of Neuroinformatics (INI)
Neuroscience Center Zürich (ZNZ)
ETH | Universität | Zürich
Download PGP keys
When injections of dye are made into the upper layers of cortex, a remarkable structure of neuronal connectivity is revealed. Dye is absorbed by neurons and transported along axons and dendrites to fill entire cells. When a large population of neurons is stained in this way, slices examined in tangential section show a semi-regular array of denser staining.
This pattern is remarkable simply because it appears in any area of cortex examined, across a wide range of animals. The pattern, and the as yet unknown connectivity system that underlies it, is a general feature of cortex. Since spatially segregated areas of cortex perform radically different functions (from visual processing to movement control to higher cognition), this connectivity system could form part of a general, adaptable computation engine in cortex.
See the Daisy Project website for more information.
Another clustered system exists in cortex: the superficial patch system. This system consists of a lattice of long-range connections between excitatory neurons, sometimes spanning up to seven millimetres across the cortical surface.
In this project we are trying to determine to what extent the structure of this anatomical system (the patch system) constrains the functional system (the maps of function, as shown in the figure). We have developed techniques for automatically locating the clusters of active neurons in a functional map, and we are comparing their spatial arrangement with that of the superficial patch system.
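Purely as an illustration of the kind of analysis involved (this is not the project's actual pipeline; the map size, threshold and function names below are assumptions), locating clusters in a functional map can be sketched as finding local maxima of activity above a threshold:

    /* find_peaks.c - illustrative sketch: locate local maxima in a 2D
     * activity map that exceed a threshold. Hypothetical example only;
     * it is not the analysis code used in the project. */
    #include <stdio.h>

    #define ROWS 8
    #define COLS 8

    /* Return nonzero if map[r][c] is above 'thresh' and is not smaller
     * than any of its 8 neighbours (a crude definition of a cluster centre). */
    static int is_peak(const double map[ROWS][COLS], int r, int c, double thresh)
    {
        if (map[r][c] < thresh)
            return 0;
        for (int dr = -1; dr <= 1; dr++) {
            for (int dc = -1; dc <= 1; dc++) {
                int rr = r + dr, cc = c + dc;
                if ((dr || dc) && rr >= 0 && rr < ROWS && cc >= 0 && cc < COLS
                    && map[rr][cc] > map[r][c])
                    return 0;
            }
        }
        return 1;
    }

    int main(void)
    {
        double map[ROWS][COLS] = {{0}};
        map[2][3] = 1.0;            /* two artificial "active clusters" */
        map[6][5] = 0.8;

        for (int r = 0; r < ROWS; r++)
            for (int c = 0; c < COLS; c++)
                if (is_peak(map, r, c, 0.5))
                    printf("cluster centre at (%d, %d)\n", r, c);
        return 0;
    }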
More information can be found on the project page at the Institute website.
More information about the Spike Toolbox is available on the Spike Toolbox page.
Using the magic of del.icio.us, you can access my database of links to both journals and important researchers in the fields of developmental neuroscience, the cortical microcircuit and computational neuroscience.
Muir, D; Indiveri, G and Douglas, RJ. 2005, 'Form specifies function: Robust spike-based computation in analog VLSI without precise synaptic weights', Proceedings of the IEEE International Symposium on Circuits and Systems, May 2005, Kobe.
Download unfinalised paper
Muir, D and Sitte, J. 2003, 'Seeing cheaply: flexible vision for small devices', Proceedings of the 2nd International Symposium on Autonomous Minirobots for Research and Edutainment (AMiRE 2003), February 2003, Brisbane.
Download unfinalised paper
Muir, D and Towsey, M. 2001, 'Better FSAs Through Clustering', Technical Report FIT-TR-2002-01, Queensland University of Technology.
Download technical report
Muir, D; Towsey, M and Diederich, J. 1999, 'Clustering to enhance FSA extraction from recurrent networks', Proceedings of the Australian Machine Learning Workshop, p. 11, November 1999, Canberra.
Download workshop abstracts
Language Processing / Neural Network utilities
All software available here is written and copyright by Dylan Muir unless stated otherwise in the code headers. Please feel free to use it, but please also acknowledge the code's origins.
tlearn reverse engineering excerpt

tlearn is a neural network simulator written by Jeff Elman and others. In the course of the Language Processing Group project, I reverse-engineered and modified tlearn to perform on-line clustering while training. This modification is called tlavq (for Adaptive Vector Quantisation).

The code contained in this excerpt from the LPG project Technical Report is copyright Jeff Elman and the authors of tlearn. Please see the tlearn software page for information on re-distribution.
Several small utilities accompany the tlearn work:

MakeFSA - Constructs a deterministic FSA from a recurrent network trained in tlearn. A cluster analysis program is required to extract the locations of the FSA's states within the hidden unit space. The resulting clusters are loaded into MakeFSA; transition tables are generated for the hidden unit activation data, and used to construct a deterministic FSA. (A rough sketch of the transition-table step follows this list.)

Pattern and training-file utilities - Generate {name}.pattern files and tlearn .teach and .data files. .pattern files are used in several other applications, such as dstat and MakeFSA. The output can be written in both localist and distributed representations, and the input and output lines can be either binary or normalised together.

Error statistics - Reports the bit error of a tlearn run. It can extract the true error (not the averaged error generated by tlearn) for a distributed output and target, and will give the number of incorrect predictions over a tlearn run for a one-step-lookahead task.
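As a rough sketch of the transition-table step described above (hypothetical code, not the MakeFSA source; the state labels, alphabet size and trace data are made up for illustration), once each hidden-unit activation vector has been assigned to a cluster, the cluster indices can be treated as FSA states and tabulated against the input symbols:

    /* fsa_table.c - illustrative sketch of building a deterministic
     * transition table from cluster-labelled hidden-unit states.
     * Hypothetical example; not the MakeFSA source code. */
    #include <stdio.h>

    #define N_STATES  4   /* number of clusters found in hidden-unit space */
    #define N_SYMBOLS 2   /* size of the input alphabet */

    int main(void)
    {
        /* table[s][a] = next state when symbol 'a' is seen in state 's';
         * -1 marks "not yet observed". */
        int table[N_STATES][N_SYMBOLS];
        for (int s = 0; s < N_STATES; s++)
            for (int a = 0; a < N_SYMBOLS; a++)
                table[s][a] = -1;

        /* A made-up trace: cluster label of the hidden state before and
         * after each input symbol, as produced by the cluster analysis. */
        int from[]   = {0, 1, 2, 0, 1};
        int symbol[] = {1, 0, 1, 1, 0};
        int to[]     = {1, 2, 0, 1, 2};
        int n = sizeof(from) / sizeof(from[0]);

        for (int i = 0; i < n; i++) {
            int *cell = &table[from[i]][symbol[i]];
            if (*cell != -1 && *cell != to[i])
                printf("non-deterministic transition at state %d, symbol %d\n",
                       from[i], symbol[i]);
            *cell = to[i];
        }

        /* Print the resulting deterministic transition table. */
        for (int s = 0; s < N_STATES; s++)
            for (int a = 0; a < N_SYMBOLS; a++)
                printf("delta(%d, %d) = %d\n", s, a, table[s][a]);
        return 0;
    }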
For more information, read the Technical Report written for the Language Processing Group project.
tlavq

tlavq is an extended implementation of tlearn. tlavq performs learning on a user-defined neural network, much in the same way as tlearn, except that tlavq can also perform on-line cluster analysis of specified neurons, with the intention of using this analysis to either partially or wholly classify the specified neurons' activations into another set of nodes.

The purpose of this was to implement the on-line clustering architecture outlined in Das and Mozer [1998], but tlavq retains tlearn's inherent flexibility. The network can be configured to any architecture possible in tlearn, and clustering can be turned off entirely. With this feature disabled, the program behaves identically to tlearn.

The source code also serves as an example to aid in the further extension of tlearn.
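A minimal sketch of the kind of on-line adaptive vector quantisation tlavq performs (this is not tlavq's actual code; the vigilance threshold, learning rate and two-dimensional activations are assumptions for illustration): each incoming activation vector is assigned to the nearest prototype, which is then nudged towards it, and a new prototype is created when no existing one lies close enough.

    /* avq_sketch.c - illustrative on-line adaptive vector quantisation.
     * Hypothetical example only; not the tlavq implementation. */
    #include <stdio.h>

    #define DIM       2     /* dimensionality of the activation vectors */
    #define MAX_PROTO 16    /* maximum number of prototypes (clusters)  */

    static double proto[MAX_PROTO][DIM];
    static int    n_proto = 0;

    static double sq_dist(const double *a, const double *b)
    {
        double d = 0.0;
        for (int i = 0; i < DIM; i++)
            d += (a[i] - b[i]) * (a[i] - b[i]);
        return d;
    }

    /* Assign 'x' to the nearest prototype, moving that prototype towards 'x'
     * by 'rate'; create a new prototype if none lies within 'vigilance'. */
    static int avq_update(const double *x, double vigilance, double rate)
    {
        int best = -1;
        double best_d = vigilance * vigilance;

        for (int p = 0; p < n_proto; p++) {
            double d = sq_dist(x, proto[p]);
            if (d < best_d) { best_d = d; best = p; }
        }

        if (best < 0 && n_proto < MAX_PROTO) {       /* open a new cluster */
            best = n_proto++;
            for (int i = 0; i < DIM; i++)
                proto[best][i] = x[i];
        } else if (best >= 0) {                      /* adapt the winner    */
            for (int i = 0; i < DIM; i++)
                proto[best][i] += rate * (x[i] - proto[best][i]);
        }
        return best;
    }

    int main(void)
    {
        double samples[][DIM] = {{0.1, 0.1}, {0.9, 0.8}, {0.15, 0.05}, {0.85, 0.9}};
        for (int s = 0; s < 4; s++)
            printf("sample %d -> cluster %d\n", s, avq_update(samples[s], 0.3, 0.1));
        return 0;
    }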
Software modules

StdDefs - Provides a set of standard definitions for all modules.
2darray - Creates and destroys easy-to-handle 2D arrays in C.
gauss - Inverts square 2d matrices using the Gauss-Jordan elimination method. Matrices must be created via the 2darray module.
RunAvg - Computes a "running" average, taken over a fixed number of double-precision samples. All samples are initialised to zero. The sampling interval is handled by the user. (A minimal sketch of this kind of interface follows the list below.)
htable - A minimal hash-table implementation.
smdarray - Enables the use of arbitrarily-dimensioned sparse arrays in C. When collecting n-gram transition data for, say, a 29-class problem, the full transition array is far too large to allocate densely; this module manages sparse arrays of any dimension instead.
TokenLst - Splits whitespace-separated text into individual tokens.
TokScan - Uses the token list data structure to retrieve tokens from a file.
vector_utils - A series of functions for manipulating n-dimensional vectors.
vector_read - Provides utilities for reading and writing vectors from file streams.
cluster - A series of functions designed to represent and manipulate clusters of vectors.
corr_matrix - Implementation of the second-order (correlation matrix) distance measure.
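As a rough sketch of what an interface like RunAvg's might look like (hypothetical code, not the actual module; the struct and function names are made up), a fixed-length running average can be kept with a circular buffer whose slots all start at zero:

    /* runavg_sketch.c - illustrative running average over a fixed number of
     * double-precision samples. Hypothetical example; not the RunAvg module. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        double *samples;   /* circular buffer, initialised to zero */
        int     length;    /* fixed number of samples averaged over */
        int     next;      /* index of the slot to overwrite next   */
        double  sum;       /* running sum of the buffer contents    */
    } RunAvg;

    static RunAvg *runavg_new(int length)
    {
        RunAvg *ra = malloc(sizeof(RunAvg));
        ra->samples = calloc(length, sizeof(double));  /* all samples zero */
        ra->length = length;
        ra->next = 0;
        ra->sum = 0.0;
        return ra;
    }

    /* Add one sample (the caller decides when to sample) and return the
     * average over the last 'length' slots. */
    static double runavg_add(RunAvg *ra, double x)
    {
        ra->sum += x - ra->samples[ra->next];
        ra->samples[ra->next] = x;
        ra->next = (ra->next + 1) % ra->length;
        return ra->sum / ra->length;
    }

    int main(void)
    {
        RunAvg *ra = runavg_new(4);
        double data[] = {1.0, 2.0, 3.0, 4.0, 5.0};
        for (int i = 0; i < 5; i++)
            printf("avg after %.1f = %.2f\n", data[i], runavg_add(ra, data[i]));
        free(ra->samples);
        free(ra);
        return 0;
    }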
Dylan Muir is a doctoral student at the Institute of Neuroinformatics (INI), ETH Zürich and Universität Zürich.
Dylan has a Bachelor of Engineering (Electronics) (First Class Honours) and a Bachelor of Information Technology (with Distinction) from the Queensland University of Technology (2002).
Dylan is interested in cortical development, spiking neural networks, and speaking in the third person.
To obtain a full resume, please send me an email, or phone me.
This page designed and written by Dylan Muir using Notepad. Page last updated on the 24th August, 2007.
Link to: http://www.ini.uzh.ch/~dylan/