Intelligent Networks Laboratory

AI and graph-based systems.

Past Projects.

Neuronal Coding.

We are interested in neural coding: the transformation of input spike trains into output spike trains, which is the fundamental operation of neural computation. We have been using techniques from the field of nonlinear dynamics to examine the physiological and mathematical mechanisms underlying neural network dynamics, in combined experimental and theoretical studies. These techniques have been successful in clarifying the coding of stationary, pacemaker neural inputs. We have also explored responses to presynaptic transients and frequency-modulated input, and used this information to develop a theory of the physiological and formal aspects of neuron dynamics.

Our formal models had two purposes: to reduce a neuron to the essential components which produce its complex behaviors and to bridge the gap between the phenomenological description of such behavior and formal computational properties.

Our work focused on experiments on and simulations of a prototypical living inhibitory synapse: the crayfish slowly adapting stretch receptor organ (SAO). The preparation includes the recognized prototype of an inhibitory synapse, and results obtained from it should be considered valid for any such synapse, at least as a working hypothesis.

Nonlinear Dynamics.

We use a variety of techniques from a wide range of fields in our work. The fundamental mathematical concepts come from the field of nonlinear dynamics. Briefly, dynamical systems are distinguished from non-dynamical ones in that they have an internal state: their outputs are a function not merely of their input, but of both their input and their state. Nonlinear systems are ones in which the responses to two different inputs do not necessarily provide enough information to predict the response to the sum of those inputs.

Another way to define nonlinear systems is to say that they are not linear. In a linear system, if the response to input $a$ is $F(a)$ and the response to input $b$ is $F(b)$, then the response to their sum, $a+b$, is merely $F(a+b) = F(a) + F(b)$. Similarly, if one were to scale an input by some constant $k$ (double it, triple it, etc.), the response would be $F(ka) = kF(a)$.
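As a concrete illustration, the short sketch below checks the two linearity conditions (additivity and homogeneity) for a trivially linear system and for a toy thresholded, saturating "neuron". Both models are hypothetical stand-ins for illustration only, not models used in our work.

```python
import numpy as np

def linear_system(x):
    # A linear system: pure scaling, so superposition holds exactly.
    return 3.0 * x

def toy_neuron(x):
    # A hypothetical nonlinear response: thresholded and saturating.
    return np.tanh(np.maximum(x - 0.5, 0.0))

a, b, k = 0.6, 0.8, 2.0
for F in (linear_system, toy_neuron):
    additive = np.isclose(F(a + b), F(a) + F(b))   # F(a+b) = F(a)+F(b)?
    homogeneous = np.isclose(F(k * a), k * F(a))   # F(ka) = kF(a)?
    print(F.__name__, "additive:", bool(additive),
          "homogeneous:", bool(homogeneous))
```

For the linear system both tests pass; for the toy neuron both fail, so its responses to $a$ and $b$ alone do not predict its response to $a+b$.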

These conditions don't hold for nonlinear systems, which neurons are generally agreed to be. Neurons' dynamics have profound implications regarding their behavior (how they respond to various inputs, which is something we can test) and, presumably, their computational characteristics (how they transform information, which is something about which we know little).

Transients.

Changes are ubiquitous in nature; survival depends on our ability to cope with them. These changes must necessarily be reflected in changing spike trains of the neurons which encode them, if they are to have any effect on behavior. One simple type of changing discharge is that associated with unidirectional muscle movement, which may produce accelerating or decelerating transients. Despite the importance of neurons' transient responses, until recently they were understood only superficially.

Our work on transients has been concerned with presynaptic discharges whose interspike interval timings have patterns with trends -- either accelerating or decelerating monotonically. These transients are among the simplest nonpacemaker input regimes allowing examination of nonstationary synaptic coding, and are also reasonable models for neural input in sensorimotor systems during motor activity. We wish to understand how neural responses to changing input compare to those for stationary input, and to propose and evaluate a general model for this.
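For concreteness, such a transient can be generated as a spike train whose interspike intervals ramp monotonically. The sketch below uses a linear ramp purely as an illustrative assumption; real presynaptic drives need not be linear.

```python
import numpy as np

def ramped_train(isi_start, isi_end, n_spikes):
    # Spike times whose interspike intervals change linearly from
    # isi_start to isi_end (seconds): shrinking intervals give an
    # accelerating transient, growing ones a decelerating transient.
    isis = np.linspace(isi_start, isi_end, n_spikes - 1)
    return np.concatenate(([0.0], np.cumsum(isis)))

accelerating = ramped_train(0.10, 0.02, 20)
decelerating = ramped_train(0.02, 0.10, 20)
print(np.diff(accelerating)[:3])   # first intervals, decreasing
print(np.diff(decelerating)[:3])   # first intervals, increasing
```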

Learning.

Learning, in the context of individual neurons, is represented in both physiological and Artificial Neural Network (ANN) models by change of synaptic strength. Many of the changes in neural responses as a result of the learning process may be explainable by simple consideration of synaptic strength alteration. For instance, sensitization and habituation of vertebrate and invertebrate neurons are usually interpreted in terms of increases and decreases of synaptic strength. However, there are still many unexplained cases. In the mammalian hippocampus, neurons do not receive one stimulus but a number of different stimuli from a variety of different locations. So simply looking at the increment and decrement of synaptic strength is insufficient to explain what has been modified by learning.

Neural responses include input and output rates and their patterns. By "pattern", we mean the ordered sequence of interspike intervals and cross-intervals.
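As a minimal sketch of these quantities, the following code computes a train's interspike intervals and, for each postsynaptic spike, the time since the most recent presynaptic spike. The cross-interval definition used here is one common convention; others exist.

```python
import numpy as np

def interspike_intervals(spikes):
    # Ordered sequence of intervals between successive spikes.
    return np.diff(spikes)

def cross_intervals(pre, post):
    # For each postsynaptic spike, time since the most recent
    # presynaptic spike (one common convention; others exist).
    idx = np.searchsorted(pre, post, side="right") - 1
    valid = idx >= 0            # keep post spikes preceded by a pre spike
    return post[valid] - pre[idx[valid]]

pre = np.array([0.00, 0.10, 0.20, 0.30])
post = np.array([0.05, 0.17, 0.33])
print(interspike_intervals(post))   # [0.12 0.16]
print(cross_intervals(pre, post))   # [0.05 0.07 0.03]
```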

To study the effect of learning, we use an accepted physiological model: Long-Term Potentiation (LTP), first observed in excitatory synapses of rabbit hippocampal neurons. When a high-frequency afferent tetanus is applied to these neurons, postsynaptic potentials are enhanced. Beyond the magnitude of this postsynaptic potential enhancement, LTP also involves its persistent maintenance for minutes or even hours. LTP has also been recorded in invertebrate neurons, for instance at the crayfish opener-excitor neuromuscular synapse.

Similar effects have also been observed at the inhibitory synapse of the Mauthner cell of the goldfish, Carassius auratus. Thus, though originally investigated in excitatory preparations, there is no a priori reason to expect LTP to be an exclusively excitatory phenomenon. One might also consider that cerebellar Purkinje cells, involved in motor learning, are also connected through inhibitory synapses.
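A deliberately simple toy rule captures the two defining properties of LTP noted above: induction by a high-frequency tetanus and slow persistence of the enhancement. Every name and constant below is an illustrative assumption, not a fitted physiological model.

```python
import numpy as np

def ltp_trace(rates, dt=1.0, w0=1.0, theta=50.0, gain=0.02, tau=600.0):
    # Toy rule: afferent rates above a tetanus threshold theta (Hz)
    # potentiate the weight; the enhancement then decays slowly
    # (time constant tau, in seconds), mimicking LTP's persistence.
    w, trace = w0, []
    for r in rates:
        if r > theta:
            w += gain * (r - theta) * dt       # induction by high-rate input
        w = w0 + (w - w0) * np.exp(-dt / tau)  # slow decay of enhancement
        trace.append(w)
    return np.array(trace)

# one second of 100 Hz tetanus, then ten minutes of silence
rates = np.concatenate(([100.0], np.zeros(600)))
trace = ltp_trace(rates)
print(trace[0], trace[60], trace[600])  # enhancement persists for minutes
```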

LTP aftereffects have been primarily studied in terms of induction as a function of stimulus rate. When a high rate stimulus is applied, LTP duration and PSP strength increase are recorded as the modification of the neural response as a result of learning. However, we can also consider LTP in the context of presynaptic spike temporal pattern, rather than just rate. There is evidence that this is a better description of LTP dependence on the presynaptic discharge than is presynaptic rate. Tsukada and collaborators showed that the increase in PSP size recorded strongly depends on the correlations among presynaptic interspike intervals.
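One elementary statistic for such dependence is the serial correlation coefficient of the interspike-interval sequence, sketched below; this is a standard descriptive measure, not a reproduction of Tsukada's analysis.

```python
import numpy as np

def serial_correlation(isis, lag=1):
    # Correlation between each interval and the one `lag` later.
    return np.corrcoef(isis[:-lag], isis[lag:])[0, 1]

rng = np.random.default_rng(1)
independent = rng.exponential(0.1, 10_000)   # renewal train: ~0 correlation
smoothed = np.convolve(independent, [0.5, 0.5])[1:-1]  # adjacent averaging
print(serial_correlation(independent))   # near 0
print(serial_correlation(smoothed))      # near 0.5: correlated intervals
```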

Our laboratory considers not only presynaptic pattern, but postsynaptic pattern, too, to describe LTP induction and its effects on neural behavior -- the patterns of spikes a neuron produces in response to identical presynaptic spike trains before and after "learning".

Information and Neural Communication.

Information theory provides general techniques for characterizing the content of a signal or the capacity of some channel. As such, it has been used with increasing regularity as a tool for investigating neural coding. By comparing neuron output with sensory stimuli or presynaptic trains, researchers have drawn conclusions about whether particular preparations use temporal or rate coding, what their information (channel) capacity is, and so on.

What often has been lost in this work, however, is that information theory does not reveal the nature of the code used by a system, but only how efficient a carrier of information it is. In other words, it tells you how close a code is to being "optimally compressed". Moreover, one can only learn about a channel's characteristics if one knows what the channel is: for example, one must be able to distinguish the encoded data transmission from channel noise. Because of this, conclusions about neural coding must be made carefully:

  • Presynaptic spike trains are sometimes used as the reference in determining postsynaptic information capacity. This treats a neuron as a pure encoder. However, neurons are dynamical systems, and their internal state certainly affects their discharge. To the extent that a neuron's internal state represents information to be transmitted to other cells, these approaches underestimate capacity.
  • The idea that neurons are noisy and unreliable has great currency in the literature, based on observations that spike trains look "messy" and that multiple experiments with identical stimuli produce different discharges. These are weak definitions of noise: "variation of a measured quantity from some expected value" in the first case and "whatever is not of interest" in the second. Use of spike train variability as a noise floor in channel capacity estimation only establishes a lower bound on information content.
  • An apparently low information rate (in bits per spike) might be taken as evidence for rate coding. This presupposes that one knows what data is being coded, arguably only feasible for sensory cells. In any event, this at best establishes that the neural code is not optimally compressed, which is not the same thing as determining that there is no "extra content" in the code (a sketch of how such estimates depend on one's assumptions appears after this list).
  • In artificial systems, there are many reasons for not using optimally compact codes. These include: convenience, compatibility with other systems, encoding/decoding complexity, and error correction. The observation that a code contains redundancy is not the same as determining that the redundant information is not used by a system. For example, one could analyze the information content of an audio compact disc: the fact that each bit of data carries less than one bit of information does not mean that some of the bits go unused (nor does it tell one what the code is or what is being encoded).
  • It is not unreasonable that similar considerations may apply to biological systems. Neuron construction and neural architectures have definite (though unknown) limits to their processing power. If we assume that neuron operation contains significant "noise" uncorrelated with information to be transmitted, then this would be expected to inject errors into the data stream. This argues for codes with significant, and useful, redundancy.
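To make these cautions concrete, the sketch below follows the outline of a "direct method"-style estimate: discretize a spike train into bins, group bins into words, and compute the empirical word entropy. Note that the number obtained depends on the assumed discretization, and that it is an entropy of the output (an upper bound on transmitted information), not a description of what is encoded. Real use requires bias corrections and far more data; this is only an outline.

```python
import numpy as np
from collections import Counter

def word_entropy(spikes, t_max, dt, word_len):
    # Discretize the train into dt-wide bins (1 = at least one spike),
    # group bins into words, and compute the empirical word entropy.
    bins = np.zeros(int(t_max / dt), dtype=int)
    bins[(np.asarray(spikes) / dt).astype(int)] = 1
    n = (len(bins) // word_len) * word_len
    words = map(tuple, bins[:n].reshape(-1, word_len))
    p = np.array(list(Counter(words).values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))   # bits per word

rng = np.random.default_rng(0)
spikes = np.sort(rng.uniform(0, 100.0, 2_000))   # toy stand-in spike train
for dt in (0.01, 0.002):
    print(dt, word_entropy(spikes, 100.0, dt, 8))  # estimate depends on dt
```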

We have begun preliminary work testing the feasibility of a neural code containing error correction characteristics which requires greater spike timing precision than might be necessary to simply transmit a given amount of information. So far, we have been able to show that a physiological model of the recognized prototype of an inhibitory synapse can exhibit error correction if those errors occur in the context of overall high timing precision. We are currently working to extend these results to show that, for any nonlinear neural oscillator, coding redundancy (in the form of high temporal precision) can support error correcting codes.
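The flavor of this idea can be conveyed with a toy timing code that is purely illustrative and makes no physiological claims: message content lives on a coarse time lattice, so any timing precision finer than the lattice spacing is redundant, and a decoder can exploit that redundancy to correct small timing errors.

```python
import numpy as np

GRID = 0.010   # lattice spacing (s): only multiples of GRID carry content

def encode(symbols):
    # Symbols are the number of lattice steps between successive spikes;
    # spike times therefore land exactly on the coarse lattice.
    return np.cumsum(symbols) * GRID

def decode(times):
    # Snap each received time to the nearest lattice point; timing
    # errors smaller than GRID/2 are corrected exactly.
    steps = np.round(times / GRID)
    return np.diff(np.concatenate(([0.0], steps))).astype(int)

rng = np.random.default_rng(2)
msg = rng.integers(1, 5, 20)
received = encode(msg) + rng.normal(0.0, 0.001, msg.size)  # jitter << GRID/2
print(np.array_equal(decode(received), msg))   # True: jitter corrected
```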

LOGOS.

There is no greater challenge to the biomedical information processing community than applying computer technology to research and collaboration in neuroscience: neuroinformatics. Quantities of data range up to multiple-petabyte levels. The information itself is extraordinarily diverse, including scalar, vector (from 1 to 4 dimensions), volumetric (up to 4-dimensional spatio-temporal), topological, and symbolic, structured domain knowledge. Spatial scales range from Angstroms to meters, while temporal scales range from microseconds to decades. Base information varies greatly from individual to individual, while the results computed from these data, and the domain knowledge derived from such computation, can change with improvements in algorithms, data collection techniques, or the underlying scientific methods. Coupled to this data complexity are the peculiarities of human information absorption, processing, and interaction.

Our LOGOS project was a system for storing, sharing, processing, visualizing, and reasoning about neuroscience information. This system was envisioned as a "researcher's associate", facilitating collaboration among researchers, serving as an interface between researchers and the data they collect and analyze, and taking care of all of the data and knowledge management associated with the complete scientific information life cycle, from experiment design, simulation, collection, processing, analysis, visualization, inference, generalization, and publication to review of previous results and the beginning of a new cycle.

Scientific Data/Knowledge Management.

The heart of LOGOS was a data/knowledge-based system, described in a white paper, an older publication (Stiber et al., 1997), and a student project (Goebel & Mager, 2000).

Simulation.

Two types of custom simulation environments have been developed within our lab:

NeuronPC/XNeuron/CNeuron/MATLAB_Neuron
A family of simulators targeted at detailed neuromotor simulation, which include a number of nonlinear dynamics analysis tools. These simulators include versions with graphical user interfaces written in C for PCs and Unix workstations, as well as a GUI simulator written in C and MATLAB. There is also a C version with no user interface, meant to be incorporated into shell scripts for automating large numbers of simulations.

These simulators implemented models of the crayfish slowly adapting stretch receptor organ (SAO) and its associated prototypical inhibitory synapse, inhibitory long-term potentiation (LTP) rules for synapse modification, and muscle fibers.

CRESSA
A leaky integrator model which incorporated inputs with cluster process statistics. It was used to investigate the effects of presynaptic correlation among multiple synapses on postsynaptic behavior.
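CRESSA's actual equations are not reproduced here, but the general scheme, a leaky integrator driven by input spikes, can be sketched in a few lines. Parameter values below are illustrative, and a uniform random train stands in for a true cluster process.

```python
import numpy as np

def leaky_integrator(input_times, dt=1e-4, tau=0.020,
                     w=0.15, theta=1.0, t_max=1.0):
    # Membrane variable decays toward rest with time constant tau,
    # jumps by w at each input spike, and fires (then resets) at theta.
    input_steps = set((np.asarray(input_times) / dt).astype(int))
    v, out = 0.0, []
    for step in range(int(t_max / dt)):
        v *= np.exp(-dt / tau)           # leak
        if step in input_steps:
            v += w                       # synaptic input
        if v >= theta:
            out.append(step * dt)        # output spike
            v = 0.0                      # reset
    return np.array(out)

rng = np.random.default_rng(3)
inputs = np.sort(rng.uniform(0, 1.0, 400))   # uniform stand-in for
print(len(leaky_integrator(inputs)))         # a true cluster process
```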