Neural network modeling


We have investigated the applications of dynamic recurrent neural networks whose connectivity can be derived from examples of the input-output behavior [1].  The most efficient training method employs back-propagated error correction to derive the connections between units having continuous activation functions (Fig. 1).  A reduced network performing a sample-and-hold task (Fig. 2) illustrates the dynamic changes in activation states [left and top] as well as the connectivity matrix between units.  A neural network model of the primate motor system captures data obtained during performance of alternating wrist movements and elucidates underlying connectivity [2].  Networks with units having continuous activation states can be transformed into networks of integrate-and-fire spiking units with the same connection weights and performing the same functions [3].
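To illustrate this training approach, here is a minimal sketch of a fully recurrent network of continuous sigmoid units trained by back-propagation through time to reproduce a toy target trace on one output unit. This is not the published network of [1]; the network size, learning rate, and target signal are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: a fully recurrent network of continuous sigmoid units,
# trained by back-propagation through time (BPTT) so that unit 0 reproduces
# a toy sinusoidal target.  All sizes and constants are illustrative.
rng = np.random.default_rng(0)
n, T, lr = 8, 30, 0.05                     # units, time steps, learning rate
W = rng.normal(0, 0.3, (n, n))             # recurrent weight matrix (trained)
target = 0.5 + 0.4 * np.sin(np.linspace(0, 2 * np.pi, T))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def run(W):
    """Forward dynamics: x[t+1] = sigmoid(W @ x[t]) from a fixed initial state."""
    xs = [np.full(n, 0.5)]
    for _ in range(T):
        xs.append(sigmoid(W @ xs[-1]))
    return xs

def mse(xs):
    """Mean squared error of the output unit (unit 0) against the target."""
    return float(np.mean([(xs[t + 1][0] - target[t]) ** 2 for t in range(T)]))

err_before = mse(run(W))
for epoch in range(300):
    xs = run(W)
    dW = np.zeros_like(W)
    delta = np.zeros(n)
    # Backward pass: propagate the output error backward through time.
    for t in range(T, 0, -1):
        err = np.zeros(n)
        err[0] = xs[t][0] - target[t - 1]          # error only at the output unit
        delta = (err + W.T @ delta) * xs[t] * (1 - xs[t])
        dW += np.outer(delta, xs[t - 1])
    W -= lr * dW                                   # gradient-descent weight update
err_after = mse(run(W))
print(f"output MSE before {err_before:.4f} -> after {err_after:.4f}")
```

The same forward/backward structure extends directly to networks with external inputs, as in the sample-and-hold tasks below.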


We have modeled the neural mechanisms underlying changes in cortical connectivity induced by spike-triggered stimulation [4] (Fig. 3).  This model captures the experimental data and also makes predictions about parameters that optimize plasticity.  One testable prediction is that plasticity is induced more rapidly when correlations between neurons are broader, such as during sleep.


A neural network model using integrate-and-fire spiking units and incorporating spike-timing dependent plasticity [5] has replicated the results of four different conditioning protocols producing synaptic plasticity (Fig. 4).  The model elucidates the underlying mechanisms and predicts outcomes of novel conditioning protocols.
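The two key ingredients of such a model can be sketched in a few lines. The following is a minimal illustration, not the published model [5]: a single leaky integrate-and-fire neuron driven by a Poisson input through one plastic synapse, with pair-based spike-timing dependent plasticity implemented via exponential traces. All constants are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the published model [5]): one leaky integrate-and-fire
# neuron driven by a Poisson input through a plastic synapse, with pair-based
# spike-timing-dependent plasticity (STDP).  All constants are illustrative.
rng = np.random.default_rng(1)
dt, T = 0.001, 5.0                      # 1 ms steps, 5 s of simulated time
tau_m, v_thresh, v_reset = 0.02, 1.0, 0.0
tau_stdp, a_plus, a_minus = 0.02, 0.01, 0.012
w, w_max = 0.8, 2.0                     # synaptic weight and its upper bound

v = 0.0                                 # membrane potential
x_pre = x_post = 0.0                    # exponentially decaying STDP traces
for _ in range(int(T / dt)):
    pre_spike = rng.random() < 50 * dt  # ~50 Hz Poisson input spikes
    # Membrane dynamics: passive leak plus a synaptic kick on each input spike.
    v += dt * (-v / tau_m) + (w if pre_spike else 0.0)
    post_spike = v >= v_thresh
    if post_spike:
        v = v_reset
    # Decay both traces, then apply the pair-based STDP updates.
    x_pre *= np.exp(-dt / tau_stdp)
    x_post *= np.exp(-dt / tau_stdp)
    if pre_spike:
        x_pre += 1.0
        w = max(0.0, w - a_minus * x_post)   # post-before-pre: depression
    if post_spike:
        x_post += 1.0
        w = min(w_max, w + a_plus * x_pre)   # pre-before-post: potentiation
print(f"synaptic weight after {T:.0f} s: {w:.3f}")
```

A conditioning protocol like those in Fig. 4 then amounts to scheduling extra stimulus-evoked spikes in populations of such units while the STDP rule is active.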


We are currently developing integrate-and-fire networks for incorporation into closed-loop brain-computer interfaces (Fig. 5).  These will allow spikes recorded from multiple neurons to drive stimulation at multiple cortical sites, mediated by a wide range of artificial neural networks.





  1. Fetz EE, Dynamic recurrent neural network models of sensorimotor behavior, in THE NEUROBIOLOGY OF NEURAL NETWORKS, Daniel Gardner, Ed. MIT Press, Cambridge MA, pp 165-190, 1993. [PDF]
  2. Maier MA, Shupe LE, and Fetz EE, Dynamic neural network models of the premotoneuronal circuitry controlling wrist movements in primates, Journal of Computational Neuroscience 19: 125-146, 2005. [PDF]
  3. Maier MA, Shupe LE, and Fetz EE, Recurrent neural networks of integrate-and-fire cells simulating short-term memory and wrist movement tasks derived from continuous dynamic networks, Journal of Physiology (Paris) 97: 601-612, 2003. [PDF]
  4. Lajoie G, Krouchev NI, Kalaska JF, Fairhall A, Fetz EE, Correlation-based model of artificially induced plasticity in motor cortex by a bidirectional brain-computer interface, PLoS Computational Biology 13(2): e1005343, 2017. [PDF]
  5. Shupe LE, Fetz EE, An integrate-and-fire spiking neural network model simulating artificially induced cortical plasticity, eNeuro 0333-20, 2021. [PDF] [Source Code]

Figure 1. Dynamic recurrent network; weights between units are derived by back-propagated error correction [1].

Figure 2. Neural network performing short-term memory task.

Figure 3. Network simulating cortical plasticity induced by spike-triggered stimulation [4].

Figure 4. Integrate-and-fire model simulates a spike-triggered stimulation protocol [5].

Video 1. Progression of synaptic plasticity during conditioning with spike-triggered stimulation, as in Figure 4 above.


The strength of the synaptic connections from units in Column A to units in Column B is assessed by the size of the evoked potential (EP) produced in Column B by a stimulus affecting all units in Column A.  The simulation begins with 500 seconds of pre-conditioning, during which synaptic plasticity is on and the network settles from its initial random connection strengths.  Next are 500 seconds of Testing, in which the network runs with plasticity turned off, allowing a mean pre-conditioning EP to be calculated.  For the next 500 seconds, Conditioning and Plasticity are turned on.  Conditioning consists of stimulation applied to Column B triggered from each spike of the first unit in Column A.  During the final Testing period both conditioning and plasticity are off, to assess post-conditioning EPs.
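The four phases can be summarized as a simple schedule of flags gating plasticity and conditioning. The phase boundaries below are from the protocol description; the flag-lookup function itself is an assumption about how a simulation loop might be organized, not the published implementation.

```python
# Illustrative sketch of the four 500 s phases of the protocol.  The phase
# boundaries follow the text; the lookup function is an assumed convenience
# for gating plasticity and conditioning inside a simulation loop.
PHASES = [
    ("pre-conditioning", 0, 500, dict(plasticity=True, conditioning=False)),
    ("testing (pre)", 500, 1000, dict(plasticity=False, conditioning=False)),
    ("conditioning", 1000, 1500, dict(plasticity=True, conditioning=True)),
    ("testing (post)", 1500, 2000, dict(plasticity=False, conditioning=False)),
]

def phase_flags(t):
    """Return (phase name, flags) for simulation time t in seconds."""
    for name, start, end, flags in PHASES:
        if start <= t < end:
            return name, flags
    raise ValueError(f"time {t} s is outside the 2000 s protocol")

name, flags = phase_flags(1200.0)
print(name, flags)  # conditioning {'plasticity': True, 'conditioning': True}
```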


Each video frame shows 10 EPs and the mean of these EPs as the network simulation progresses. When the testing phases are finished, the final EP average for the entire testing block is displayed.  The final mean percent increase is calculated as:


MPI = 100 × (post-conditioning EP height - pre-conditioning EP height) / pre-conditioning EP height
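In code, the calculation is a one-liner; the EP heights below are illustrative values, not results from the model.

```python
def mean_percent_increase(pre_ep, post_ep):
    """Mean percent increase in EP height: 100 * (post - pre) / pre."""
    return 100.0 * (post_ep - pre_ep) / pre_ep

# Illustrative values only: an EP growing from 0.8 to 1.0 is a 25% increase.
print(mean_percent_increase(0.8, 1.0))
```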

Figure 5. Closed loop brain-computer interface via artificial neural networks.

Network Animation Videos


Video 2. Sample-and-hold with one output [sah1.avi]. This video animates an artificial neural network performing a memory task to remember the value of input unit A when a sample signal occurs on unit S. The remembered value is continually output on unit O until the next sample signal occurs.  This small network (also shown in Fig. 2 above) was derived from a larger network trained on the task using recurrent back-propagation.  The larger network was reduced by training with weight decay, eliminating unused units, and combining similarly performing units until just four hidden units remained.  The reduced network is shown at right, with the brightness of unit symbols proportional to instantaneous activity.  The network diagram shows the largest connections between units.  The time course of unit activity is shown scrolling at left.  Hidden units perform identifiable functions: excitatory unit SA outputs the value of A at the sample time (by clipping the sum of the values of units S and A).  This transient signal of the value to be remembered is fed to the two excitatory memory units M1 and M2, whose reciprocal connections maintain the value and drive the output O.  Inhibitory unit SM subtracts the currently remembered value at the time of the sample pulse.  For further details see [Fetz and Shupe 2003].
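The computation the reduced network performs can be restated functionally in a few lines. This is a plain-Python restatement of the sample-and-hold operation, not the trained network itself; the input traces are illustrative.

```python
# Functional sketch of the sample-and-hold computation: whenever the sample
# signal S is on, the output latches the current value of input A and holds
# it until the next sample pulse.  (The trained network realizes this with
# the SA, M1, M2, and SM units described above.)
def sample_and_hold(a_trace, s_trace, held=0.0):
    out = []
    for a, s in zip(a_trace, s_trace):
        if s:               # sample pulse: latch the current input value
            held = a
        out.append(held)    # output holds the latched value between pulses
    return out

# Illustrative traces: input A varies; sample pulses at steps 0 and 3.
a = [0.2, 0.7, 0.7, 0.1, 0.9, 0.9]
s = [1,   0,   0,   1,   0,   0]
print(sample_and_hold(a, s))  # → [0.2, 0.2, 0.2, 0.1, 0.1, 0.1]
```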

Video 3. 2D sample-and-hold network with two sine wave inputs and two outputs [sah2.avi].  The two values can be represented by the X and Y coordinates of a point on a plane [lower left].  The values of the sinusoidal time-varying inputs [blue square] that occur at the time of the sample signal [green cross] are maintained by the two outputs [red square].  This network allowed unit self-connections, which eliminated the need for recurrent activity in two units to maintain values [like the M1 and M2 units in Video 2].  This resulted in two different methods for performing the hold operation.  The method used to reduce the size of the network from the original trained version separated the larger connections into two sub-networks, one each for the X and Y signals.



Video 4. Sample-and-hold with two outputs and pulse inputs [sah3.avi].  This network is identical to that in Video 3 except that the blue input signals are now transient pulses whose values are taken from the original sine waves.  This demonstrates that the network can still perform the sample-and-hold function for novel signals on which it was not trained.

Video 5. Sample-and-hold with SA unit lesioned [sah3-lesion.avi].  This network is identical to that in Video 4 except that the connections from the first hidden unit have been lesioned.  This results in a loss of function for unit X, but leaves unit Y unaffected.  The black trace in the scrolling activity indicates the activity that would have occurred with an intact network.


Video 6. Cue task [cue.avi].  This video demonstrates a neural network performing a 2D instructed delay task.  It remembers a Left or Right Cue, and when a delayed Go signal [green] occurs it produces a corresponding Left or Right output.  Hidden units are excitatory [H+] or inhibitory [H-].  This network learned the task by remembering the Left Cue in a subset of the hidden units, and then activating output L or R based on this remembered value combined with the Go signal.  As a result, output R became a default output if the Go signal was given in the absence of either Cue.  For further details, see [Fetz 1993].


Video 7. Cue task with lesion [cue-lesion.avi]. This network is identical to that in Video 6 except that the connections from the first hidden unit have been lesioned.  This results in a loss of function for unit L, but leaves unit R unaffected. The black trace in the scrolling activity indicates the activity that would have occurred with an intact network.