Reverse engineering the brain’s control mechanisms

It would be great to understand how the brain works, yet this goal remains distant and elusive. We have developed computational theories of sensorimotor function at the single-neuron level [1] as well as at the system level [2], which are now mainstream. We have also performed a range of psychophysical experiments testing the predictions and helping refine the theories [3]. While such developments remain a significant part of our agenda, we do not feel that current trends in sensorimotor control are leading toward an algorithmic understanding of the kind that could enable artificial systems to match the brain's performance.

We are therefore initiating a different type of experiment and data analysis, designed not to test hypotheses about isolated features of the brain's controller, but to reveal directly what that controller is. Instead of following the tradition of studying many repetitions of a simple movement, we will study complex movements executed under a wide variety of task conditions as well as under random perturbations. We will then use machine learning and inverse optimal control [4] to discover the structure in the data and infer how humans would have acted in any possible situation. The ability to make such inferences is equivalent to having an automatic controller. The low dimensionality typically observed in motor behavior [5], together with the regularization afforded by inverse optimal control, should help mitigate the curse of dimensionality.

The experiments currently planned include recording hand kinematics and EMG from a large number of channels, and recording full-body kinematics and ground reaction forces during walking while subjects are pushed unexpectedly. We are also adapting methods from computer graphics, which has a tradition of building elaborate controllers from motion capture data.
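To make the inverse-optimal-control step above concrete, the following is a minimal Python sketch, not our actual analysis pipeline. It assumes the observed behavior is approximately optimal for known linear dynamics under a quadratic cost with an unknown state-cost matrix Q and a known effort cost R, and recovers Q by finding the cost whose implied LQR feedback law best predicts the observed actions. The dynamics, cost parameterization, and data below are hypothetical stand-ins for illustration only.

```python
# Minimal inverse-optimal-control sketch (illustrative; assumes an LQR-style model,
# not our actual method or data).
import numpy as np
from scipy.linalg import solve_discrete_are
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Assumed known (or separately identified) dynamics: a 1-D point mass with damping.
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
R = np.array([[0.01]])  # known effort cost

def lqr_gain(q_diag):
    """Feedback gain K of the infinite-horizon LQR for cost diag(q_diag), R."""
    Q = np.diag(q_diag)
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# --- Simulate "observed" behavior from a hidden cost (stand-in for real data) ---
q_true = np.array([4.0, 0.5])
K_true = lqr_gain(q_true)
X, U = [], []
x = np.array([1.0, 0.0])
for _ in range(200):
    u = -K_true @ x + 0.01 * rng.standard_normal(1)  # noisy near-optimal actions
    X.append(x)
    U.append(u)
    x = A @ x + B @ u + 0.01 * rng.standard_normal(2)
X, U = np.array(X), np.array(U)

# --- Inverse optimal control: find the cost that best rationalizes the actions ---
def action_prediction_error(log_q):
    K = lqr_gain(np.exp(log_q))          # log-parameterization keeps Q positive
    return np.mean((U + X @ K.T) ** 2)   # residual between observed and predicted u

fit = minimize(action_prediction_error, x0=np.zeros(2), method="Nelder-Mead")
print("recovered Q diagonal:", np.exp(fit.x))  # should approximately recover q_true
```

In the real setting the dynamics would come from a biomechanical model, the cost would be parameterized by richer behavioral features, and the fit would be regularized over many tasks and perturbations, but the principle is the same: infer the cost, and hence the controller, that rationalizes the observed behavior.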