Affordable hardware that effectively enables desktop supercomputing has made more ambitious neural simulations, driven by more complex software, feasible. This opportunity comes with costs, however: long learning curves to exploit the performance of idiosyncratic, architecturally heterogeneous hardware, and decreasing confidence in the quality of simulation results. We are developing a new neural simulation and software/data provenance framework that reduces the difficulty of taking full advantage of GPU computing and increases investigator confidence that simulation results are valid. You can learn more about this framework on its GitHub site (aka http://tinyurl.com/BrainGrid).
The BrainGrid+Workbench framework supports investigations that require high-performance parallel simulation. It decreases the time needed to write parallel software while simultaneously providing mechanisms for validating results against a serial implementation. Moreover, it tracks the connections among software and data, helping researchers understand the interactions among mathematical models, algorithms, software implementations, simulation configurations/parameters, and simulation results. The net effect is greater confidence in the quality of one's results, achieved by increasing the visibility of changes that may render those results invalid.
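The serial-vs-parallel validation idea can be sketched as follows. This is an illustrative example, not BrainGrid's actual API: the function names and tolerance are hypothetical. The key point it demonstrates is that parallel (e.g., GPU) reductions reorder floating-point operations, so results are compared against the serial reference within a tolerance rather than for exact equality:

```python
import math
import random

def serial_sum(values):
    """Reference serial reduction: accumulate strictly in index order."""
    total = 0.0
    for v in values:
        total += v
    return total

def pairwise_sum(values):
    """Stand-in for a parallel reduction: pairwise (tree) summation
    reorders the floating-point additions, so the result may differ
    from the serial reference in the low-order bits."""
    vals = list(values)
    while len(vals) > 1:
        reduced = [vals[i] + vals[i + 1] for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:          # carry an unpaired element forward
            reduced.append(vals[-1])
        vals = reduced
    return vals[0]

def validate(serial_result, parallel_result, rel_tol=1e-9):
    """Accept the parallel result if it agrees with the serial
    reference within a relative tolerance (hypothetical threshold)."""
    return math.isclose(serial_result, parallel_result, rel_tol=rel_tol)

# Compare the two reduction orders on the same simulated state vector.
rng = random.Random(42)
state = [rng.gauss(0.0, 1.0) for _ in range(10_000)]
print(validate(serial_sum(state), pairwise_sum(state)))
```

A tolerance-based comparison like this is the usual design choice when checking parallel numerical code against a serial baseline, since bitwise-identical results cannot be expected once the order of floating-point operations changes.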
The name "BrainGrid" comes from a previous investigation of the use of agent-based middleware (AgentTeamwork) for this application. While the current approach isn't really grid computing anymore, we liked the name.