Synthetic cognition through petascale models of the primate visual cortex
We seek to understand and implement the computational principles that enable high-level sensory processing and other forms of cognition in the human brain. To achieve these goals, we are creating synthetic cognition systems that emulate the functional architecture of the primate visual cortex. By using petascale computational resources, combined with our growing knowledge of the structure and function of biological neural systems, we can match, for the first time, the size and functional complexity necessary to reproduce the information processing capabilities of cortical circuits. The arrival of next-generation supercomputers may allow us to close the performance gap between state-of-the-art computer vision approaches and biological vision by bringing these systems to the scale of the human brain.
There are approximately 10 billion neurons in the human visual cortex, with each neuron receiving approximately 10,000 synaptic inputs and each synapse requiring approximately 10 floating point operations per second (FLOPS), based on an average firing rate of 1 Hz. It follows that synthetic visual cognition will require on the order of 10G (neurons) x 10K (synapses/neuron) x 10 FLOPS/synapse = 1 petaflop, commensurate with the performance of next-generation supercomputers such as Roadrunner at LANL.
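The order-of-magnitude estimate above can be checked with a few lines of arithmetic. The figures below are the assumptions quoted in the text (neuron count, synapses per neuron, FLOPS per synapse at a ~1 Hz mean firing rate), not measured constants:

```python
# Back-of-the-envelope estimate of the compute needed to emulate the
# human visual cortex, using the approximate figures from the text.
neurons = 10e9              # ~10 billion neurons in the visual cortex
synapses_per_neuron = 10e3  # ~10,000 synaptic inputs per neuron
flops_per_synapse = 10      # ~10 FLOPS per synapse at ~1 Hz firing rate

total_flops = neurons * synapses_per_neuron * flops_per_synapse
petaflops = total_flops / 1e15
print(f"{total_flops:.0e} FLOPS = {petaflops:g} petaflop(s)")  # 1e+15 FLOPS = 1 petaflop(s)
```

Multiplying out the exponents (10^10 x 10^4 x 10^1 = 10^15) confirms the 1-petaflop figure, which is why the estimate lands at the scale of machines like Roadrunner.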