An artificial visual neuron with multiplexed rate and time-to-first-spike coding

Summary

Scientists created an artificial neuron that mimics how biological eyes process visual information by combining two encoding methods simultaneously. The device can fire electrical spikes at different frequencies (much as biological neurons respond to brightness) while also encoding the precise timing of its first spike (which captures rapid changes). When tested in an autonomous vehicle system, this artificial neuron outperformed either encoding method used alone, suggesting it could lead to more efficient and capable robotic vision systems.

Background

Biological visual systems use event-driven, energy-efficient spikes for communication, while silicon image sensors use frame-driven approaches with high energy budgets. Natural visual neurons employ multiplexed coding schemes combining rate coding and time-to-first-spike (TTFS) coding to efficiently process complex visual information. Current artificial visual neurons in spiking neural networks lack multiplexed data coding schemes, limiting their ability to emulate biological visual perception.
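To make the two coding schemes concrete, the sketch below encodes a normalized pixel intensity with rate coding (spike frequency over a window), TTFS coding (latency of the first spike), and the multiplexed combination of both. This is a minimal illustrative sketch; the function names, parameter values, and time units are assumptions and are not taken from the paper.

```python
import numpy as np

def rate_encode(intensity, t_window=10.0, f_min=0.1, f_max=2.0):
    """Rate coding: brighter input -> higher spike frequency in a fixed window."""
    # Map normalized intensity [0, 1] onto an illustrative frequency range.
    freq = f_min + intensity * (f_max - f_min)
    n_spikes = int(freq * t_window)
    # Evenly spaced spike times over the window (arbitrary time units).
    return np.linspace(0.0, t_window, n_spikes, endpoint=False)

def ttfs_encode(intensity, t_max=13.0, t_min=1.0):
    """Time-to-first-spike coding: brighter input -> earlier first spike."""
    return t_max - intensity * (t_max - t_min)

def rtf_encode(intensity):
    """Multiplexed coding: report both the spike train (rate channel) and
    the first-spike latency (timing channel) for the same stimulus."""
    return {"spike_times": rate_encode(intensity),
            "first_spike_latency": ttfs_encode(intensity)}

print(rtf_encode(0.8))
```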

Objective

Develop an artificial visual spiking neuron capable of multiplexed rate and temporal fusion (RTF) coding to improve the computing capability and efficacy of artificial visual neurons in spiking neural networks. Demonstrate practical applications in autonomous vehicle steering and speed prediction using the proposed coding scheme.

Results

The artificial neuron demonstrated spike frequencies from 0.35 to 1.85 MHz and first-spike latencies ranging from 13.00 down to 1.04 μs, with an energy consumption of 1.06 nJ per spike. The device showed high endurance (>10^10 cycles) and low device-to-device variability. The SNN with RTF coding achieved superior performance in predicting steering angles and vehicle speeds, with loss values below 0.5, outperforming both the rate and TTFS coding schemes used alone.
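As a rough illustration of how the two device readouts could be fused for a downstream SNN, the snippet below normalizes a measured spike frequency and first-spike latency to the ranges reported above and returns them as a feature pair. The normalization and fusion step is an assumption made for illustration, not the paper's actual network input pipeline.

```python
import numpy as np

# Device readout ranges reported above (MHz and microseconds).
F_MIN, F_MAX = 0.35, 1.85       # spike frequency range
T_MIN, T_MAX = 1.04, 13.00      # first-spike latency range

def rtf_features(freq_mhz, latency_us):
    """Fuse a rate readout and a timing readout into one feature pair.

    Normalizing each channel to [0, 1] and concatenating them is an
    illustrative fusion step, not the paper's exact network input.
    """
    rate_feat = (freq_mhz - F_MIN) / (F_MAX - F_MIN)
    # Shorter latency indicates a stronger or faster stimulus, so invert it.
    time_feat = (T_MAX - latency_us) / (T_MAX - T_MIN)
    return np.clip([rate_feat, time_feat], 0.0, 1.0)

print(rtf_features(1.2, 4.5))
```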

Conclusion

The multiplexed RTF coding scheme enables artificial visual neurons to emulate natural vision more effectively by combining rate and temporal information coding. The demonstrated hardware-based SNN shows the feasibility of developing highly efficient spike-based neuromorphic hardware with practical applications in autonomous driving. The approach provides a biologically plausible method for vision processing in neuromorphic systems.