All inclusive AI technology!

Learning and inference engine on a single chip

Hardware neurons inspired by biology

All neurons are interconnected and work in parallel, recognizing or learning a pattern in a constant number of nanoseconds, regardless of the number of neurons committed in the chip

The neurons behave collectively as (1) a K-Nearest Neighbor (KNN) classifier or (2) a Radial Basis Function (RBF) classifier (more specifically, a Restricted Coulomb Energy classifier). They can cope with ill-defined and fuzzy data, high variability of context, and novelty detection.

Collective built-in model generator, which means that learning is done on the chip!

Last, but not least, multiple NeuroMem chips can be daisy-chained to scale a network from thousands to millions of neurons with the same speed performance and simplicity of operation as a single chip.


Multiple types of classifiers

  • The neurons can behave as a KNN or RCE (class of RBF)
  • A Restricted Coulomb Energy (RCE) classifier uses Radial Basis Functions as activation functions. It is capable of complex nonlinear mappings and is widely used for function approximation, time series prediction, and control.
  • A K-Nearest Neighbor (KNN) algorithm classifies objects based on the closest training examples in the feature space. The parallel architecture of the NeuroMem chip makes it the fastest candidate to retrieve the K closest neighbors of a vector among any number of stored references.
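The two modes can be sketched in software as follows. This is a minimal simulation of the behavior described above, assuming L1 (Manhattan) distance and illustrative field names (`pattern`, `category`, `aif`); it is not the chip's actual API.

```python
# Minimal sketch of the two classification modes: KNN ranks all committed
# prototypes by distance, RCE fires only neurons whose influence field
# covers the input. Hypothetical simulation, not the chip interface.

def l1(a, b):
    """Manhattan distance between two vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def knn_classify(neurons, vector, k):
    """KNN mode: categories of the k closest prototypes."""
    ranked = sorted(neurons, key=lambda n: l1(n["pattern"], vector))
    return [n["category"] for n in ranked[:k]]

def rce_classify(neurons, vector):
    """RCE/RBF mode: only neurons whose influence field ('aif') covers
    the input fire; no firing neuron means 'unknown' (novelty)."""
    firing = [n for n in neurons if l1(n["pattern"], vector) < n["aif"]]
    if not firing:
        return "unknown"
    cats = {n["category"] for n in firing}
    return cats.pop() if len(cats) == 1 else "uncertain"

neurons = [
    {"pattern": [10, 10], "category": "A", "aif": 6},
    {"pattern": [20, 20], "category": "B", "aif": 6},
]
print(knn_classify(neurons, [12, 11], k=1))  # -> ['A']
print(rce_classify(neurons, [12, 11]))       # -> 'A' (inside A's field)
print(rce_classify(neurons, [15, 15]))       # -> 'unknown' (novelty)
```

Note how RCE can answer "unknown" or "uncertain" while KNN always returns the nearest categories, which is the practical difference between the two modes.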

Reactive recognition with Winner-Takes-All

  • The neurons are capable of ranking similarities between input vectors and the reference patterns they hold in memory, but also of reporting conflicting responses (cases of uncertainty) and unknown responses (cases of anomaly or novelty).
  • The neurons order their responses autonomously, per increasing distance value, as the host processor sends K successive read commands of the Distance register. Again, this unique feature derives from the parallel architecture of the neurons and a patented Search and Sort process which allows them to know whether other neurons hold a smaller distance value, without the need for a supervisor or controller.
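The distributed Search and Sort can be illustrated with a bit-serial minimum search, a common scheme for wired buses in which each contender drives its value bit by bit and withdraws as soon as it loses a bit comparison. This is a hypothetical software analogue of the behavior described above, under the assumption of a wired-AND comparison; the chip's actual patented process may differ.

```python
# Sketch of a supervisor-free minimum search: each neuron drives its
# distance MSB-first onto a shared wired-AND bus and drops out the moment
# it drives a 1 while another neuron drives a 0. The smallest value wins
# without any central controller. Illustrative simulation only.

def wired_min(values, bits=16):
    """Return the minimum via bit-serial elimination."""
    active = list(values)
    for b in range(bits - 1, -1, -1):
        bus = min((v >> b) & 1 for v in active)  # wired-AND: any 0 pulls low
        active = [v for v in active if (v >> b) & 1 == bus]
    return active[0]

def ranked_reads(distances, k):
    """K successive reads: each winner outputs, then withdraws."""
    remaining, out = list(distances), []
    for _ in range(k):
        m = wired_min(remaining)
        out.append(m)
        remaining.remove(m)
    return out

print(ranked_reads([42, 7, 19, 3, 88], k=3))  # -> [3, 7, 19]
```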

Parallel broadcast mode

  • All the neurons update their distance value simultaneously as the components of an input vector are broadcast on their parallel bus. Upon receipt of the last component of the input vector, all neurons have calculated its distance to the reference pattern they hold in memory. If an input vector is broadcast to a chain of 10, 100 or 1000 NeuroMem chips, their distance values are calculated and ready to be read as soon as the last component of the vector is broadcast.
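The broadcast behavior amounts to every neuron accumulating a partial distance per incoming component. The sketch below simulates it sequentially (the hardware performs the per-neuron update in parallel); the class and method names are illustrative, not the chip's register interface.

```python
# Sketch of parallel broadcast: as each component arrives, every neuron
# adds its partial |difference|, so all L1 distances are complete the
# moment the last component lands. Hypothetical simulation only.

class Neuron:
    def __init__(self, pattern):
        self.pattern = pattern   # reference pattern held in memory
        self.distance = 0        # running L1 distance
        self.index = 0           # next component to compare

    def receive(self, component):
        """Update the running distance for one broadcast component."""
        self.distance += abs(component - self.pattern[self.index])
        self.index += 1

def broadcast(neurons, vector):
    for component in vector:
        for n in neurons:        # the hardware does this step in parallel
            n.receive(component)
    return [n.distance for n in neurons]

neurons = [Neuron([1, 2, 3]), Neuron([4, 5, 6])]
print(broadcast(neurons, [1, 2, 4]))  # -> [1, 8], ready after last component
```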

Fixed latency

  • The time necessary to obtain a response is independent of the number of committed neurons in the network and of their type of response. At each read command, only the neuron with the smallest distance outputs its value to the parallel bus, after 19 clock cycles. If an application requires a KNN with K equal to 50, for example, the distances of the 50 closest neurons are read in 50 × 19 clock cycles.
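The read-out timing above reduces to simple arithmetic, since each read costs a fixed 19 clock cycles no matter how many neurons are committed:

```python
# Fixed-latency read-out: 19 clock cycles per read, independent of
# network size (figure taken from the text above).

CYCLES_PER_READ = 19

def read_latency_cycles(k):
    """Clock cycles needed to read the K best distances."""
    return k * CYCLES_PER_READ

print(read_latency_cycles(1))   # -> 19 (best match)
print(read_latency_cycles(50))  # -> 950 (KNN with K = 50)
```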

Autonomous model generator

  • The model generator built into the NeuroMem chip makes it possible to learn examples in real time when they drift from the knowledge residing in the current neurons. The "novelty" examples can be stored in neurons assigned to a different context, to allow supervised verification and learning at a later time.
  • Deduplication is intrinsic, since neurons only learn novelties.
  • The knowledge built by the neurons is cloneable since the content of the neurons can be saved and restored.
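The bullets above can be sketched with the classic RCE learning rule: an example is learned (a new neuron committed) only when no existing neuron of the right category already recognizes it, which is why duplicates are absorbed for free, and the knowledge is just the neuron contents, so it can be saved and restored verbatim. This is a hypothetical simulation under assumed names (`MAXIF` for the default influence field), not the chip's API.

```python
import copy

MAXIF = 16  # assumed default maximum influence field

def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def learn(neurons, vector, category):
    """RCE-style learning: commit a neuron only on novelty; shrink the
    fields of neurons that fire with the wrong category."""
    firing = [n for n in neurons if l1(n["pattern"], vector) < n["aif"]]
    for n in firing:
        if n["category"] != category:
            n["aif"] = l1(n["pattern"], vector)  # shrink wrong-category field
    if not any(n["category"] == category for n in firing):
        neurons.append({"pattern": list(vector), "category": category,
                        "aif": MAXIF})           # commit a new neuron
    return neurons

knowledge = []
learn(knowledge, [10, 10], "A")
learn(knowledge, [11, 10], "A")  # near-duplicate: no new neuron committed
learn(knowledge, [30, 30], "B")  # novelty: committed
print(len(knowledge))            # -> 2 neurons, not 3 (deduplication)

# Cloning: the knowledge is just data, so it can be saved and restored.
backup = copy.deepcopy(knowledge)
```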

Multiple contexts, or dynamic network segmentation

The ability to assign the neurons to different contexts or sub-networks allows building hierarchical or parallel decision trees between sub-networks. This leads to advanced machine learning with uncertainty management and hypothesis generation.
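Context segmentation can be sketched as a tag on each neuron: only neurons matching the active context take part in a recognition, so one physical network behaves as several independent sub-networks. The field names and example data below are illustrative assumptions.

```python
# Sketch of context-based segmentation: filtering on a per-neuron context
# tag splits one network into independent sub-networks. Hypothetical
# simulation, not the chip's register interface.

def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def classify(neurons, vector, context):
    """Nearest-neighbor decision restricted to the active context."""
    subnet = [n for n in neurons if n["context"] == context]
    if not subnet:
        return None
    best = min(subnet, key=lambda n: l1(n["pattern"], vector))
    return best["category"]

neurons = [
    {"pattern": [1, 1], "category": "cat", "context": 1},  # e.g. shapes
    {"pattern": [9, 9], "category": "dog", "context": 1},
    {"pattern": [1, 1], "category": "red", "context": 2},  # e.g. colors
]
print(classify(neurons, [2, 2], context=1))  # -> 'cat'
print(classify(neurons, [2, 2], context=2))  # -> 'red'
```

Chaining such calls across contexts is one way to build the hierarchical decision trees mentioned above: a first sub-network's answer selects which context to query next.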

State of the Art for neuromorphic chips