
Analog Matrix Vector Multiplication

Summary

Reduces the area and energy consumption of neural network accelerators using nonvolatile memory components (NVM)

Description

Analog matrix-vector multiplication (MVM) using nonvolatile memories is limited by the accuracy of the memory elements and their analog variability. Variations in the “on” or “off” state current (the low- or high-resistance state) translate directly into variability in the value each memory element represents, limiting the accuracy of the analog matrix multiplication. Capacitor-based analog sums can be extremely accurate because the relative capacitance across elements in a local analog block is very well controlled; however, current designs require large SRAM-based memory elements to charge the capacitors and therefore draw extra power to retain state. Using a nonvolatile memory to store the state and a capacitor for the analog sum captures the advantages of both approaches.
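The accuracy limit described above can be illustrated with a small simulation. The sketch below is not Sandia's design: the conductance values and the 10% device-to-device variation are illustrative assumptions, chosen only to show how “on”/“off” state variability propagates into error in an analog dot product.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 256
x = rng.integers(0, 2, n)          # binary input vector
w = rng.integers(0, 2, n)          # binary weight vector

ideal = int(x @ w)                 # exact digital dot product

# Model each stored weight as an NVM conductance: nominal on/off values
# plus illustrative 10% device-to-device variation (assumed numbers).
G_on, G_off, sigma = 1.0, 0.05, 0.10
g = np.where(w == 1,
             G_on * (1 + sigma * rng.standard_normal(n)),
             G_off * (1 + sigma * rng.standard_normal(n)))

# Analog sum: the column current is sum(x_i * g_i); read it back as a
# count by dividing by the nominal on-state conductance.
analog = float(x @ g) / G_on

print(ideal, round(analog, 2))     # analog result drifts from the ideal
```

Because every device's conductance deviates independently, the analog read-back differs from the exact count; tightening the conductance distribution (or, as in the capacitive approach, summing on well-matched capacitors instead of raw currents) is what recovers accuracy.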


Sandia researchers have developed a system that uses NVM elements to charge capacitors arranged in an array to perform MVMs. Memory manufacturers can use this development to reduce both the area and energy consumption of accelerators, significantly improving the efficiency of binary accelerators. Analog matrix multiplication can thus be performed with the accuracy of capacitive elements while benefiting from the energy and area advantages of emerging nonvolatile memories. The system can be extended to multibit weights and multibit inputs, and can ultimately be used to improve the efficiency of neural network accelerators.
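The binary case can be sketched as a charge-sharing model. This is a hypothetical, idealized model (no parasitics, perfectly matched unit capacitors, a perfect ADC), not the patented circuit: each NVM cell either charges its unit capacitor to Vdd (input and weight both 1) or leaves it at 0 V, and the settled shared-line voltage is proportional to the dot product.

```python
import numpy as np

def capacitive_bnn_mvm(x, W, vdd=1.0):
    """Idealized binary MVM column via charge sharing (illustrative model).

    Each (input, weight) pair either charges a unit capacitor to vdd
    (both bits are 1) or leaves it discharged.  Sharing the charge of
    all n capacitors on one line yields a line voltage of
    vdd * popcount / n, proportional to the dot product.
    """
    n = W.shape[0]
    charged = x[:, None] & W                    # AND selects which caps charge
    v_line = vdd * charged.sum(axis=0) / n      # ideal charge sharing
    return np.rint(v_line * n / vdd).astype(int)  # ADC read-back as a count

x = np.array([1, 0, 1, 1])
W = np.array([[1, 1],
              [1, 0],
              [0, 1],
              [1, 1]])
print(capacitive_bnn_mvm(x, W))  # → [2 3], matching x @ W
```

In this idealized model the result is exact because only capacitor ratios matter, which is the accuracy advantage of capacitive summation the description refers to; the NVM cell replaces the SRAM bit that would otherwise gate the charging.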

Benefits

  • Reduces total accelerator area
  • Reduces accelerator energy consumption
  • Improves memory efficiency

Applications and Industries

  • Neural Network Accelerators
  • RAM
  • Memory
Technology ID: SD# 15180
Development Stage: Proposed - TRL 3
Availability: Available
Published: 03/03/2021
Last Updated: 03/03/2021