Neuromorphic Computing

Motivation and Challenge

The growing performance demands of artificial intelligence, in end devices as well as in high-performance data centers, are driving an immense increase in power consumption. Improving efficiency by scaling area density is reaching its physical limits. Although today's AI software has already become significantly more efficient, it remains constrained on classical von Neumann computer architectures by the movement of data between processor and memory, which are physically separated. Our brain, in contrast, solves typical AI tasks in a network of neurons and synapses more effectively and, above all, far more energy-efficiently. Neuromorphic computing therefore pursues the approach of harnessing the properties of the biological brain for the development of computer hardware and software.

Neuromorphic Computing

From the Basics to Use

A key innovation at the device level is the memristive device, which is researched at the PGI in many variants of switching mechanisms and material combinations. Memristive devices can act as adjustable electrical resistors with non-volatile memory. The interface with materials research is important here, because these devices exemplify the transition from well-understood, potentially useful physical phenomena in a material system to functional, potentially industrially manufacturable components. This step is accompanied by specialized measurement technology and modelling, which make it possible to break complex concepts down into smaller building blocks and to pursue more targeted optimization.

Memristive cells and special transistors offer a way to integrate the synapses of artificial neural networks directly into hardware. Typical neuromorphic computing paradigms, such as in-memory computing, can thus be built compactly and integrated in a manner compatible with modern CMOS semiconductor technology. In the next stage of development, matrix structures and functional units are designed as circuit microarchitectures that enable associative memory, memory-based computing and artificial neural networks, up to asynchronous, mixed analog-digital or event-based spiking neural networks (SNNs).

Software optimized for this hardware enables classical machine learning, AI applications and even brain simulation, making full use of the biological model and closing the circle of inspiration. Here the interface to the PGI's hardware-software systems area is reached: demonstrated edge-computing solutions, autonomous AI systems and large NC modules for supercomputing are typical research goals in joint projects and in programmatic Helmholtz research.
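To make the two paradigms named above more concrete, the following minimal Python sketch models an idealized memristive crossbar that performs a matrix-vector multiplication "in memory" (Ohm's law per cell, Kirchhoff current summation per column) and feeds the resulting column currents into simple leaky integrate-and-fire neurons as a stand-in for an event-based SNN layer. All parameter values, names and functions are illustrative assumptions and do not describe PGI hardware.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Assumed programmable conductance window of a memristive cell (illustrative values).
G_MIN, G_MAX = 1e-6, 1e-4   # siemens

def weights_to_conductances(weights):
    """Map abstract synaptic weights in [0, 1] onto the device conductance window."""
    return G_MIN + weights * (G_MAX - G_MIN)

def crossbar_mvm(voltages, conductances):
    """Column currents of an ideal crossbar: I_j = sum_i V_i * G[i, j]
    (Ohm's law per cell, Kirchhoff current summation per column)."""
    return voltages @ conductances

def lif_step(v, current, dt=1e-3, tau=20e-3, r=2e4, v_th=1.0):
    """One Euler step of a leaky integrate-and-fire neuron; returns (new_v, spikes)."""
    v = v + dt / tau * (-v + r * current)
    spikes = v >= v_th
    v = np.where(spikes, 0.0, v)   # reset membrane after a spike
    return v, spikes

# Usage: 4 input lines driving 3 output neurons for 50 time steps.
weights = rng.uniform(0.0, 1.0, size=(4, 3))
G = weights_to_conductances(weights)
v_mem = np.zeros(3)
for step in range(50):
    inputs = rng.uniform(0.0, 1.0, size=4)       # input voltages on the rows
    currents = crossbar_mvm(inputs, G)           # analog multiply-accumulate in the array
    v_mem, spikes = lif_step(v_mem, currents)
    if spikes.any():
        print(f"step {step}: spikes at neurons {np.flatnonzero(spikes)}")
```

Real crossbars additionally have to contend with wire resistance, device variability and read noise, which is one reason why the specialized measurement technology and modelling mentioned above are essential.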

Infrastructure and Cooperation

Important infrastructures and collaborations in Jülich in this field are:

- Internally, the Jülich Neuromorphic Computing Alliance (JUNCA) forms a cross-institute NC community with a growing shared knowledge base and opportunities for exchanging ideas, including NC road-map creation and project planning
- the Helmholtz Nano Facility, electron microscopy at the ER-C and other analytics, which are essential for materials-research-based component manufacturing
- the Central Institute of Engineering, Electronics and Analytics (ZEA; in particular ZEA-2), through circuit design, CMOS layout and the technical implementation of complete electronic systems
- the Institute of Neuroscience and Medicine (INM), through insights into the functioning of the human brain and, via INM-6, as a user of NC hardware for simulation and emulation calculations
- CMOS (complementary metal oxide semiconductor) design teams, through the provision of and research into materials
- the Jülich Supercomputing Centre (JSC), by providing computing power and expertise in machine learning

Externally, we work closely with RWTH Aachen University, TU Dresden, international research groups and industry to develop new materials and electronic devices for neuro-inspired hardware.

Aims:

  • Understand the astonishing efficiency and computational power of biological systems, both at the component level and at the higher levels of system organization, and apply this to computer design
  • Go beyond recent developments in artificial intelligence and machine learning, where insights from biology are merely incorporated into algorithms and software, but the underlying hardware remains unchanged
  • Translate discoveries in neuroscience and biology into the language of computing and information processing, and develop and design devices and circuits inspired by the human brain
  • Interface with materials research, utilizing a comprehensive physical understanding of materials and predictive models to develop dynamic devices that support biologically inspired processes at low energy cost
  • Target neuromorphic hardware at significant and challenging problems in industry (e.g. low-power embedded/edge intelligence) and in science (e.g. dynamic simulations of the brain)
  • Explore novel computational methods at scale by developing neuromorphic systems; interface to Systems and Software as well as to ZEA-2 and CMOS design teams; also link to hybrid electronic-optical/photonic systems to address chip-to-chip communication bottlenecks and possible 3D integration
  • Research and develop, in collaboration with INM and JSC, mathematically grounded algorithms inspired by neuroscience and machine learning and adapted to the properties of neuromorphic substrates (an illustrative sketch of one such local learning rule follows this list)
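As a purely illustrative example of the class of algorithms named in the last aim, the sketch below implements a pair-based spike-timing-dependent plasticity (STDP) rule, a well-known local, spike-driven learning rule. All constants are arbitrary assumptions; this is not an algorithm used or endorsed by PGI, INM or JSC.

```python
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes (assumed)
TAU = 20.0                      # plasticity time constant in ms (assumed)

def stdp_update(w, t_pre, t_post):
    """Pair-based STDP: strengthen w if the presynaptic spike precedes the
    postsynaptic one, weaken it otherwise; keep w in [0, 1]."""
    dt = t_post - t_pre
    if dt > 0:      # pre before post -> potentiation
        w += A_PLUS * np.exp(-dt / TAU)
    else:           # post before pre -> depression
        w -= A_MINUS * np.exp(dt / TAU)
    return float(np.clip(w, 0.0, 1.0))

# Usage: a synapse that repeatedly sees "pre 5 ms before post" is strengthened.
w = 0.5
for _ in range(10):
    w = stdp_update(w, t_pre=0.0, t_post=5.0)
print(f"weight after 10 causal pairings: {w:.3f}")
```

Rules of this kind are attractive for neuromorphic substrates because each weight update depends only on locally available spike times, not on global error signals.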

Contributing PGI Institutes:
