Title


Nengo and low-power AI hardware for robust, embedded neurorobotics

Authors

Travis DeWolf, Pawel Jaworski, Chris Eliasmith

Abstract


In this paper we demonstrate how the Nengo neural modeling and simulation libraries enable users to quickly develop robotic perception and action neural networks for simulation on neuromorphic hardware using familiar tools, such as Keras and Python. We identify four primary challenges in building robust, embedded neurorobotic systems: 1) developing infrastructure for interfacing with the environment and sensors; 2) processing task-specific sensory signals; 3) generating robust, explainable control signals; and 4) compiling neural networks to run on target hardware. Nengo helps to address these challenges by: 1) providing the NengoInterfaces library, which defines a simple but powerful API for users to interact with simulations and hardware; 2) providing the NengoDL library, which lets users use the Keras and TensorFlow API to develop Nengo models; 3) implementing the Neural Engineering Framework, which provides white-box methods for implementing known functions and circuits; and 4) providing multiple backend libraries, such as NengoLoihi, that enable users to compile the same model to different hardware. We present two examples using Nengo to develop neural networks that run on CPUs, GPUs, and Intel's neuromorphic chip, Loihi, to demonstrate this workflow. The first example is an end-to-end spiking neural network that controls a rover simulated in MuJoCo. The network integrates a deep convolutional network that processes visual input from mounted cameras to track a target, and a control system implementing steering and drive functions to guide the rover to the target. The second example augments a force-based operational space controller with neural adaptive control to improve performance during a reaching task using a real-world Kinova Jaco2 robotic arm. Code and details are provided with the intent of enabling other researchers to build their own neurorobotic systems.
