Proceedings Paper

AEGNN: Asynchronous Event-based Graph Neural Networks

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/CVPR52688.2022.01205

Funding

  1. Huawei
  2. NCCR Robotics, a National Centre of Competence in Research funded by the Swiss National Science Foundation (SNF) [51NF40-185543]

Abstract

This research introduces a novel event-processing paradigm called Asynchronous, Event-based Graph Neural Networks (AEGNNs), which generalize standard GNNs to process events as evolving spatio-temporal graphs. AEGNNs significantly reduce computation and latency by only recomputing network activations for the nodes affected by each new event. The experimental results show a reduction in computational complexity by up to 200-fold and similar or even better performance compared to state-of-the-art asynchronous methods, making AEGNNs a promising approach for low-latency event-based processing.
The best-performing learning algorithms devised for event cameras work by first converting events into dense representations that are then processed using standard CNNs. However, these steps discard both the sparsity and the high temporal resolution of events, leading to high computational burden and latency. For this reason, recent works have adopted Graph Neural Networks (GNNs), which process events as static spatio-temporal graphs that are inherently sparse. We take this trend one step further by introducing Asynchronous, Event-based Graph Neural Networks (AEGNNs), a novel event-processing paradigm that generalizes standard GNNs to process events as evolving spatio-temporal graphs. AEGNNs follow efficient update rules that restrict recomputation of network activations to only the nodes affected by each new event, thereby significantly reducing both computation and latency for event-by-event processing. AEGNNs are easily trained on synchronous inputs and can be converted to efficient, asynchronous networks at test time. We thoroughly validate our method on object classification and detection tasks, where we show up to a 200-fold reduction in computational complexity (FLOPs), with similar or even better performance than state-of-the-art asynchronous methods. This reduction in computation directly translates to an 8-fold reduction in computational latency when compared to standard GNNs, which opens the door to low-latency event-based processing.
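The core idea of the update rule can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the authors' implementation): events are inserted as nodes of a spatio-temporal graph connected within a radius, and a single message-passing layer is recomputed only for the new node and its neighbors, while all other activations stay cached. The radius `R`, feature dimension `DIM`, and mean-aggregation layer are illustrative assumptions.

```python
# Hypothetical sketch of asynchronous event-graph updates (not the paper's code).
import numpy as np

R = 3.0    # connection radius in (x, y, t) space -- assumed value
DIM = 4    # node feature dimension -- assumed value
rng = np.random.default_rng(0)
W = rng.normal(size=(DIM, DIM))   # shared weight of one graph-conv layer

positions = []   # (x, y, t) coordinate per node
feats = []       # input feature per node
acts = []        # cached layer activation per node

def neighbors(i):
    """Indices of nodes within radius R of node i (excluding i)."""
    p = positions[i]
    return [j for j, q in enumerate(positions)
            if j != i and np.linalg.norm(np.subtract(p, q)) <= R]

def recompute(i):
    """Recompute node i's activation from its local neighborhood
    (mean aggregation, linear map, ReLU)."""
    nbrs = neighbors(i)
    agg = np.mean([feats[j] for j in nbrs + [i]], axis=0)
    acts[i] = np.maximum(agg @ W, 0.0)

def insert_event(xyt, feat):
    """Asynchronous update: add the new event as a node, then recompute
    activations only for it and its neighbors. All other nodes keep
    their cached activations untouched."""
    positions.append(xyt)
    feats.append(feat)
    acts.append(None)
    i = len(positions) - 1
    for j in neighbors(i) + [i]:
        recompute(j)
    return i

# Feed a short stream of events; each insertion touches only a local
# neighborhood of the graph, which is the source of the claimed savings.
for k in range(5):
    insert_event((k * 1.0, 0.0, k * 0.5), rng.normal(size=DIM))
```

Stacking multiple layers would enlarge the affected set to the layer-wise receptive field of the new node, but the principle is the same: work scales with the local neighborhood, not with the full graph.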

