Abstract
Electroencephalogram (EEG)-based Brain-Computer Interfaces (BCIs) have garnered significant interest across various domains, including
rehabilitation and robotics. Despite advancements in neural network-based EEG decoding, maintaining performance across diverse user populations
remains challenging due to feature distribution drift. This paper presents an effective approach to address this challenge by implementing
a lightweight and efficient on-device learning engine for wearable motor imagery recognition. The proposed approach, applied to the well-established
EEGNet architecture, enables real-time and accurate adaptation to EEG signals from unregistered users. Leveraging the newly released low-power
parallel RISC-V-based processor, GAP9 from GreenWaves, and the Physionet EEG Motor Imagery dataset, we demonstrate a remarkable accuracy gain of
up to 7.31% with respect to the baseline, with a memory footprint of 15.6 KByte. Furthermore, by optimizing the input stream, we achieve enhanced real-time performance without compromising
inference accuracy. Our tailored approach achieves an inference time of 14.9 ms and an energy of 0.76 mJ per single inference, and 20 µs and 0.83 µJ per single
update during online training. These findings highlight the feasibility of our method for edge EEG devices as well as other battery-powered wearable
AI systems suffering from subject-dependent feature distribution drift.