Fine-tuning Myoelectric Control through Reinforcement Learning in a Game Environment
Journal article, 2025
Methods: The starting point of our method is a supervised learning (SL) control policy, pretrained on a static recording of electromyographic (EMG) ground-truth data. We then apply reinforcement learning (RL) to fine-tune the pretrained classifier with dynamic EMG data obtained during interaction with a game environment developed for this work. Real-time experiments evaluating our approach showed significant improvements in human-in-the-loop performance.
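The two-stage scheme described above (supervised pretraining on static labeled data, then policy-gradient fine-tuning driven by task reward) can be illustrated with a minimal sketch. This is not the paper's actual architecture or training code: the data here is a synthetic stand-in for EMG features, the classifier is a toy softmax model, and REINFORCE with a success-based reward is assumed as the RL fine-tuning rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for EMG feature vectors (NOT real EMG data):
# 200 samples, 4 features, 2 movement classes separated by a random plane.
X = rng.normal(size=(200, 4))
w_true = rng.normal(size=4)
y = (X @ w_true > 0).astype(int)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# --- Stage 1: supervised pretraining (cross-entropy on static labels) ---
W = np.zeros((4, 2))
for _ in range(200):
    p = softmax(X @ W)
    grad = X.T @ (p - np.eye(2)[y]) / len(X)  # d(cross-entropy)/dW
    W -= 0.5 * grad

# --- Stage 2: RL fine-tuning (REINFORCE on interaction data) ---
# The policy samples an action per input; reward is +1 for a correct
# prediction and -1 otherwise, mimicking task success during gameplay.
for _ in range(200):
    idx = rng.integers(0, len(X), size=32)
    xb, yb = X[idx], y[idx]
    p = softmax(xb @ W)
    a = np.array([rng.choice(2, p=pi) for pi in p])  # sampled actions
    r = np.where(a == yb, 1.0, -1.0)                 # task reward
    # Policy-gradient step: grad log pi(a|x) scaled by reward.
    g = (np.eye(2)[a] - p) * r[:, None]
    W += 0.1 * xb.T @ g / len(xb)

acc = (softmax(X @ W).argmax(1) == y).mean()
```

In the paper, stage 2 uses dynamic EMG collected during gameplay rather than the same static set, which is precisely what lets fine-tuning correct the distribution shift between offline recordings and closed-loop use.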
Results: The method effectively predicts simultaneous finger movements, leading to a two-fold increase in decoding accuracy during gameplay and a 39% improvement in a separate motion test.
Conclusion: By employing RL and incorporating usage-based EMG data during fine-tuning, our method achieves significant improvements in accuracy and robustness. Significance: These results showcase the potential of RL for enhancing the reliability of myoelectric controllers, which is of particular importance for advanced bionic limbs. See our project page for visual demonstrations: https://sites.google.com/view/bionic-limb-rl.
Reinforcement learning
Electromyography
Human-computer interaction
Deep learning
Prosthetic limbs
Authors
Kilian Tamino Freitag
Chalmers, Electrical Engineering, Systems and control
Yiannis Karayiannidis
Lund University
Jan Zbinden
Chalmers, Electrical Engineering, Systems and control
Rita Laezza
Chalmers, Electrical Engineering, Systems and control
IEEE Transactions on Biomedical Engineering
0018-9294 (ISSN) 1558-2531 (eISSN)
Vol. In Press
Subject Categories (SSIF 2025)
Robotics and automation
Computer Sciences
DOI
10.1109/TBME.2025.3578855