In this paper, an automated data labeling (ADL) neural network is proposed to streamline dataset collection for real-time prediction of continuous hand and wrist motion; the gestures are decoded solely from an eight-channel surface electromyography (sEMG) array. Instead of collecting both bio-signals and hand motion signals as samples and labels for supervised learning, this algorithm feeds only unlabeled sEMG into an unsupervised neural network, in which the hand motion labels are auto-generated. The coefficient of determination (R²) for three DOFs, i.e. wrist flexion/extension, wrist pronation/supination, and hand open/close, was 0.86, 0.89, and 0.87, respectively. A comparison between real motion labels and auto-generated labels shows that the latter respond earlier than the former. The results of a Fitts’ law test indicate that ADL is capable of controlling multiple DOFs simultaneously even though the training set contains only sEMG data from single-DOF gestures. Moreover, no hand motion measurement is needed, which greatly helps upper limb amputees imagine gestures of the residual limb to control a dexterous prosthesis.
@inproceedings{RN711,author={Hu, Xuhui and Zeng, Hong and Chen, Dapeng and Zhu, Jiahang and Song, Aiguo},title={Real-time continuous hand motion myoelectric decoding by automated data labeling},booktitle={2020 IEEE International Conference on Robotics and Automation (ICRA)},pages={6951-6957},isbn={1728173957},year={2020},month=sep,publisher={IEEE},dimensions={true},doi={10.1109/ICRA40945.2020.9197286},}
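The decoding accuracy above is reported as the coefficient of determination (R²) per DOF. As a minimal sketch (not the paper’s code), R² between a measured and a decoded joint trajectory can be computed as:

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination between a true and a predicted trajectory."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

# A perfect decoder gives R^2 = 1; a constant predictor at the mean gives 0.
t = np.linspace(0, 2 * np.pi, 100)
print(r_squared(np.sin(t), np.sin(t)))  # 1.0
```

Values such as 0.86–0.89 mean the decoded trajectory explains that fraction of the variance in the measured joint angle.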
Intuitive environmental perception assistance for blind amputees using spatial audio rendering
Xuhui Hu, Aiguo Song, Hong Zeng, and 1 more author
IEEE Transactions on Medical Robotics and Bionics, Jan 2022
Vision and touch are essential sensory systems for humans to interact with the environment. For blind amputees, quickly and intuitively conveying environmental information is one of the key issues in recovering their ability to perform daily living activities. Inspired by the human ability of auditory localization, we constructed a virtual scene nearly identical to reality and attached a virtual sound source to the interactive object. Leveraging spatial audio rendering (SAR), the three-dimensional motion of the virtual sound source can be vividly simulated in real time. Finally, a myoelectric prosthetic control system was developed to assist blind amputees in their daily activities. A Fitts’ law test on target localization was conducted with both SAR and voice prompt (VP) based path guidance; the results indicate that SAR significantly improves the information transfer rate. The results of the prosthetic control test show that SAR halves the completion time relative to VP while restoring a natural grasping path. With the advantage of intuitive and rich perception, SAR demonstrates potential applications for blind amputees in reconstructing the control and sensory loops.
@article{RN717,author={Hu, Xuhui and Song, Aiguo and Zeng, Hong and Chen, Dapeng},title={Intuitive environmental perception assistance for blind amputees using spatial audio rendering},journal={IEEE Transactions on Medical Robotics and Bionics},volume={4},number={1},pages={274-284},issn={2576-3202},year={2022},month=jan,publisher={IEEE},dimensions={true},doi={10.1109/TMRB.2022.3146743},}
StereoPilot: A wearable target location system for blind and visually impaired using spatial audio rendering
Xuhui Hu, Aiguo Song, Zhikai Wei, and 1 more author
IEEE Transactions on Neural Systems and Rehabilitation Engineering, Jun 2022
Vision loss severely impacts object recognition and spatial cognition for visually impaired individuals, and it is a challenge to compensate for this using other sensory modalities such as touch or hearing. This paper introduces StereoPilot, a wearable target location system that facilitates the spatial cognition of blind and visually impaired (BVI) individuals. Through a head-mounted RGB-D camera, 3D spatial information about the environment is measured and processed into navigation cues. Leveraging spatial audio rendering (SAR) technology, the navigation cues are transmitted as 3D sound whose orientation can be distinguished by the human instinct for sound localization. Three haptic and auditory display strategies were compared with SAR in experiments with three BVI and four sighted subjects. Compared with mainstream speech instructional feedback, the experimental results of the Fitts’ law test showed that SAR triples the information transfer rate (ITR) for spatial navigation while reducing the positioning error by 40%. Furthermore, SAR has a lower learning effect than other sonification approaches such as vOICe. In desktop manipulation experiments, StereoPilot obtained precise localization of desktop objects while halving the completion time of target grasping tasks compared to the voice instruction method. In summary, StereoPilot provides an innovative wearable target location solution that swiftly and intuitively transmits environmental information to BVI individuals in the real world.
@article{RN719,author={Hu, Xuhui and Song, Aiguo and Wei, Zhikai and Zeng, Hong},title={StereoPilot: A wearable target location system for blind and visually impaired using spatial audio rendering},journal={IEEE Transactions on Neural Systems and Rehabilitation Engineering},volume={30},pages={1621-1630},issn={1534-4320},year={2022},month=jun,publisher={IEEE},dimensions={true},doi={10.1109/TNSRE.2022.3182661},}
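Several of the evaluations above use a Fitts’ law test, in which the information transfer rate (throughput) is derived from target distance, target width, and movement time. A minimal sketch using the standard Shannon formulation (the papers’ exact protocol may differ):

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1.0)

def throughput(distance, width, movement_time):
    """Information transfer rate in bits per second for one reaching trial."""
    return index_of_difficulty(distance, width) / movement_time

# A target 3 units away and 1 unit wide, reached in 2 s:
print(throughput(3.0, 1.0, 2.0))  # 1.0 bit/s
```

Averaging this throughput over trials of varying difficulty is a common way to compare interfaces such as SAR and voice prompts.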
Finger movement recognition via high-density electromyography of intrinsic and extrinsic hand muscles
Xuhui Hu, Aiguo Song, Jianzhi Wang, and 2 more authors
Scientific Data, Jun 2022
Surface electromyography (sEMG) is commonly used to observe motor neuronal activity within muscle fibers. However, decoding dexterous body movements from sEMG signals remains quite challenging. In this paper, we present a high-density sEMG (HD-sEMG) signal database comprising simultaneously recorded sEMG signals of intrinsic and extrinsic hand muscles. Specifically, twenty able-bodied participants performed 12 finger movements at two paces and in three arm postures. HD-sEMG signals were recorded with a 64-channel high-density grid placed on the back of the hand and an 8-channel armband around the forearm. A data glove was also used to record the finger joint angles. Synchronisation and reproducibility of the data collection from the HD-sEMG and glove sensors were ensured. The collected data samples were further employed for automated recognition of dexterous finger movements. The introduced dataset offers a new perspective on the synergy between the intrinsic and extrinsic hand muscles during dynamic finger movements. As this dataset was collected from multiple participants, it also provides a resource for exploring generalized models for finger movement decoding.
@article{RN720,author={Hu, Xuhui and Song, Aiguo and Wang, Jianzhi and Zeng, Hong and Wei, Wentao},title={Finger movement recognition via high-density electromyography of intrinsic and extrinsic hand muscles},journal={Scientific Data},volume={9},number={1},pages={373},doi={10.1038/s41597-022-01484-2},url={https://www.ncbi.nlm.nih.gov/pubmed/35768439},year={2022},month=jun,publisher={Nature},dimensions={true},}
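The dataset pairs a 64-channel HD grid and an 8-channel armband with glove joint angles. As an illustrative sketch (a hypothetical helper, not the dataset’s published loader) of slicing such synchronized recordings into overlapping analysis windows for movement recognition:

```python
import numpy as np

def sliding_windows(signal, win, step):
    """Slice a (samples, channels) sEMG array into overlapping windows."""
    n = 1 + (len(signal) - win) // step  # number of full windows that fit
    return np.stack([signal[i * step : i * step + win] for i in range(n)])

# 2000 samples over 72 channels: 64 grid + 8 armband channels, stacked.
emg = np.random.randn(2000, 72)
w = sliding_windows(emg, win=200, step=100)
print(w.shape)  # (19, 200, 72)
```

Each window can then be reduced to features (e.g. per-channel RMS) before classification into one of the 12 finger movements.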
IEEE-RAL
Bridging Human-Robot Co-Adaptation via Biofeedback for Continuous Myoelectric Control
Xuhui Hu, Aiguo Song, Hong Zeng, and 3 more authors
IEEE Robotics and Automation Letters, 2023
@article{RN726,title={Bridging Human-Robot Co-Adaptation via Biofeedback for Continuous Myoelectric Control},author={Hu, X. and Song, A. and Zeng, H. and Wei, Z. and Deng, H. and Chen, D.},journal={IEEE Robotics and Automation Letters},volume={8},pages={8573-8580},doi={10.1109/LRA.2023.3330053},year={2023},publisher={IEEE}}