Stroke rehabilitation seeks to accelerate motor recovery by training functional activities, but may have minimal impact because of insufficient training doses. In animals, training hundreds of functional motions in the first weeks after stroke can substantially boost upper extremity recovery. The optimal quantity of functional motions to boost recovery in humans is currently unknown, however, because no practical tools exist to measure them during rehabilitation training. Here, we present PrimSeq, a pipeline to classify and count functional motions trained in stroke rehabilitation. Our approach integrates wearable sensors to capture upper-body motion, a deep learning model to predict motion sequences, and an algorithm to tally motions. The trained model accurately decomposes rehabilitation activities into elemental functional motions, outperforming competitive machine learning methods. PrimSeq furthermore quantifies these motions at a fraction of the time and labor costs of human experts. We demonstrate the capabilities of PrimSeq in previously unseen stroke patients with a range of upper extremity motor impairment.

S1 Fig: Classification performance of Seq2Seq. Shown is a confusion matrix, with values normalized to the ground truth primitive count. The diagonal values represent the sensitivity per primitive, or how often the model correctly predicted a primitive that was actually performed. The non-diagonal values represent the identification errors made by Seq2Seq. We note that some of the errors made by the model could be explained by the lack of finger information from the IMU setup (e.g., confusion between reaches and transports, and between idles and stabilizations). These primitives have similar motion phenotypes and are distinguished by grasp onset/amount.

S2 Fig: Classification performance of Seq2Seq, convolutional neural network (CNN), action state representation framework (ASRF), and random forest. Shown are confusion matrices for each model, with values normalized to the ground truth primitive count. The diagonal values represent the sensitivity per primitive, or how often the model correctly predicted a primitive that was actually performed. The non-diagonal values represent the identification errors made by the models. Rows reflect swap-out errors for ground truth primitives and indicate how often a ground truth primitive was incorrectly predicted as another primitive class. Columns reflect swap-in errors for predicted primitives and indicate how often an incorrect primitive was predicted instead of the ground truth primitive. Comparing sensitivities (diagonal values), CNN outperformed Seq2Seq in classifying repositions (80.7% versus 75.4%) but underperformed in classifying the remaining primitives. ASRF outperformed Seq2Seq in classifying reaches (81.6% versus 77.6%) and repositions (79.0% versus 75.4%), but underperformed in classifying the remaining primitives. Random forest underperformed Seq2Seq in classifying all primitives.
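As a minimal sketch of how such ground-truth-normalized confusion matrices are computed (this is not the authors' code; the primitive labels and example sequences below are hypothetical), each row of raw counts is divided by the number of ground truth instances of that primitive, so diagonal entries give per-primitive sensitivity (recall) and off-diagonal entries give the error rates:

```python
from collections import Counter

# Hypothetical primitive label set, for illustration only.
PRIMITIVES = ["reach", "transport", "reposition", "stabilize", "idle"]

def normalized_confusion(y_true, y_pred, labels):
    """Confusion matrix with each row divided by the ground truth count
    of that label, so the diagonal is per-class sensitivity (recall)."""
    idx = {lab: i for i, lab in enumerate(labels)}
    n = len(labels)
    counts = [[0] * n for _ in range(n)]
    for t, p in zip(y_true, y_pred):
        counts[idx[t]][idx[p]] += 1  # row = ground truth, column = prediction
    totals = Counter(y_true)
    return [
        [c / totals[labels[r]] if totals[labels[r]] else 0.0 for c in row]
        for r, row in enumerate(counts)
    ]

# Made-up ground truth and predicted primitive sequences.
y_true = ["reach", "reach", "transport", "idle", "reach", "idle"]
y_pred = ["reach", "transport", "transport", "idle", "reach", "stabilize"]
cm = normalized_confusion(y_true, y_pred, PRIMITIVES)
# cm[0][0] is the sensitivity for "reach": 2 of 3 reaches predicted correctly.
```

Reading across a row shows the swap-out errors for that ground truth primitive; reading down a column shows the swap-in errors for that predicted primitive, matching the description in the figure captions.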