"Visual Feedback of an Intermediate Mapping During Motor Learning"

Event Date:
Friday, June 21, 9:00 AM - 10:00 AM

NEC Seminar, Friday, June 21, 9:00 AM

 

Hybrid: in person in Wickenden 321 and via Zoom

Meeting ID: 928 0482 8495
Passcode: 185518

 

Speaker

Marsalis Smith, DPT, PT (he/him/his)

Biomedical Engineering PhD Candidate
Robotics Lab, Shirley Ryan AbilityLab

Northwestern University


 

 


Title

"Visual Feedback of an Intermediate Mapping During Motor Learning"

 

Abstract

Many-to-few mappings are important in many control scenarios, especially in brain- and body-machine interface devices for performance enhancement, training, and assistance with disability. From the perspective of neuromuscular physiology, many-to-few mappings typically involve distinct stages of dimensionality reduction, as when a great number of motor neurons (thousands or tens of thousands) provide input to many muscles (tens of muscles) to control the two-dimensional motion of a cursor on a computer screen (i.e., endpoint motion). We explored how a many-to-few mapping might be facilitated by visual cues of an intermediate map: a 3-degree-of-freedom (3-DOF) planar mechanism. We hypothesized that providing explicit visual knowledge of how an intermediate map influences endpoint motion would result in faster learning of a novel visuomotor task that in many ways reflects known features of human sensorimotor control. Here, we required healthy human participants to learn how to control the two-dimensional motion of a screen cursor using 18 signals derived from hand gestures measured with a CyberGlove (i.e., the 18-DOF control signals). In all cases, these control signals were mapped intermediately onto the three joint angles of a 3-DOF planar linkage, and cursor motion was constrained to the tip of the linkage. One group of participants could see the linkage during training on the task, whereas another group could not. We compared target-capture performance across the two groups to test our hypothesis. We found that individuals with explicit visual feedback of the intermediate mapping performed better only in the early stages of learning.
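The pipeline described in the abstract (18 glove signals mapped to 3 joint angles, with the cursor at the tip of a planar linkage) can be sketched in code. This is a minimal illustrative sketch, not the study's actual implementation: the linear mixing matrix `W` and the link lengths are assumed placeholder values, and the study's real intermediate map may differ.

```python
import numpy as np

# Illustrative sketch of a many-to-few mapping:
# 18 glove-derived control signals -> 3 joint angles of a planar linkage
# -> 2-D cursor position at the linkage tip.
# W and LINK_LENGTHS are assumed values for demonstration only.

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 18)) * 0.1       # assumed linear 18 -> 3 intermediate map
LINK_LENGTHS = np.array([0.3, 0.25, 0.2])    # assumed link lengths (arbitrary units)

def glove_to_joints(glove_signals):
    """Reduce 18 control signals to 3 joint angles (radians)."""
    return W @ glove_signals

def forward_kinematics(joint_angles):
    """Planar forward kinematics: tip position of a 3-link serial chain."""
    cumulative = np.cumsum(joint_angles)      # absolute orientation of each link
    x = np.sum(LINK_LENGTHS * np.cos(cumulative))
    y = np.sum(LINK_LENGTHS * np.sin(cumulative))
    return np.array([x, y])

glove = rng.standard_normal(18)               # simulated hand-gesture signals
cursor = forward_kinematics(glove_to_joints(glove))
print(cursor)                                 # 2-D cursor position (endpoint motion)
```

The key design point the sketch makes concrete is that many distinct 18-dimensional glove configurations collapse onto the same 3 joint angles, and hence the same cursor position; showing or hiding the linkage changes only what participants can see, not the mapping itself.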