Original Paper
Abstract
Background: Sustained engagement is essential for the success of telerehabilitation programs. However, patients’ lack of motivation and poor adherence can undermine these goals. To overcome this challenge, physical exercises have often been gamified. Building on the advantages of serious games, we propose a citizen science–based approach in which patients perform scientific tasks by using interactive interfaces and help advance scientific causes of their choice. This approach capitalizes on human intellect and benevolence while promoting learning. To further enhance engagement, we propose performing citizen science activities in immersive media, such as virtual reality (VR).
Objective: This study aims to present a novel methodology to facilitate the remote identification and classification of human movements for the automatic assessment of motor performance in telerehabilitation. The data-driven approach is presented in the context of a citizen science software dedicated to bimanual training in VR. Specifically, users interact with the interface and make contributions to an environmental citizen science project while moving both arms in concert.
Methods: In all, 9 healthy individuals interacted with the citizen science software by using a commercial VR gaming device. The software included a calibration phase to evaluate the users’ range of motion along the 3 anatomical planes of motion and to adapt the sensitivity of the software’s response to their movements. During calibration, the time series of the users’ movements were recorded by the sensors embedded in the device. We performed principal component analysis to identify salient features of movements and then applied a bagged trees ensemble classifier to classify the movements.
Results: The classification achieved high performance, reaching 99.9% accuracy. Among the movements, elbow flexion was the most accurately classified movement (99.2%), and horizontal shoulder abduction to the right side of the body was the most misclassified movement (98.8%).
Conclusions: Coordinated bimanual movements in VR can be classified with high accuracy. Our findings lay the foundation for the development of motion analysis algorithms in VR-mediated telerehabilitation.
doi:10.2196/27597
Introduction
Stroke Telerehabilitation
Stroke is continuously cited as a leading cause of disability in adults. Every year, 795,000 Americans experience stroke, and 649,000 survive it [
]. Approximately 610,000 of these cases are first attacks, indicating that the population of stroke survivors is rapidly increasing [ ]. Stroke survivors commonly experience neuromuscular disorders that profoundly disrupt their lives. It is estimated that 74% of stroke survivors require assistance with activities of daily living, costing billions of dollars annually [ , ]. Beyond loss of mobility, stroke-induced disability takes a societal toll; many stroke survivors can no longer contribute to the workforce and lose their functional role in their community [ ]. They often enter a downward spiral that is associated with a steep decline in their psychological and cognitive well-being, affecting their families and social circles [ , , ].

Motivated by these economic and societal needs, rehabilitation medicine aims to reintegrate individuals with disabilities into society. This process typically involves multiple visits to outpatient clinics, where therapists treat patients with arduous exercises. The more frequently and intensely patients exercise, the sooner they recover muscle strength and function [
]. Nonetheless, outpatient clinics are often underequipped and understaffed. As a result, patients have to wait for long periods for appointments and do not receive sufficient care, significantly hindering their recovery [ ]. To address this issue, the notion of telerehabilitation has emerged.

In the ideal telerehabilitation paradigm, patients are prescribed home-based exercises involving electronic devices that measure their movements [
- ]. Data on motion are then sent to a physician, who would, in turn, remotely assess motor performance and recommend the next steps in the rehabilitation regimen. Through this process, patients are expected to exercise at their own convenience at home, readily receive professional feedback, and ultimately maximize their rehabilitation outcomes. Multiple telerehabilitation systems have been introduced in the past 20 years, demonstrating feasibility and yielding outcomes comparable with those of traditional in-clinic rehabilitation [ - ].

Despite the promising prospects, the advantages of telerehabilitation are often not realized, as patients fail to adhere to their prescribed regimen in the absence of a physical therapist [
, ]. One of the primary factors underlying lack of adherence is lack of motivation [ , ]. To address this critical limitation, considerable effort has been invested in the gamification of telerehabilitation [ - ]. Notably, Java Therapy, one of the first telerehabilitation systems, incorporated therapy games between status tests that measured rehabilitation progress [ ]. Similarly, games that involve chasing rabbits [ ], catching falling fruit [ ], and even playing competitive air hockey [ ] were developed to make physical exercise more enjoyable.

Citizen Science–Based Telerehabilitation
Although games effectively improve engagement in telerehabilitation, incorporating citizen science into the activity has been proposed as an alternative [
]. In citizen science, members of the general public carry out research tasks in projects led by professional scientists [ , ]. These tasks involve data collection or data analysis and do not require any particular expertise or commitment [ , ]. Citizen science is a compelling means for improving engagement in telerehabilitation for a few reasons. Similar to games, the motivations underlying participation in citizen science are primarily intrinsic [ , ]. Some citizen science projects incorporate gaming elements, such as point systems, scoreboards, or competitions, to promote long-term participation [ , ]. Unlike in games, citizen scientists choose to contribute to a project not only because it is enjoyable or fun but also because they are interested in the research topic, they have a desire to learn more about it, and they would like to promote it [ - ]. In essence, citizen science is intellectually stimulating and encourages learning. Moreover, citizen science has the potential to empower patients to help scientists despite their disability, increase their self-esteem, and provide them with a sense of belonging to a community [ , ]. Finally, because it is important to the leading scientists that data be collected and analyzed meticulously, there is rarely a time constraint on making a contribution, and users can contribute at their own pace.

In a recent study, we presented a low-cost telerehabilitation system that delivers exercise in the context of citizen science [
]. The system consisted of a Microsoft Kinect sensor and an inertial measurement unit mounted on a wooden dowel. Users would manipulate the dowel in front of the Kinect sensor to perform actions on a standard computer monitor or television screen. More specifically, the actions involved the annotation of 360° images of a highly polluted canal in Brooklyn, New York, United States. The system was dedicated to bimanual exercise, in which users would manipulate the dowel with both hands. The system also featured a classification algorithm that identified the movements performed by the user, which achieved a high accuracy of 93.1%.

In this study, we adapted the Kinect-based interface to virtual reality (VR) and focused on the classification of upper limb movements in a preclinical setting. We recorded the interactions of 9 healthy users with the Oculus Rift (Oculus VR), a popular VR gaming system. The Oculus Rift consists of a head-mounted display, 2 Touch controllers, and 2 tracking sensors. Inertial measurement units are embedded in the head-mounted display and Touch controllers such that the system is able to record the orientation of the head and the hands. The devices are also equipped with an array of infrared lights, which, in conjunction with the tracking sensors, enable high-fidelity motion tracking through Oculus’s trademarked Constellation tracking system [
]. The VR setting offered more degrees of freedom in motion relative to our Kinect-based system, whereby users could rotate their entire bodies to interact with the interface. Therefore, to adapt the software and classification algorithm, we applied a kinematic framework that infers the position and orientation of the Touch controllers relative to the head-mounted display.

Our choice to explore human movement in VR was motivated by 2 main reasons. First, the major barriers that prevent the widespread adoption of rehabilitation technologies are cost and user friendliness. Rehabilitation devices are often custom-made, cost-prohibitive, and require technological proficiency that extends beyond the typical knowledge of the general public [
]. On the other hand, gaming controllers such as the Oculus Rift are safe and intuitive to use and are more affordable than rehabilitation robots, thereby offering a viable means for home-based telerehabilitation. Gaming controllers can also objectively measure motor performance through their embedded sensors. Specifically, the Oculus Rift tracks movements of the headset and Touch controllers with high spatial and temporal resolution, thereby providing rich data on the user’s motions. The system has been validated in controlled experiments and deemed sufficient for motion analysis in medical applications [ , ].

Second, VR is the most immersive medium available today. The technological apparatus of VR grants the user the experience of presence, where the user accesses a novel environment and interacts with it as if the computer ceased to exist [
, ]. In the context of rehabilitation, immersive VR environments are largely used to improve patients’ engagement and adherence to the rehabilitation regimen, which, in turn, accelerates their recovery [ , , ]. The literature suggests that patients undergoing rehabilitation augmented with VR could substantially improve their motivation and motor functions [ , ]. For example, Dockx et al [ ] compared 281 older adults’ perceptions of fall prevention training over a period of 6 weeks when delivered with and without VR. Participants who exercised in the VR condition reported higher engagement and perceived benefits and were more likely to recommend the intervention to others than those who did not use VR in their training. In another study, AlMousa et al [ ] tested a game with 5 patients with stroke and compared their satisfaction when playing in VR and in a traditional setting. All patients agreed that the VR modality was highly motivating and expressed interest in including it in their rehabilitation. Finally, in a study involving 4 patients with spinal cord injury, Palaniappan and Duerstock [ ] showed that VR improved motor performance, whereby patients’ upper limb range of motion increased.

We created an interactive interface in which users could participate in an environmental citizen science project. In this particular application, users contributed to the environmental monitoring of the highly polluted Gowanus Canal in Brooklyn, New York, United States. Users could explore 360° images of the canal, select labels from a list of 4 labels, and allocate them onto objects of interest, such as potential pollutants and notable landmarks (
).

The interface was dedicated to bimanual training of patients with stroke, whereby users interacted with it by performing coordinated movements with both arms. Many rehabilitation strategies, such as constraint-induced movement therapy [
, ], task-oriented training [ ], and continuous passive movement [ ], have various advantages. Bimanual training is highlighted as a potent clinical approach for the recovery of coordinated movements, with both physiological and practical advantages [ ]. Research has shown that passive movement of paretic limbs can recover voluntary motion by imparting electrical impulses to the contralateral primary motor cortex (sometimes referred to as spillover) [ - ] that are then projected to the affected muscles [ - ]. Furthermore, it has been argued that bimanual skills are abundant in activities of daily living and therefore practicing them will help patients regain independence more quickly [ - ].

We pursued a simple, yet effective, data-driven approach to automatically assess bimanual movements in VR.
Motor Assessment Using Machine Learning
Machine learning offers an important avenue for automatically identifying and categorizing human behavior. In machine learning, a computer uses data to predict an outcome without explicitly knowing the relationship between the data and the outcome [
, ]. The input of a machine learning algorithm consists of features that describe instances of data. When a supervised machine learning approach is used, knowledge of the outcome must be available during training. In this case, a set of instances is fed to the machine, encapsulating their features and associated outcomes [ , ]. For example, Begg and Kamruzzaman [ ] used machine learning to distinguish between the gait of young and older adults. The authors fed their machine learning algorithm with data on the gait of 12 young and 12 older individuals, whose gait was summarized through multiple features, such as stride length, walking speed, forces applied by the feet, and ankle angles. They used a supervised machine learning approach (support vector machine [SVM]) and therefore provided the machine with the true class of each participant: young or older. Following training, the SVM classifier achieved an accuracy rate of 91.7% in classifying the participants’ age group.

In a similar study, Novak et al [
] aimed to identify gait initiation and termination using wearable inertial measurement units. The authors recorded 10 participants walking with inertial sensors on their legs and trained a tree classifier to distinguish between gait phases. The algorithm exceeded 80% accuracy and was robust with respect to gait speed. Semwal et al [ ] trained a multilayer perceptron to identify disordered gait. The authors defined features for walking, running, jogging, and jumping from vision-based and sensor-based data and achieved accuracy rates ranging from 85% to 92.5%.

Despite its success with gait analysis, the use of machine learning to assess upper limb movement has not been extensively studied. Such an assessment is more challenging, as the repertoire of arm movements is wider than that of the lower limbs. In several studies, statistical pattern recognition algorithms have been used to quantify the motor performance of the upper limb from data collected by inertial sensors [
] and vision-based sensors [ ]. Additional work to recognize upper limb movement was carried out using k-means clustering and convolutional neural networks [ , ]. Nonetheless, the efficacy of machine learning in upper limb rehabilitation remains underexplored.

Objective
We developed a machine learning algorithm that classifies the movements performed by the user to automate the assessment of motor performance. The proposed algorithm implements dimensionality reduction through principal component analysis (PCA), feature extraction, and ensemble classification. In all, 9 healthy individuals interacted with our interface while data on their movements were recorded by the sensors embedded in the Oculus Rift devices. Movements were classified with remarkably high accuracy; such automatic classification could reduce the time and cost of poststroke motor assessment by a therapist. Furthermore, the classification strategy can be extended to provide haptic feedback that helps the user perform exercises correctly and safely.
Methods
VR Interface
The interface was developed in the Unity real-time game engine (Unity Technologies) for use with the Oculus Rift VR system. In the game, participants were presented with a random 360° image of the Gowanus Canal, overlaid by a heads-up display (HUD). The HUD served as the participants’ main method of interacting with the application. It contained a button for navigating between images of the canal, a trash bin, and a list of descriptive keywords that may or may not describe objects within the image.
Users were tasked with analyzing the images. Specifically, they could explore the 360° images, select labels from the list of keywords, and allocate them to objects of interest (
). If users could not find an object in the image that a label described, they could eliminate the label by allocating it onto the trash bin ( ). Once the user felt that the image was saturated with labels, they could analyze a new image by selecting the Next Image button.

To interact with the HUD, the users performed bimanual gestures (
). Specifically, users began from a baseline pose in which they flexed their elbows and held the Touch controllers near their shoulders. To move the cursor to the left, they extended both arms to the left side of their body, simultaneously performing horizontal abduction of the left shoulder, horizontal adduction of the right shoulder, shoulder flexion, elbow extension, and forearm pronation ( A). Similarly, to move the cursor to the right, they performed horizontal shoulder abduction in the opposite direction, extending both hands to the right side of their bodies ( D). To move the cursor upward, users raised the Touch controllers by flexing their shoulders and extending their elbows ( B). To move the cursor downward, they extended both elbows and lowered the Touch controllers ( E). Finally, to select a button, they flexed both shoulders simultaneously and extended their elbows, pushing the Touch controllers away from their body ( C and F). These movements involve most joints of the upper limb and are commonly prescribed to patients [ , ]. If a user wanted to move the cursor diagonally along the screen, they would instead move it horizontally and then vertically.

To enable the user interface, we implemented a kinematic framework based on data on the positions of the head-mounted display and Touch controllers, measured by the infrared tracking sensors. We considered 4 reference frames: the inertial, global frame, denoted as {G}, and the 3 noninertial reference frames associated with the head-mounted display, the right-hand Touch controller, and the left-hand Touch controller, denoted as {H}, {R}, and {L}, respectively (
).

Throughout the game, the midway point h between the Touch controllers, PGh (
A), was computed in real time as

$$P_{Gh} = \frac{P_{GR} + P_{GL}}{2}$$

where PGh is a vector of the form [X Y Z]T that expresses the position of the midpoint h in the global frame {G} (T denoting matrix transposition); X, Y, and Z are the positions of a point along the X-, Y-, and Z-axis of the global frame {G}, respectively; and subscripts R and L represent the right and left Touch controllers, respectively. The cursor on the screen responded to the instantaneous values of PGh. For example, if XGh was greater than a certain threshold value, the cursor would move to the left on the screen. Similarly, if XGh was smaller than a certain negative threshold value, the cursor would move to the right. Considering that patients may take longer to complete their movements, we did not impose any time constraints on these controls.
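To make the control scheme concrete, the following is a minimal Python sketch of the midpoint computation and cursor logic. It is illustrative rather than the authors’ implementation: the function names, the sign conventions for the Y- and Z-axis, and the use of headset-relative coordinates are assumptions (the text specifies only that a positive X excursion beyond the threshold moves the cursor left; the calibration of these thresholds is described next).

```python
import numpy as np

def midpoint(p_GR: np.ndarray, p_GL: np.ndarray) -> np.ndarray:
    """Midpoint h between the right and left Touch controllers in the global frame {G}."""
    return (p_GR + p_GL) / 2.0

def calibration_thresholds(reps: list, frac: float = 0.25) -> np.ndarray:
    """Per-axis thresholds: 0.25 of the average maximum excursion of the
    midpoint from the headset over the 5 calibration repetitions.
    reps: 5 arrays of shape (T x 3) holding p_Gh - p_GH for each repetition."""
    max_excursions = np.stack([np.abs(r).max(axis=0) for r in reps])
    return frac * max_excursions.mean(axis=0)

def cursor_command(p_Gh: np.ndarray, p_GH: np.ndarray, thr: np.ndarray) -> dict:
    """Map the midpoint position, relative to the headset, to cursor motion."""
    d = p_Gh - p_GH
    return {
        "left": d[0] > thr[0],     # arms extended past the threshold on +X
        "right": d[0] < -thr[0],   # arms extended past the threshold on -X
        "up": d[1] > thr[1],       # controllers raised (assumed +Y is up)
        "down": d[1] < -thr[1],    # controllers lowered
        "select": d[2] > thr[2],   # controllers pushed forward (assumed +Z)
    }
```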
To accommodate impaired movement with a compromised range of motion, a calibration phase was added to determine the aforementioned threshold values. During calibration, the participant performed each of the movements 5 times consecutively. The software computed the average of the user’s range of motion over the 5 iterations as follows:

$$\bar{P} = \frac{1}{5}\sum_{n=1}^{5}\max_{t}\left|P_{Gh,n}(t) - P_{GH,n}(t)\right|$$

where n=1, 2,..., 5 is the iteration of the movement; PGh,n is the time series of the position of the midpoint between the right and left Touch controllers during iteration n; and PGH,n is the time series of the position of the head-mounted display during iteration n (the absolute value and maximum are taken along each axis). The application set a threshold point at a distance of 0.25 of this average range along the X-, Y-, and Z-axis of the head-mounted display (
B). At any time when PGh exceeded this threshold, the cursor began moving on the screen along the axes that satisfied the condition ( C). Thus, users who had a limited range of motion had to move their arms a shorter distance to induce movement of the cursor on the screen.

Finally, acknowledging that physical therapy can be physically and mentally taxing, we enabled a Home page menu such that patients could press a button to pause the software and rest. This feature is particularly important for telerehabilitation of stroke, as many patients may feel pain or fatigue, discouraging them from engaging in the exercise [ ].

Data Collection
This study was carried out in accordance with the relevant guidelines and regulations set by New York University’s Institutional Review Board, the University Committee on Activities Involving Human Subjects (study number: FY2019-2828). Informed consent for participation was obtained from all participants.
In all, 9 members of the university community were recruited and escorted to a private room. They were introduced to the project and VR system. Upon signing a consent form, the participants stood in a 3 meter × 3 meter cleared space and wore the head-mounted display. They viewed a short presentation about the Gowanus Canal and the notion of citizen science and underwent a calibration phase.
The calibration was designed such that the participants began with a baseline pose with their elbows bent and hands held near their respective shoulders. The participants first performed horizontal shoulder abduction toward their right side. Instructions on the screen explicitly asked the participants to extend their arms as far as possible to the right and return to the baseline pose, repeating this movement 5 times. Then, the participants performed horizontal shoulder abduction toward their left side and returned to the baseline pose 5 times. In the same manner, the participants performed shoulder flexion by raising both hands, elbow extension by lowering both hands, and simultaneous shoulder flexion and elbow extension by pushing both hands forward in this order. The participants repeated each movement 5 times consecutively and returned to the baseline pose after each excursion.
After calibration, the participants completed a tutorial teaching them how to use the HUD. They then analyzed images of the Gowanus Canal for as long as they wished. The movements of the participants were recorded throughout the experiment. The data set consisted of the time series of the positions of the head-mounted display and Touch controllers in 3D and their orientations in Tait-Bryan angles. Measurements were logged at a sampling rate of 89 measurements per second.
Data Analysis
Kinematics in the VR Setting
Data were processed and analyzed in MATLAB (MATLAB R2020a; The MathWorks, Inc). We aimed to infer the participants’ movements during their interaction with the VR system from data on the positions and orientations of the head-mounted display and Touch controllers. In VR, the interface is not constrained to a fixed planar screen, and participants’ interactions extend to 3D space, whereby users can walk and turn their bodies around. Therefore, to infer the participants’ movements, the positions of their hands relative to their heads are more informative than their positions in absolute space.
We began with a kinematic description of the positions and orientations of the Touch controllers relative to the head-mounted display through matrix manipulation [
]. The reference frame of the head-mounted display was expressed with respect to the global frame using the rotation matrix

$$R_{GH} = \begin{bmatrix} \hat{X}_{GH} & \hat{Y}_{GH} & \hat{Z}_{GH} \end{bmatrix}$$

where a superimposed hat identifies unit vectors for the reference frames, such that the columns of the matrix are the unit vectors of {H}, expressed in {G}’s coordinate system. Similarly, the reference frames of the Touch controllers with respect to the global frame were expressed as RGR and RGL for the right and left controllers, respectively.
Taking the devices’ rotation matrices, the frame of reference of the right Touch controller relative to the head-mounted display was calculated as

$$R_{HR} = R_{GH}^{-1} R_{GR}$$

and the left Touch controller’s was calculated as

$$R_{HL} = R_{GH}^{-1} R_{GL}$$

where the inverse is equivalent to the transpose of the matrix [ ]. To fully describe the instantaneous relative positions and relative orientations of the devices, we applied the homogeneous transform [ ] at each time step, such that

$$\begin{bmatrix} P_{GH} \\ 1 \end{bmatrix} = \begin{bmatrix} R_{GR} & P_{GR} \\ \mathbf{0} & 1 \end{bmatrix} \begin{bmatrix} P_{RH} \\ 1 \end{bmatrix}$$

where PGH is the instantaneous position of the head-mounted display in the global frame, PRH is the instantaneous position of the head-mounted display in the right-hand controller frame, RGR is the rotation matrix of the right Touch controller frame relative to the global frame, PGR is the position of the right Touch controller in the global frame, and the 0 in the bottom-left entry of the matrix represents a row vector of 3 zeros. We applied the transformation to instantaneous measurements at each time step and generated a time series containing the positions and orientations of the Touch controllers relative to the head-mounted display.
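The composition of rotation matrices described above can be sketched compactly with NumPy. This is an illustration under stated assumptions, not the authors’ code: relative_pose applies the inverse-equals-transpose property of rotation matrices, and tait_bryan_zyx assumes a Z-Y-X rotation sequence, which the text does not specify.

```python
import numpy as np

def relative_pose(R_GH, p_GH, R_GC, p_GC):
    """Pose of a Touch controller frame {C} relative to the headset frame {H}.
    R_GH, R_GC are 3x3 rotation matrices of the headset and controller in {G};
    p_GH, p_GC are their positions in {G}."""
    R_HC = R_GH.T @ R_GC              # relative orientation (inverse = transpose)
    p_HC = R_GH.T @ (p_GC - p_GH)     # relative position, expressed in {H}
    return R_HC, p_HC

def tait_bryan_zyx(R):
    """Tait-Bryan angles (alpha, beta, gamma) of a rotation matrix, assuming
    a Z-Y-X rotation sequence."""
    beta = -np.arcsin(R[2, 0])
    alpha = np.arctan2(R[1, 0], R[0, 0])
    gamma = np.arctan2(R[2, 1], R[2, 2])
    return alpha, beta, gamma
```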
Assessing Motor Performance
To enable comparison of patients’ movements with those of healthy individuals, we quantified the participants’ motor performance using several metrics: (1) range of motion, computed as the maximum distance of each of the Touch controllers from the headset along each of the anatomical planes [ , ]; (2) mean speed, computed as the average of the instantaneous speeds [ , ]; (3) smoothness, computed as the mean speed divided by the maximal instantaneous speed [ , ]; and (4) path length, measured as the sum of distances between pairs of consecutive data points during movement [ , ].
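A minimal sketch of these four metrics follows, assuming a (T x 3) headset-relative position time series sampled at 89 Hz; the smoothness ratio here follows the mean-over-peak definition given in the text, although the reciprocal (peak-over-mean) convention also appears in the literature.

```python
import numpy as np

FS = 89.0  # sampling rate (Hz)

def motor_performance(p: np.ndarray) -> dict:
    """Motor performance metrics for one movement; p is a (T x 3) time series
    of a Touch controller's position relative to the headset."""
    steps = np.diff(p, axis=0)                  # consecutive displacements
    step_len = np.linalg.norm(steps, axis=1)
    speeds = step_len * FS                      # instantaneous speeds
    return {
        "range_of_motion": np.abs(p).max(axis=0),           # per anatomical axis
        "mean_speed": float(speeds.mean()),
        "smoothness": float(speeds.mean() / speeds.max()),  # mean over peak
        "path_length": float(step_len.sum()),
    }
```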
Feature Selection

We pursued a data-driven methodology to classify the movements performed by the participants based on the Touch controllers’ positions and orientations relative to the head-mounted display. Only data from the calibration phase were used in the analysis, as the sequence of movements performed by the participants during this period was known and could be specified in supervised training. The data collected in the remainder of the session, while participants interacted with the citizen science software, could be used in future endeavors to assess motor performance and engagement over longer periods, once automatic classification is implemented.

We also included the instantaneous linear and angular velocities of the head-mounted display and Touch controllers in the global frame in the analysis. Specifically, we computed the devices’ linear velocities along the X-, Y-, and Z-axis of the global reference system, denoted as XG(∙), YG(∙), and ZG(∙), and their angular velocities about these axes, denoted as γG(∙), βG(∙), and αG(∙), where (∙) is the noninertial reference frame under examination. We also computed the Touch controllers’ positions and orientations relative to the head-mounted display, denoted as XH(∙), YH(∙), and ZH(∙) and γH(∙), βH(∙), and αH(∙), respectively. In general, we denoted XAB as the generic coordinate of point B in coordinate system {A}. For notational convenience, when the trailing subscript is a reference frame, B represents the position of the origin of frame {B}. For example, XHR is the position of the right Touch controller along the X-axis of the head-mounted display frame. Similarly, γGR is the angular velocity of the right Touch controller about the X-axis in the global frame. Overall, the data set included 30 variables, as summarized in
Device and variable notation | Variable description
Head-mounted display
XGH, YGH, ZGH | Linear velocity in {G}
γGH, βGH, αGH | Angular velocity in {G}
Right Touch controller
XHR, YHR, ZHR | Position in {H}
γHR, βHR, αHR | Orientation in {H}
XGR, YGR, ZGR | Linear velocity in {G}
γGR, βGR, αGR | Angular velocity in {G}
Left Touch controller
XHL, YHL, ZHL | Position in {H}
γHL, βHL, αHL | Orientation in {H}
XGL, YGL, ZGL | Linear velocity in {G}
γGL, βGL, αGL | Angular velocity in {G}
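As a rough illustration of how the 30-variable data set can be assembled from the raw device streams, the sketch below reuses relative_pose and tait_bryan_zyx from the earlier sketch. Approximating angular velocities by differentiating the Tait-Bryan angles is an assumption made for brevity; true body rates would require the angular velocity tensor.

```python
import numpy as np

FS = 89.0  # sampling rate (Hz)

def angle_series(R_seq):
    """Tait-Bryan angles for a (T x 3 x 3) sequence of rotation matrices."""
    return np.stack([tait_bryan_zyx(R) for R in R_seq])

def assemble_variables(p_GH, R_GH, p_GR, R_GR, p_GL, R_GL):
    """Build the (T x 30) variable matrix summarized in the table above."""
    dt = 1.0 / FS
    # headset linear and angular velocities in {G}: 6 variables
    cols = [np.gradient(p_GH, dt, axis=0),
            np.gradient(angle_series(R_GH), dt, axis=0)]
    for p_GC, R_GC in ((p_GR, R_GR), (p_GL, R_GL)):
        # controller position and orientation relative to {H}: 6 variables
        rel = [relative_pose(Rh, ph, Rc, pc)
               for Rh, ph, Rc, pc in zip(R_GH, p_GH, R_GC, p_GC)]
        p_HC = np.stack([p for _, p in rel])
        a_HC = np.stack([tait_bryan_zyx(R) for R, _ in rel])
        # controller linear and angular velocities in {G}: 6 variables
        cols += [p_HC, a_HC,
                 np.gradient(p_GC, dt, axis=0),
                 np.gradient(angle_series(R_GC), dt, axis=0)]
    return np.hstack(cols)  # 6 + 12 + 12 = 30 columns
```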
Next, we automatically identified instances of movement (versus nonmovement) in the time series of each variable and segmented them. Specifically, we took finite differences of the Touch controllers’ positional data with respect to time and defined the speed time series [ ]

$$\Omega(t) = \left\| \frac{P_{Gh}(t+\Delta t) - P_{Gh}(t)}{\Delta t} \right\|$$

where Δt is the sampling interval. Intervals of movement were taken as the instances where Ω exceeded 0.077 meters per second and lasted for longer than 0.2 seconds ( ). These threshold values were derived empirically and were not unique to the participant. To identify instances where a distinct pose occurred, pairs of consecutive intervals and the time series between them were selected as segments. Overall, 25 segments were identified per participant, one for each repetition of each movement.

PCA was performed to identify salient variables in each movement. Within segments n=1, 2,..., 25, each of the 30 time series was normalized with respect to its own SD in the segment. The normalized time series sn,i was represented by a column vector containing variable i=1, 2,..., 30 in segment n. For each segment n, we generated a covariance matrix Kn, whose entries i, j are given by

$$K_n(i,j) = \overline{\left(s_{n,i}-\bar{s}_{n,i}\right)\left(s_{n,j}-\bar{s}_{n,j}\right)}$$

where i=1, 2,..., 30; j=1, 2,..., 30; and $\bar{s}_{n,i}$ is the average value of the components of vector sn,i. As there are 30 variables, there are 30×30 possible ordered variable pairs for which to compute the covariance, which is the size of the symmetric matrix Kn.
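The segmentation and per-segment covariance can be sketched as follows. This is an illustration of the procedure described above, not the authors’ MATLAB code, and the pairing of strokes into pose segments is a simplifying assumption about how consecutive intervals were grouped.

```python
import numpy as np

FS = 89.0              # sampling rate (Hz)
SPEED_THRESH = 0.077   # meters/second
MIN_DURATION = 0.2     # seconds

def movement_intervals(p_Gh):
    """Intervals where the speed Omega of the midpoint exceeds SPEED_THRESH
    for longer than MIN_DURATION; p_Gh is a (T x 3) position time series."""
    omega = np.linalg.norm(np.diff(p_Gh, axis=0), axis=1) * FS
    moving = np.concatenate([[False], omega > SPEED_THRESH, [False]])
    edges = np.flatnonzero(np.diff(moving.astype(int)))
    starts, ends = edges[::2], edges[1::2]
    keep = (ends - starts) / FS > MIN_DURATION
    return list(zip(starts[keep], ends[keep]))

def pose_segments(intervals):
    """Pair consecutive movement intervals so that each segment spans two
    strokes and the held pose between them (25 segments per calibration)."""
    return [(intervals[k][0], intervals[k + 1][1])
            for k in range(0, len(intervals) - 1, 2)]

def segment_covariance(X, start, end):
    """Covariance matrix K_n of the (T x 30) variable matrix X in one segment,
    after normalizing each variable by its SD within the segment."""
    S = X[start:end]
    S = S / S.std(axis=0)           # normalize by within-segment SD
    S = S - S.mean(axis=0)          # center
    return (S.T @ S) / S.shape[0]   # 30 x 30 symmetric matrix
```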
The principal components of each covariance matrix Kn were determined from the dominant eigenvalues λi [ ]. To identify these eigenvalues, we defined a spectral gap as the largest difference between consecutive eigenvalues sorted in descending order ( A). The eigenvalues that preceded the gap were deemed dominant. Then, we examined the contributions of the components of eigenvector vi, the so-called principal component loadings, to these principal components. We sorted the absolute values of these loadings in descending order and recognized a gap as the largest difference between consecutive values. The loadings that appeared before the gap were retained, and the associated variables were used as salient variables that summarize the entire principal component ( B).

The salient variables we identified in the PCA were used to create discriminating statistics for training a classification algorithm. In the training, given the true class of a movement that was performed, the algorithm would unveil different relationships between the features that distinguish one movement from another [ , ].

Importantly, we observed that only the orientations of the Touch controllers relative to the head-mounted display were prominent during movement. Thus, their means and SDs were selected as features. We also included the mean positions of the Touch controllers relative to the head-mounted display as features to further support the distinction between static poses. Nonetheless, we acknowledged that movements may be better discriminated using features that encapsulate the interactions between the variables. Therefore, we used correlation coefficients, which relate 2 variables at a time, as additional features. The correlation coefficients between γ, β, and α of one Touch controller and their counterparts in the other Touch controller were added to the analysis, yielding 21 features in total.
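Below is a sketch of the spectral-gap selection and of the assembly of the 21 features within one window. The gap heuristic follows the description above; the helper names are ours, and the angle matrices are assumed to hold (gamma, beta, alpha) columns for each controller relative to the headset.

```python
import numpy as np

def gap_cutoff(values):
    """Number of items preceding the largest drop between consecutive values
    sorted in descending order."""
    v = np.sort(values)[::-1]
    return int(np.argmax(v[:-1] - v[1:])) + 1

def salient_variables(K):
    """Indices of salient variables of one segment: the loadings of each
    dominant principal component that precede the loading gap."""
    w, V = np.linalg.eigh(K)
    w, V = w[::-1], V[:, ::-1]         # sort eigenvalues in descending order
    salient = set()
    for k in range(gap_cutoff(w)):     # dominant components before the spectral gap
        loadings = np.abs(V[:, k])
        order = np.argsort(loadings)[::-1]
        salient.update(order[:gap_cutoff(loadings[order])].tolist())
    return sorted(salient)

def window_features(p_HR, a_HR, p_HL, a_HL):
    """The 21 features of one window: means and SDs of the 6 relative angles,
    means of the 6 relative positions, and 3 right-left angle correlations."""
    feats = [a_HR.mean(axis=0), a_HL.mean(axis=0),   # 6 angle means
             a_HR.std(axis=0), a_HL.std(axis=0),     # 6 angle SDs
             p_HR.mean(axis=0), p_HL.mean(axis=0)]   # 6 position means
    corr = [np.corrcoef(a_HR[:, k], a_HL[:, k])[0, 1] for k in range(3)]
    return np.concatenate(feats + [np.array(corr)])  # length 21
```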
Movement Classification
We implemented a supervised machine learning classification that identifies which movement a user performs at any given time. To observe the evolution of features over time in a future clinical study, we chose to perform classification in a moving-window paradigm. Within this paradigm, we evaluated the actual movement and associated features within a window of several time steps, shifted the window forward in time by a single step, evaluated the features again, and so on. The length of the moving window was set to 13 time steps, equivalent to 0.15 seconds.
First, we established the true classes within each frame to train the algorithm. We visually inspected the time series of the calibration (where we knew which movement was performed), identified which movement was performed (if any) at every time step, and labeled it as such. Beginning from the first time step, we determined the true class of the window that covered the subsequent 13 time steps based on their mode; that is, the window’s true class matched the class of the majority of time steps (7 or more). The window was then shifted to the following time step, and the subsequent true class was determined. In this manner, we created a time series of the true class of the frames. In addition to the true class of each 13–time step frame, we also computed the set of 21 features and recorded them for the same frame. Thus, we created 21 additional time series, each representing the evolution of a feature.
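A sketch of the moving-window labeling, reusing window_features from the previous sketch: the true class of each 13-step window is the mode of the per-step labels, as described above (labels are assumed to be nonnegative integers).

```python
import numpy as np

WINDOW = 13  # time steps, roughly 0.15 seconds at 89 Hz

def label_windows(step_labels):
    """True class of each window position as the majority (mode) of the
    classes of the 13 time steps it covers."""
    T = len(step_labels)
    return np.array([np.bincount(step_labels[t:t + WINDOW]).argmax()
                     for t in range(T - WINDOW + 1)])

def feature_windows(p_HR, a_HR, p_HL, a_HL):
    """21-feature vector for every window position; inputs are (T x 3)
    headset-relative time series."""
    T = p_HR.shape[0]
    return np.vstack([window_features(p_HR[t:t + WINDOW], a_HR[t:t + WINDOW],
                                      p_HL[t:t + WINDOW], a_HL[t:t + WINDOW])
                      for t in range(T - WINDOW + 1)])
```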
Next, we trained a classification algorithm using MATLAB’s Classification Learner app. We compounded the moving frames’ true classes and features across participants into a single table and selected it as the data set variable. The frames’ true classes were set as response variables, and all features were set as predictors. We applied a K-fold cross-validation with K=5, such that 80% of the calibration data from all participants were used for training and the remaining 20%, for validation. Finally, we selected bagged trees as the model type.
Bagged trees is an ensemble method based on decision trees [
]. A basic decision tree splits the input data into subgroups with a similar response according to a binary criterion. The subgroups are partitioned recursively until the model is able to predict the output based on the class that has the majority representation. A bagged trees classifier performs bootstrapping and aggregation, that is, bagging, on a multitude of decision trees. Specifically, the bagged trees algorithm generates decision trees by resampling the data set with replacement and determines the response class based on the simple majority of the trees’ predictions. Thus, this classification method mitigates the high variance often observed in individual decision trees [ , ].

Because the trees are produced by bagging, all features are considered for a splitting event. It is possible to score the importance of each feature by estimating the out-of-bag error; that is, instances that were not sampled when a tree was generated are used to make a prediction, and the mean error of the prediction is computed. The features whose perturbation yielded the largest increase in mean error were considered the most important.
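The study used MATLAB’s Classification Learner; as a rough scikit-learn equivalent (an assumption, not the authors’ code), a random forest with max_features=None considers all features at every split, which matches the bagged trees described above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((1000, 21))        # placeholder for the real (N x 21) feature table
y = rng.integers(0, 6, 1000)      # placeholder labels (5 movements + nonmovement)

clf = RandomForestClassifier(
    n_estimators=30,      # number of bagged trees; the exact count is illustrative
    max_features=None,    # consider all features at each split, i.e., pure bagging
    oob_score=True,       # score on out-of-bag instances
    random_state=0,
)

print("5-fold cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
clf.fit(X, y)
print("out-of-bag accuracy:", clf.oob_score_)
# Impurity-based importances; MATLAB's out-of-bag permutation importance can be
# approximated with sklearn.inspection.permutation_importance instead.
ranking = np.argsort(clf.feature_importances_)[::-1]
```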
Results
Data Collection
Data were collected from 9 healthy participants who interacted with the interface. On average, the participants interacted with the interface for 368.26 (SD 92.74) seconds, generating time series of 32,776 (SD 8254) time steps on average. A total of 294,983 measurements were collected, of which 142,916 time steps (1605.80 seconds) were recorded during the calibration phase.
Motor Performance
The participants’ range of motion, mean speed, smoothness, and path length were computed (
). The range of motion, mean speed, and smoothness for each movement in one arm were comparable with those of its symmetrical counterpart. However, during shoulder adduction and shoulder flexion or extension upward, considerable variation was measured among participants with respect to smoothness; SDs were >25% of the mean value, or even greater than the mean value, as in the case of the left hand during shoulder flexion or extension upward. Finally, in all movements, the path length was larger than the range of motion, indicating that the participants did not follow a straight line along the anatomical axes.

Movement and hand | Range of motion (meters), mean (SD) | Speed (meters/second), mean (SD) | Smoothness, mean (SD) | Path length (meters), mean (SD)
Shoulder adduction to the right
Right | 0.61 (0.11) | 0.86 (0.25) | 2.47 (1.16) | 0.72 (0.17)
Left | 0.39 (0.06) | 0.60 (0.13) | 1.95 (0.58) | 0.48 (0.08)
Shoulder adduction to the left
Right | 0.38 (0.05) | 0.60 (0.10) | 2.27 (1.43) | 0.46 (0.08)
Left | 0.61 (0.16) | 0.94 (0.25) | 4.39 (3.96) | 0.74 (0.18)
Shoulder flexion or extension upward
Right | 0.60 (0.08) | 0.86 (0.20) | 3.43 (3.33) | 0.64 (0.10)
Left | 0.59 (0.08) | 0.86 (0.20) | 3.16 (3.64) | 0.63 (0.10)
Shoulder flexion or extension downward
Right | 0.66 (0.06) | 1.02 (0.30) | 1.94 (0.21) | 0.82 (0.13)
Left | 0.66 (0.06) | 1.03 (0.29) | 1.95 (0.21) | 0.81 (0.12)
Elbow flexion or extension forward
Right | 0.45 (0.05) | 0.79 (0.19) | 1.81 (0.29) | 0.51 (0.08)
Left | 0.45 (0.05) | 0.78 (0.18) | 1.81 (0.33) | 0.50 (0.07)
Dimensionality Reduction
PCA disclosed the salient variables that best characterized each movement performed by the participants. Examination of the spectra of the covariance matrices revealed that the spectral gap was located between the largest and second largest eigenvalues for all instances of movement. Therefore, only 1 principal component was required to capture variations in movements.
Unexpectedly, among the 30 variables we considered, only the orientations of the Touch controllers were pertinent for the analysis. We found that shoulder abduction to the right side of the body and to the left side of the body were both associated with changes in the Tait-Bryan angles about the X- and Z-axis of the Touch controllers in the head-mounted display frame: γHR, αHR, γHL, and αHL. Shoulder flexion while raising the hands was dominated by variations in all 6 Tait-Bryan angles γHR, βHR, αHR, γHL, βHL, and αHL. Only changes in αHL and γHL strongly characterized elbow extension while lowering the Touch controllers. Finally, appreciable variations in αHR and αHL were most prominent during elbow extension while pushing the Touch controllers forward. Changes in γHL, βHL, γHR, and βHR were also detected in this motion. The PCA results are summarized in
.Movement | Salient variables |
Shoulder abduction to the right | γHR, αHR, γHL, αHL |
Shoulder abduction to the left | γHR, αHR, γHL, αHL |
Shoulder flexion or extension upward | γHR, βHR, αHR, γHL, βHL, αHL |
Shoulder flexion or extension downward | γHL, αHL |
Elbow flexion or extension forward | γHR, βHR, αHR, γHL, βHL, αHL
Feature Selection
We created features based on the variables identified as salient using PCA. We considered the mean values and SDs of the Touch controllers’ Tait-Bryan angles. We also included the Touch controllers’ mean displacement relative to the head-mounted display to distinguish between static poses. We used correlation coefficients as additional features to capture the interactions between the variables. Specifically, we computed the correlation coefficients for the following three pairs: (γHR, γHL), (βHR, βHL), and (αHR, αHL). Overall, 21 features were selected (
Features | Variables
Mean | γHR, βHR, αHR, γHL, βHL, αHL, XHR, YHR, ZHR, XHL, YHL, ZHL
SD | γHR, βHR, αHR, γHL, βHL, αHL
Correlation coefficient | (γHR, γHL), (βHR, βHL), (αHR, αHL)
Movement Classification
Our classification model achieved an accuracy of 99.9%, where most misclassifications resulted from falsely classifying instances of movement as nonmovements (
). The true positive rate was highest for elbow extension to the bottom and for elbow extension forward, with 99.2% of instances classified successfully in both. The algorithm performed worst in the classification of shoulder flexion forward, where the true positive rate reached 98.7%.

Out-of-bag analysis revealed that the mean value of XHR was the most important variable for the classification of movement, followed by the means of ZHR and βHR (
). The correlation between αHR and αHL contributed the most to the classification among the correlation values. Among the SDs, γHR contributed the most to the classification. Nonetheless, the correlation coefficients and SDs seemed to have only a modest impact on the classification. The mean value of γHL was the least important among the means, and the SD of αHR contributed the least among the SD values.

Discussion
Principal Findings
As the world’s population is aging, the incidence of stroke and other neuromuscular diseases is increasing, and the demand for affordable and convenient physical therapy is rising [
]. Sensor and communication technologies are readily available for delivering and monitoring home-based therapy; however, human interaction is a critical design aspect in this context: telerehabilitation programs are carried out without clinical supervision, so patients must motivate themselves to perform exercises with sufficient intensity and frequency.

Lack of motivation has led to the study and development of exergames [
, ], in which physical activity drives gameplay. Although the effectiveness of these interventions has been demonstrated [ , ], it may be further maximized by incorporating cognitively challenging elements, learning, and sociality [ ], as older adults, who comprise most patients, show a propensity toward these features [ ]. As such, citizen science presents itself as an intellectually stimulating motivational framework with greater appeal to patients. By framing physical exercise in citizen science, patients would be able to learn about ongoing research, bring about scientific discoveries, and support a cause they care about, all while adhering to their rehabilitation regimen.

A second, yet equally important, aspect in the design of telerehabilitation systems is minimizing health care providers’ time commitment such that they can diagnose and monitor multiple patients rapidly and simultaneously. However, this undertaking can become especially challenging when human behavior is abnormal [
]. Machine learning offers a viable means of automating the classification of human movements. Multiple examples exist where machine learning algorithms successfully detect and analyze different behaviors with high accuracy, as well as deviations from those behaviors, whether the application is safe driving [ ], gaming [ ], or physical therapy [ - ]. Through machine learning algorithms, devices can learn from new data such that they can update their control strategies and dynamically adapt to the user’s behavior over time. This feature is particularly useful for telerehabilitation applications, as patients’ movements change as they recover motor function [ , ].

In this study, we present the use of machine learning to identify and classify bimanual movements in VR. We demonstrate the approach in the context of a citizen science software that is dedicated to telerehabilitation. Commercial gaming systems are advantageous for home-based rehabilitation because they are relatively small, affordable, and user-friendly [
]. VR gaming systems are particularly favored as they confer high levels of immersion and increase user engagement [ , , , ]. In telerehabilitation, recovery is often hindered by patients’ lack of motivation to perform prescribed exercises [ ]. Thus, the motivational aspects of home-based interventions are crucial to their success. To address this challenge, we also incorporated citizen science content into the application, such that the user could contribute to an authentic scientific project and help clean a polluted canal [ ]. The task leverages human intellect as an intrinsic motivator and has a strong potential to improve patients’ sense of self-worth [ , , ].

In all, 9 participants interacted with the citizen science system through a set of 5 predefined bimanual gestures. Bimanual training effectively improves rehabilitation outcomes through several physiological mechanisms [
, , ]. This clinical approach could also target a wider range of patients with varying levels of impairment. Specifically, for the Oculus Rift system, a rigid link can be designed and 3D-printed for the Touch controllers such that they are affixed to one another [ ]. The custom-made link could enable passive exercise of the affected limb in patients with moderate to severe impairment, whereby the intact limb mediates coordinated movement of the paretic side. In a future study, we will seek to measure movements of participants with and without such a fixture and compare its effect on motor performance.

One of the novelties of our approach lies in the application of a movement classification algorithm to a VR exercise for telerehabilitation. Although the movements we incorporated into game control are carried out along the 3 orthogonal anatomical planes and appear to be easily distinguishable, they require coordinated flexion or extension of the shoulder and elbow joints, as well as pronation of the forearms. For example, extending the right arm to the right side of the body involves simultaneous flexion of the shoulder, lateral rotation of the shoulder, extension of the elbow, and pronation of the forearm. Owing to these degrees of freedom, inverse kinematics to determine the angles of these joints would require information beyond the position of the Touch controllers relative to the head-mounted display. To further support this notion, our PCA results showed that the Tait-Bryan angles of the Touch controllers relative to the head-mounted display, and not their positions, are salient during movements. Most variations in these angles likely resulted from simultaneous movement of the shoulder and elbow joints and pronation of the forearm.
The variation of features based on relative angles is expected to become especially important for the classification of movements when our approach is applied to data from patients with stroke. Stroke can lead to a wide range of movement abnormalities, including spasticity, segmentation, and compensation; the latter is best known for sabotaging rehabilitation efforts. In the face of reduced mobility, patients with stroke tend to recruit body parts that are not normally involved in certain movements to add degrees of freedom to their kinematics. For example, patients with stroke commonly use their trunk during reach movements to compensate for the limited range of motion of their upper limbs [
, ]. By reinforcing these strategies, patients perpetuate the nonuse of the affected limb and do not recover its function. Fortunately, compensatory movements could be readily detected through our algorithm, as the angles of the Touch controllers relative to the headset would not vary significantly during compensation.

The algorithm was used to classify the movements the participants performed, a step toward a genuine telerehabilitation paradigm in which one’s motor performance is monitored remotely by a clinician. The algorithm classified bimanual movements objectively and reliably, reaching 99.9% accuracy. The 0.1% inaccuracy was mainly related to a lack of sensitivity with respect to the presence of a movement. In other words, the algorithm erroneously classified movements as instances of no movement. This misclassification likely resulted from the use of a moving-window scheme. The moving window covers 13 time steps. During the algorithm training, the instantaneous true class of a window was defined as the mode of the true classes of the time steps it covered. For example, if the window covered 2 time steps of shoulder flexion and 11 time steps of no movement, its true class was no movement. At the beginning and end of each movement segment, the window covered 7 time steps of one class and 6 time steps of another class. The true class was then arbitrarily defined as 1 of the 2 classes. The accuracy of our approach may be further improved by refining this scheme and eliminating false negatives or by applying an alternative method to assign the true class of a moving window.
Future research could explore the use of alternative dimensionality reduction techniques. Our selection of features was based on the results of PCA, which informed us about which variables characterized each movement. However, this method may be inappropriate. In symmetrical movements performed by the participant, PCA showed that variables in only 1 arm were prominent. For example, when a participant performed shoulder abduction to the right side of the body, 2 angles of the left Touch controller and only 1 angle of the right Touch controller were dubiously deemed salient. Potentially, nonlinear dimensionality reduction methods such as Isomap, diffusion maps, and principal manifolds could better identify sets of variables that distinguish one movement from another [
- ].

The methodology presented herein can be extended in several research directions. First, multiple classification schemes can be applied in tandem to distinguish between static and dynamic poses. This will be especially useful for measuring metrics that are important for clinical evaluation, such as movement accuracy [
], smoothness [ , ], and coordination [ ].

We measured some motor performance metrics using data collected by the VR system. We observed symmetry in motor performance when comparing the right and left arms. In patients with paresis, we would expect significant differences in motor performance between the two sides of the body. Specifically, movements of the affected arm would present stiffness and be segmented early in recovery, measured through lower mean speed, reduced range of motion, and longer path lengths, which would change over time as muscle function is recovered in the affected arm. We also found considerable variation among healthy participants with respect to smoothness. It is tenable that this metric reflects the individualistic nature of user interaction with the VR interface, whether it involves abrupt initiation of movements or the sequential use of different sets of upper limb joints. As such, smoothness should be examined over the course of a movement rather than as a single score. To further support this notion, Rohrer et al [
] showed that the smoothness of pathological movements is characterized by a series of peaks and dips, which become shorter and shallower over the course of recovery.

In addition to the quality of movements, one might consider the use of cognitive cues in the analysis to treat low motivation. Posture and movement have been previously demonstrated to be closely related to engagement [
, ]. For example, restlessness may be reflected by frequent shifting of body weight between the legs. Similarly, arousal can be expressed through head rotation and extensive hand movements [ ]. The combined use of biometrics, such as heart rate, skin conductance, and pupil dilation, may also provide important insights into human behavior [ - ]. Incorporating such psychophysiological sensory information could open the door for multifaceted interventions in telerehabilitation [ ], although this path would require additional sensors and further research.

Finally, the classification algorithm can be enhanced to detect and minimize compensatory movements. Compensatory movements are nonphysiological movements that patients with disabilities perform to compensate for their limited range of motion. Essentially, the patients use muscles that are not normally involved in the movement, thereby adding degrees of freedom to it. Most commonly, patients tend to displace their torso during reaching tasks to compensate for their inability to move their upper limbs [
, , ]. Although such nonphysiological movements improve patients’ function instantly, they are energetically inefficient, hinder functional recovery, and pose a risk of injury [ , ].

Recently, Cai et al [
, ] explored the effectiveness of machine learning in detecting compensatory movements in patients with stroke. In their experimental setting, users sat on a chair covered with a pressure distribution mattress and interacted with a tabletop robotic manipulator [ ]. Data on their motion were collected from the mattress and from a VICON 3D motion capture system [ ]. Users’ postures and compensation were classified by an SVM algorithm, which achieved an accuracy of >96%. Although the sensors used in that study are different in nature from those of commercial VR gaming systems, the results are encouraging and suggest that our approach is feasible. Work to assess our approach is currently under way, and head-mounted display-based features are expected to aid in the detection of compensatory movements.
Our findings strongly support the viability of machine learning for the accurate assessment of movements in telerehabilitation with commercial VR systems. Nonetheless, several limitations of this study must be acknowledged. First, this study was conducted on healthy participants only. Patients with stroke exhibit a wide range of movement disorders, including loss of mobility, loss of balance control, spasticity, chorea, and adoption of maladaptive movements [
- ]. It is unknown whether these disorders can be detected and correctly characterized from sensor data, let alone be tracked and monitored over time. We are currently collecting controlled clinical data from patients with stroke and intend to address these questions once the study is concluded.

The second limitation concerns the focus of our system on bimanual training with the Oculus Rift. Although this setting is practical, affordable, and has the potential to improve engagement in telerehabilitation, it is still subject to the limitations of machine-mediated patient-physician interactions. During in-clinic meetings, a physician can assess the physiological, behavioral, and emotional status of a patient simultaneously. For example, physicians may evaluate skin tactile feedback during grip [
] or the patient’s ability to balance while performing gross motor movements [ ]. This cannot be accomplished in a telerehabilitation setting without teleconferencing with a physician or encumbering the patient with multiple wearable sensors, which would likely require special training and the aid of another person. Nonetheless, many of these in-clinic assessments may be feasible in telerehabilitation by means of machine learning. Emotion recognition from physiological [ , ] and behavioral [ , ] signals has already been demonstrated. Similarly, research has been carried out to predict patients’ ability to balance [ ], infer pain levels from kinematic features [ ], and detect compensatory movements [ ]. Thus, machine learning methodologies may successfully quantify other aspects of rehabilitation from data originating from a single modality, thereby providing health care providers with more information to monitor patients remotely.

Another nontrivial limitation of our study is the black-box nature of machine learning [
, - ]. In recent years, it has become widely accepted to trust machine learning predictions without fully understanding the model from which they are derived. However, the transparency of machine learning models is paramount to users’ trust in machines [ ]. In medical applications, rather than perceiving decisions as arbitrarily made, clinicians must gain an understanding of the models’ rigor and potential sources of error for good clinical decision-making. Furthermore, machine learning algorithms are vulnerable to adversarial attacks [ - ]: minimal perturbations can significantly impact the output of algorithms while remaining unnoticeable to human inspectors [ ]. Thus, in future work, we will probe the model and apply perturbation strategies to interpret it [ ].

Conclusions
This study is a first step in our endeavor to incorporate machine learning into VR-mediated telerehabilitation. We classified bimanual movements using a bagged trees classifier and achieved high performance. Work to expand on our findings and hone our approach is underway, including experiments with patients with stroke, development of an interpretable model, and detection of compensatory movements.
Acknowledgments
This study was supported by the National Science Foundation under award numbers CBET-1604355, CMMI-1505832, and ECCS-1928614. This study is also part of the collaborative activities carried out under the program Groups of Excellence of the region of Murcia, the Fundación Seneca, Science and Technology Agency of the region of Murcia project 19884/GERM/15. MRM is grateful for the financial support of Ministerio de Ciencia e Innovación of Spain under grant PID2019-107800GB-I00/AEI/ 10.13039/501100011033. RBV was supported in part by a Mitsui-USA Foundation scholarship.
Authors' Contributions
RBV, ON, PR, and MP designed the study. ON, PR, and MP secured the funding. RBV, KH, and MP designed the experimental system. RBV and KH developed the experimental system and conducted the experiments. RBV, MRM, and MP developed an approach to perform motion analysis. RBV analyzed the data. RBV and KH wrote the first draft of the manuscript. MP supervised the study. All authors reviewed and approved the final submission of the manuscript.
Conflicts of Interest
None declared.
References
- Virani SS, Alonso A, Benjamin EJ, Bittencourt MS, Callaway CW, Carson AP, et al. Heart Disease and Stroke Statistics-2020 update: a report from the American Heart Association. Circulation 2020;141(9):e139-e596. [CrossRef] [Medline]
- Ma VY, Chan L, Carruthers KJ. Incidence, prevalence, costs, and impact on disability of common conditions requiring rehabilitation in the United States: stroke, spinal cord injury, traumatic brain injury, multiple sclerosis, osteoarthritis, rheumatoid arthritis, limb loss, and back pain. Arch Phys Med Rehabil 2014;95(5):986-995 [FREE Full text] [CrossRef] [Medline]
- Barral M, Rabier H, Termoz A, Serrier H, Colin C, Haesebaert J, et al. Patients' productivity losses and informal care costs related to ischemic stroke: a French population-based study. Eur J Neurol 2021;28(2):548-557. [CrossRef] [Medline]
- García-Álvarez D, Sempere-Rubio N, Faubel R. Economic evaluation in neurological physiotherapy: a systematic review. Brain Sci 2021;11(2):1-13 [FREE Full text] [CrossRef] [Medline]
- Langhorne P, Bernhardt J, Kwakkel G. Stroke rehabilitation. Lancet 2011;377(9778):1693-1702. [CrossRef] [Medline]
- Nuara A, Fabbri-Destro M, Scalona E, Lenzi SE, Rizzolatti G, Avanzini P. Telerehabilitation in response to constrained physical distance: an opportunity to rethink neurorehabilitative routines. J Neurol 2021:3 [FREE Full text] [CrossRef] [Medline]
- Winters JM. Telerehabilitation research: emerging opportunities. Annu Rev Biomed Eng 2002;4:287-320. [CrossRef] [Medline]
- Winters JM, Wang Y, Winters JM. Wearable sensors and telerehabilitation. IEEE Eng Med Biol Mag 2003;22(3):56-65. [CrossRef] [Medline]
- McCue M, Fairman A, Pramuka M. Enhancing quality of life through telerehabilitation. Phys Med Rehabil Clin N Am 2010;21(1):195-205. [CrossRef] [Medline]
- Cramer SC, Dodakian L, Le V, See J, Augsburger R, McKenzie A, et al. Efficacy of home-based telerehabilitation vs in-clinic therapy for adults after stroke: a randomized clinical trial. JAMA Neurol 2019;76(9):1079-1087. [CrossRef] [Medline]
- Appleby E, Gill ST, Hayes LK, Walker TL, Walsh M, Kumar S. Effectiveness of telerehabilitation in the management of adults with stroke: a systematic review. PLoS One 2019;14(11):e0225150 [FREE Full text] [CrossRef] [Medline]
- Peretti A, Amenta F, Tayebati SK, Nittari G, Mahdi SS. Telerehabilitation: review of the state-of-the-art and areas of application. JMIR Rehabil Assist Technol 2017;4(2):e7511 [FREE Full text] [CrossRef] [Medline]
- Conraads VM, Deaton C, Piotrowicz E, Santaularia N, Tierney S, Piepoli MF, et al. Adherence of heart failure patients to exercise: barriers and possible solutions: a position statement of the Study Group on Exercise Training in Heart Failure of the Heart Failure Association of the European Society of Cardiology. Eur J Heart Fail 2012;14(5):451-458 [FREE Full text] [CrossRef] [Medline]
- Pezzera M, Tironi A, Essenziale J, Mainetti R, Borghese NA. Approaches for increasing patient’s engagement and motivation in exer-games-based autonomous telerehabilitation. In: Proceedings of the IEEE 7th International Conference on Serious Games and Applications for Health (SeGAH). 2019 Presented at: IEEE 7th International Conference on Serious Games and Applications for Health (SeGAH); Aug. 5-7, 2019; Kyoto, Japan. [CrossRef]
- Lange B, Flynn SM, Rizzo AA. Game-based telerehabilitation. Eur J Phys Rehabil Med 2009;45(1):143-151 [FREE Full text] [Medline]
- Amorim P, Sousa Santos B, Dias P, Silva S, Martins H. Serious games for stroke telerehabilitation of upper limb - a review for future research. Int J Telerehabil 2020;12(2):65-76 [FREE Full text] [CrossRef] [Medline]
- Rego P, Moreira PM, Reis LP. Serious games for rehabilitation: a survey and a classification towards a taxonomy. In: Proceedings of the 5th Iberian Conference on Information Systems and Technologies. 2010 Presented at: 5th Iberian Conference on Information Systems and Technologies; June 16-19, 2010; Santiago de Compostela, Spain.
- Reinkensmeyer DJ, Pang CT, Nessler JA, Painter CC. Web-based telerehabilitation for the upper extremity after stroke. IEEE Trans Neural Syst Rehabil Eng 2002;10(2):102-108. [CrossRef] [Medline]
- Burke JW, McNeill M, Charles D, Morrow P, Crosbie J, McDonough S. Serious games for upper limb rehabilitation following stroke. In: Proceedings of the Conference in Games and Virtual Worlds for Serious Applications. 2009 Presented at: Conference in Games and Virtual Worlds for Serious Applications; March 23-24, 2009; Coventry, UK. [CrossRef]
- Cikajlo I, Rudolf M, Mainetti R, Borghese NA. Multi-exergames to set targets and supplement the intensified conventional balance training in patients with stroke: a randomized pilot trial. Front Psychol 2020;11:572 [FREE Full text] [CrossRef] [Medline]
- Novak D, Nagle A, Keller U, Riener R. Increasing motivation in robot-aided arm rehabilitation with competitive and cooperative gameplay. J Neuroeng Rehabil 2014 Apr 16;11:64 [FREE Full text] [CrossRef] [Medline]
- Laut J, Cappa F, Nov O, Porfiri M. Increasing patient engagement in rehabilitation through citizen science. In: Proceedings of the ASME 2014 Dynamic Systems and Control Conference. 2014 Presented at: ASME 2014 Dynamic Systems and Control Conference; October 22–24, 2014; San Antonio, Texas, USA. [CrossRef]
- Silvertown J. A new dawn for citizen science. Trends Ecol Evol 2009;24(9):467-471. [CrossRef] [Medline]
- Nov O, Arazy O, Anderson D. Scientists@Home: what drives the quantity and quality of online citizen science participation? PLoS One 2014;9(4):e90375 [FREE Full text] [CrossRef] [Medline]
- Nov O, Arazy O, Anderson D. Dusting for science: motivation and participation of digital citizen science volunteers. In: Proceedings of the 2011 iConference. 2011 Presented at: iConference 2011; February 8-11, 2011; Seattle, Washington, USA p. 68-74. [CrossRef]
- Nov O, Arazy O, Anderson D. Technology-mediated citizen science participation: a motivational model. In: Proceedings of the AAAI International Conference on Weblogs and Social Media (ICWSM 2011). 2011 Presented at: AAAI International Conference on Weblogs and Social Media (ICWSM 2011); July 2011; Barcelona, Spain.
- Bowser A, Hansen D, He Y, Boston C, Reid M, Gunnell L, et al. Using gamification to inspire new citizen science volunteers. In: Proceedings of the First International Conference on Gameful Design, Research, and Applications. 2013 Presented at: Gamification '13: Gameful Design, Research, and Applications; October 2-4, 2013; Toronto, Ontario, Canada p. 18-25. [CrossRef]
- Callaghan CT, Rowley JJL, Cornwell WK, Poore AG, Major RE. Improving big citizen science data: moving beyond haphazard sampling. PLoS Biol 2019;17(6):e3000357 [FREE Full text] [CrossRef] [Medline]
- Aristeidou M, Scanlon E, Sharples M. Profiles of engagement in online communities of citizen science participation. Comput Hum Behav 2017;74:246-256. [CrossRef]
- Land-Zandstra AM, Devilee JL, Snik F, Buurmeijer F, van den Broek JM. Citizen science on a smartphone: participants' motivations and learning. Public Underst Sci 2016;25(1):45-60. [CrossRef] [Medline]
- Domroese MC, Johnson EA. Why watch bees? Motivations of citizen science volunteers in the Great Pollinator Project. Biolog Conserv 2017;208:40-47. [CrossRef]
- Ventura RB, Nakayama S, Raghavan P, Nov O, Porfiri M. The role of social interactions in motor performance: feasibility study toward enhanced motivation in telerehabilitation. J Med Internet Res 2019;21(5):e12708 [FREE Full text] [CrossRef] [Medline]
- Ventura RB, Nov O, Marin MR, Raghavan P, Porfiri M. A low-cost telerehabilitation paradigm for bimanual training. IEEE/ASME Trans Mechatron 2021:1. [CrossRef]
- Melim A. Increasing fidelity with constellation-tracked controllers. Oculus. 2019. URL: https://developer.oculus.com/blog/increasing-fidelity-with-constellation-tracked-controllers/ [accessed 2021-01-26]
- Wootton R, Hebert MA. What constitutes success in telehealth? J Telemed Telecare 2001;7(2):3-7. [CrossRef] [Medline]
- Shum LC, Valdés BA, Van der Loos HM. Determining the accuracy of oculus touch controllers for motor rehabilitation applications using quantifiable upper limb kinematics: validation study. JMIR Biomed Eng 2019;4(1):e12291. [CrossRef]
- Borrego A, Latorre J, Alcañiz M, Llorens R. Comparison of Oculus Rift and HTC Vive: feasibility for virtual reality-based exploration, navigation, exergaming, and rehabilitation. Games Health J 2018;7(3):151-156. [CrossRef] [Medline]
- Steuer J. Defining virtual reality: dimensions determining telepresence. J Commun 1992;42(4):73-93. [CrossRef]
- Jackson RL, Fagan E. Collaboration and learning within immersive virtual reality. In: Proceedings of the Third International Conference on Collaborative Virtual Environments. 2000 Presented at: CVE00: Collaborative Virtual Environments; September 2000; San Francisco, California, USA p. 83-92. [CrossRef]
- Jack D, Boian R, Merians AS, Tremaine M, Burdea GC, Adamovich SV, et al. Virtual reality-enhanced stroke rehabilitation. IEEE Trans Neural Syst Rehabil Eng 2001;9(3):308-318. [CrossRef] [Medline]
- Merians AS, Jack D, Boian R, Tremaine M, Burdea GC, Adamovich SV, et al. Virtual reality-augmented rehabilitation for patients following stroke. Phys Ther 2002;82(9):898-915. [Medline]
- Dockx K, Alcock L, Bekkers E, Ginis P, Reelick M, Pelosin E, et al. Fall-prone older people's attitudes towards the use of virtual reality technology for fall prevention. Gerontology 2017;63(6):590-598. [CrossRef] [Medline]
- AlMousa M, Al-Khalifa H, AlSobayel H. Move-it: a virtual reality game for upper limb stroke rehabilitation patients. In: Proceedings of the International Conference on Computers Helping People with Special Needs. 2020 Presented at: International Conference on Computers Helping People with Special Needs; September 9-11, 2020; Lecco, Italy p. 184-195. [CrossRef]
- Palaniappan SM, Duerstock BS. Developing rehabilitation practices using virtual reality exergaming. In: Proceedings of the IEEE International Symposium on Signal Processing and Information Technology (ISSPIT). 2018 Presented at: IEEE International Symposium on Signal Processing and Information Technology (ISSPIT); Dec. 6-8, 2018; Louisville, KY, USA. [CrossRef]
- Taub E, Uswatte G, Pidikiti R. Constraint-Induced Movement Therapy: a new family of techniques with broad application to physical rehabilitation--a clinical review. J Rehabil Res Dev 1999;36(3):237-251. [Medline]
- Kunkel A, Kopp B, Müller G, Villringer K, Villringer A, Taub E, et al. Constraint-induced movement therapy for motor recovery in chronic stroke patients. Arch Phys Med Rehabil 1999;80(6):624-628. [CrossRef] [Medline]
- Rensink M, Schuurmans M, Lindeman E, Hafsteinsdóttir T. Task-oriented training in rehabilitation after stroke: systematic review. J Adv Nurs 2009;65(4):737-754. [CrossRef] [Medline]
- Lynch D, Ferraro M, Krol J, Trudell CM, Christos P, Volpe BT. Continuous passive motion improves shoulder joint integrity following stroke. Clin Rehabil 2005;19(6):594-599. [CrossRef] [Medline]
- Wu C, Yang C, Chen M, Lin K, Wu L. Unilateral versus bilateral robot-assisted rehabilitation on arm-trunk control and functions post stroke: a randomized controlled trial. J Neuroeng Rehabil 2013;10:35 [FREE Full text] [CrossRef] [Medline]
- Cohen L. Interaction between limbs during bimanual voluntary activity. Brain 1970;93(2):259-272. [CrossRef] [Medline]
- Stewart KC, Cauraugh JH, Summers JJ. Bilateral movement training and stroke rehabilitation: a systematic review and meta-analysis. J Neurol Sci 2006;244(1-2):89-95. [CrossRef] [Medline]
- Wenderoth N, Debaere F, Sunaert S, van Hecke P, Swinnen SP. Parieto-premotor areas mediate directional interference during bimanual movements. Cereb Cortex 2004;14(10):1153-1163. [CrossRef] [Medline]
- Lum PS, Lehman SL, Reinkensmeyer DJ. The bimanual lifting rehabilitator: an adaptive machine for therapy of stroke patients. IEEE Trans Rehab Eng 1995;3(2):166-174. [CrossRef]
- Pink M. Contralateral effects of upper extremity proprioceptive neuromuscular facilitation patterns. Phys Ther 1981;61(8):1158-1162. [CrossRef] [Medline]
- Mills VM, Quintana L. Electromyography results of exercise overflow in hemiplegic patients. Phys Ther 1985;65(7):1041-1045. [CrossRef] [Medline]
- Debaere F, Wenderoth N, Sunaert S, Van Hecke P, Swinnen SP. Changes in brain activation during the acquisition of a new bimanual coordination task. Neuropsychologia 2004;42(7):855-867. [CrossRef] [Medline]
- Goldberg G. Supplementary motor area structure and function: review and hypotheses. Behav Brain Sci 1985;8(4):567-588. [CrossRef]
- Swinnen SP, Wenderoth N. Two hands, one brain: cognitive neuroscience of bimanual skill. Trends Cogn Sci 2004;8(1):18-25. [CrossRef] [Medline]
- Cauraugh JH, Summers JJ. Neural plasticity and bilateral movements: a rehabilitation approach for chronic stroke. Prog Neurobiol 2005;75(5):309-320. [CrossRef] [Medline]
- Azodi CB, Tang J, Shiu S. Opening the black box: interpretable machine learning for geneticists. Trends Genet 2020;36(6):442-455. [CrossRef] [Medline]
- Bishop CM. Pattern Recognition and Machine Learning. New York: Springer; 2006.
- Murrell N, Bradley R, Bajaj N, Whitney JG, Chiu GT. A method for sensor reduction in a supervised machine learning classification system. IEEE/ASME Trans Mechatron 2019;24(1):197-206. [CrossRef]
- Begg R, Kamruzzaman J. A machine learning approach for automated recognition of movement patterns using basic, kinetic and kinematic gait data. J Biomech 2005;38(3):401-408. [CrossRef] [Medline]
- Novak D, Reberšek P, De Rossi SM, Donati M, Podobnik J, Beravs T, et al. Automated detection of gait initiation and termination using wearable sensors. Med Eng Phys 2013;35(12):1713-1720. [CrossRef] [Medline]
- Semwal VB, Raj M, Nandi GC. Biometric gait identification based on a multilayer perceptron. Robot Auton Syst 2015 Mar;65:65-75. [CrossRef]
- Ongvisatepaiboon K, Chan JH, Vanijja V. Smartphone-based tele-rehabilitation system for frozen shoulder using a machine learning approach. In: Proceedings of the IEEE Symposium Series on Computational Intelligence. 2015 Presented at: IEEE Symposium Series on Computational Intelligence; Dec. 7-10, 2015; Cape Town, South Africa p. 811-815. [CrossRef]
- Olesh EV, Yakovenko S, Gritsenko V. Automated assessment of upper extremity movement impairment due to stroke. PLoS One 2014;9(8):e104487 [FREE Full text] [CrossRef] [Medline]
- Biswas D, Cranny A, Gupta N, Maharatna K, Achner J, Klemke J, et al. Recognizing upper limb movements with wrist worn inertial sensors using k-means clustering classification. Hum Mov Sci 2015;40:59-76. [CrossRef] [Medline]
- Panwar M, Biswas D, Bajaj H, Jobges M, Turk R, Maharatna K, et al. Rehab-Net: deep learning framework for arm movement classification using wearable sensors for stroke rehabilitation. IEEE Trans Biomed Eng 2019;66(11):3026-3037. [CrossRef] [Medline]
- Hatem SM, Saussez G, Faille MD, Prist V, Zhang X, Dispa D, et al. Rehabilitation of motor function after stroke: a multiple systematic review focused on techniques to stimulate upper extremity recovery. Front Hum Neurosci 2016;10:442 [FREE Full text] [CrossRef] [Medline]
- Thompson SB, Morgan M. Occupational Therapy for Stroke Rehabilitation. 1st ed. New York, USA: Springer; 1990.
- Craig JJ. Introduction to Robotics: Mechanics and Control. 3rd ed. Upper Saddle River, New Jersey, USA: Pearson Education International; 2005.
- Rohrer B, Fasoli S, Krebs HI, Hughes R, Volpe B, Frontera WR, et al. Movement smoothness changes during stroke recovery. J Neurosci 2002;22(18):8297-8304 [FREE Full text] [Medline]
- Colombo R, Pisano F, Mazzone A, Delconte C, Micera S, Carrozza MC, et al. Design strategies to improve patient motivation during robot-aided rehabilitation. J Neuroeng Rehabil 2007;4:3 [FREE Full text] [CrossRef] [Medline]
- Fod A, Matarić MJ, Jenkins OC. Automated derivation of primitives for movement classification. Auton Robots 2002;12(1):39-54. [CrossRef]
- Jolliffe IT, Cadima J. Principal component analysis: a review and recent developments. Philos Trans A Math Phys Eng Sci 2016;374(2065):20150202 [FREE Full text] [CrossRef] [Medline]
- Quinlan JR. Induction of decision trees. Mach Learn 1986;1(1):81-106. [CrossRef]
- Dietterich TG. Experimental comparison of three methods for constructing ensembles of decision trees: bagging, boosting, and randomization. Mach Learn 2000;40(2):139-157. [CrossRef]
- Prasad AM, Iverson LR, Liaw A. Newer classification and regression tree techniques: bagging and random forests for ecological prediction. Ecosystems 2006;9(2):181-199. [CrossRef]
- Carignan CR, Krebs HI. Telerehabilitation robotics: bright lights, big future? J Rehabil Res Dev 2006;43(5):695-710 [FREE Full text] [CrossRef] [Medline]
- Burke JW, McNeill MD, Charles DK, Morrow PJ, Crosbie JH, McDonough SM. Optimising engagement for stroke rehabilitation using serious games. Vis Comput 2009;25(12):1085-1099. [CrossRef]
- Mubin O, Alnajjar F, Al Mahmud A, Jishtu N, Alsinglawi B. Exploring serious games for stroke rehabilitation: a scoping review. Disabil Rehabil Assist Technol 2020:1-7. [CrossRef] [Medline]
- Flores E, Tobon G, Cavallaro E, Cavallaro F, Perry JC, Keller T. Improving patient motivation in game development for motor deficit rehabilitation. In: Proceedings of the 2008 International Conference on Advances in Computer Entertainment Technology. 2008 Presented at: ACE2008: International Conference in Advances in Computer Entertainment Technology; December 3-5, 2008; Yokohama, Japan p. 381-384. [CrossRef]
- Beckerle P, Salvietti G, Unal R, Prattichizzo D, Rossi S, Castellini C, et al. A human-robot interaction perspective on assistive and rehabilitation robotics. Front Neurorobot 2017;11:24 [FREE Full text] [CrossRef] [Medline]
- Martinelli F, Mercaldo F, Orlando A, Nardone V, Santone A, Sangaiah AK. Human behavior characterization for driving style recognition in vehicle system. Comput Electric Eng 2020;83:102504. [CrossRef]
- Galway L, Charles D, Black M. Machine learning in digital games: a survey. Artif Intell Rev 2009;29(2):123-161. [CrossRef]
- Gopinath D, Jain S, Argall BD. Human-in-the-loop optimization of shared autonomy in assistive robotics. IEEE Robot Autom Lett 2017;2(1):247-254. [CrossRef]
- Laut J, Porfiri M, Raghavan P. The present and future of robotic technology in rehabilitation. Curr Phys Med Rehabil Rep 2016;4(4):312-319 [FREE Full text] [CrossRef] [Medline]
- Laver KE, Lange B, George S, Deutsch JE, Saposnik G, Crotty M. Virtual reality for stroke rehabilitation. Cochrane Database Syst Rev 2017;11:CD008349 [FREE Full text] [CrossRef] [Medline]
- Laut J, Cappa F, Nov O, Porfiri M. Increasing patient engagement in rehabilitation exercises using computer-based citizen science. PLoS One 2015;10(3):e0117013 [FREE Full text] [CrossRef] [Medline]
- Ventura RB, Rizzo A, Nov O, Porfiri M. A 3D printing approach toward targeted intervention in telerehabilitation. Sci Rep 2020;10(1):3694 [FREE Full text] [CrossRef] [Medline]
- Cirstea MC, Levin MF. Compensatory strategies for reaching in stroke. Brain 2000;123(5):940-953. [CrossRef] [Medline]
- Levin MF, Michaelsen SM, Cirstea CM, Roby-Brami A. Use of the trunk for reaching targets placed within and beyond the reach in adult hemiparesis. Exp Brain Res 2002;143(2):171-180. [CrossRef] [Medline]
- Tenenbaum JB, de Silva V, Langford JC. A global geometric framework for nonlinear dimensionality reduction. Science 2000;290(5500):2319-2323. [CrossRef] [Medline]
- Coifman RR, Lafon S, Lee AB, Maggioni M, Nadler B, Warner F, et al. Geometric diffusions as a tool for harmonic analysis and structure definition of data: diffusion maps. Proc Natl Acad Sci U S A 2005;102(21):7426-7431 [FREE Full text] [CrossRef] [Medline]
- Gajamannage K, Butail S, Porfiri M, Bollt EM. Dimensionality reduction of collective motion by principal manifolds. Phys D Nonlinear Phenom 2015;291:62-73. [CrossRef]
- Bosecker C, Dipietro L, Volpe B, Krebs HI. Kinematic robot-based evaluation scales and clinical counterparts to measure upper limb motor performance in patients with chronic stroke. Neurorehabil Neural Repair 2010;24(1):62-69 [FREE Full text] [CrossRef] [Medline]
- Zollo L, Rossini L, Bravi M, Magrone G, Sterzi S, Guglielmelli E. Quantitative evaluation of upper-limb motor control in robot-aided rehabilitation. Med Biol Eng Comput 2011;49(10):1131-1144. [CrossRef] [Medline]
- Squeri V, Zenzeri J, Morasso P, Basteris A. Integrating proprioceptive assessment with proprioceptive training of stroke patients. IEEE Int Conf Rehabil Robot 2011;2011:5975500. [CrossRef] [Medline]
- Bianchi-Berthouze N, Kim W, Patel D. Does body movement engage you more in digital game play? And why? In: Affective Computing and Intelligent Interaction. Berlin, Heidelberg: Springer; 2007:102-113.
- Bianchi-Berthouze N. Understanding the role of body movement in player engagement. Hum-Comput Interact 2013;28(1):40-75 [FREE Full text]
- Barak Ventura R, Richmond S, Nadini M, Nakayama S, Porfiri M. Does winning or losing change players’ engagement in competitive games? Experiments in virtual reality. IEEE Trans Games 2021;13(1):23-34. [CrossRef]
- Appelhans BM, Luecken LJ. Heart rate variability as an index of regulated emotional responding. Rev Gen Psychol 2006;10(3):229-240. [CrossRef]
- Juvrud J, Gredebäck G, Åhs F, Lerin N, Nyström P, Kastrati G, et al. The immersive virtual reality lab: possibilities for remote experimental manipulations of autonomic activity on a large scale. Front Neurosci 2018;12:305 [FREE Full text] [CrossRef] [Medline]
- Boyle EA, Connolly TM, Hainey T, Boyle JM. Engagement in digital entertainment games: a systematic review. Comput Hum Behav 2012;28(3):771-780. [CrossRef]
- Loureiro RC, Harwin WS, Nagai K, Johnson M. Advances in upper limb stroke rehabilitation: a technology push. Med Biol Eng Comput 2011;49(10):1103-1118. [CrossRef] [Medline]
- Levin MF, Kleim JA, Wolf SL. What do motor "recovery" and "compensation" mean in patients following stroke? Neurorehabil Neural Repair 2009;23(4):313-319. [CrossRef] [Medline]
- Da Gama A, Chaves T, Figueiredo L, Teichrieb V. Poster: Improving motor rehabilitation process through a natural interaction based system using Kinect sensor. In: Proceedings of the IEEE Symposium on 3D User Interfaces (3DUI). 2012 Presented at: IEEE Symposium on 3D User Interfaces (3DUI); March 4-5, 2012; Costa Mesa, CA, USA p. 145-146. [CrossRef]
- Brokaw EB, Lum PS, Cooper RA, Brewer BR. Using the Kinect to limit abnormal kinematics and compensation strategies during therapy with end effector robots. IEEE Int Conf Rehabil Robot 2013;2013:6650384. [CrossRef] [Medline]
- Michaelsen SM, Dannenbaum R, Levin MF. Task-specific training with trunk restraint on arm recovery in stroke: randomized control trial. Stroke 2006;37(1):186-192. [CrossRef] [Medline]
- Cai S, Li G, Huang S, Zheng H, Xie L. Automatic detection of compensatory movement patterns by a pressure distribution mattress using machine learning methods: a pilot study. IEEE Access 2019;7:80300-80309. [CrossRef]
- Cai S, Li G, Su E, Wei X, Huang S, Ma K, et al. Real-time detection of compensatory patterns in patients with stroke to reduce compensation during robotic rehabilitation therapy. IEEE J Biomed Health Inform 2020;24(9):2630-2638. [CrossRef] [Medline]
- Alarcón F, Zijlmans JC, Dueñas G, Cevallos N. Post-stroke movement disorders: report of 56 patients. J Neurol Neurosurg Psychiatry 2004;75(11):1568-1574 [FREE Full text] [CrossRef] [Medline]
- Handley A, Medcalf P, Hellier K, Dutta D. Movement disorders after stroke. Age Ageing 2009;38(3):260-266. [CrossRef] [Medline]
- de Oliveira CB, de Medeiros IR, Frota NA, Greters ME, Conforto AB. Balance control in hemiparetic stroke patients: main tools for evaluation. J Rehabil Res Dev 2008;45(8):1215-1226 [FREE Full text] [Medline]
- Takeuchi N, Izumi SI. Maladaptive plasticity for motor recovery after stroke: mechanisms and approaches. Neural Plast 2012;2012:359728 [FREE Full text] [CrossRef] [Medline]
- Avraham C, Nisky I. The effect of tactile augmentation on manipulation and grip force control during force-field adaptation. J Neuroeng Rehabil 2020;17(1):1-19 [FREE Full text] [CrossRef] [Medline]
- Chen SC, Lin CH, Su SW, Chang YT, Lai CH. Feasibility and effect of interactive telerehabilitation on balance in individuals with chronic stroke: a pilot study. J Neuroeng Rehabil 2021;18(1):71 [FREE Full text] [CrossRef] [Medline]
- Bota PJ, Wang C, Fred AL, Da Silva HP. A review, current challenges, and future possibilities on emotion recognition using machine learning and physiological signals. IEEE Access 2019;7:140990-141020. [CrossRef]
- Cho Y, Bianchi-Berthouze N, Julier SJ. DeepBreath: deep learning of breathing patterns for automatic stress recognition using low-cost thermal imaging in unconstrained settings. In: Proceedings of the Seventh International Conference on Affective Computing and Intelligent Interaction (ACII). 2017 Presented at: Seventh International Conference on Affective Computing and Intelligent Interaction (ACII); Oct. 23-26, 2017; San Antonio, TX, USA p. 456-463. [CrossRef]
- Healy M, Donovan R, Walsh P, Zheng H. A machine learning emotion detection platform to support affective well being. In: Proceedings of the IEEE International Conference on Bioinformatics and Biomedicine (BIBM). 2018 Presented at: IEEE International Conference on Bioinformatics and Biomedicine (BIBM); Dec. 3-6, 2018; Madrid, Spain p. 2694-2700. [CrossRef]
- Wang C, Peng M, Olugbade TA, Lane ND, Williams AC, Bianchi-Berthouze N. Learning temporal and bodily attention in protective movement behavior detection. In: Proceedings of the 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW). 2019 Presented at: 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW); Sept. 3-6, 2019; Cambridge, UK p. 324-330. [CrossRef]
- Harari Y, O'Brien MK, Lieber RL, Jayaraman A. Inpatient stroke rehabilitation: prediction of clinical outcomes using a machine-learning approach. J Neuroeng Rehabil 2020;17(1):1-10 [FREE Full text] [CrossRef] [Medline]
- Olugbade TA, Bianchi-Berthouze N, Marquardt N, Williams AC. Pain level recognition using kinematics and muscle activity for physical rehabilitation in chronic pain. In: Proceedings of the International Conference on Affective Computing and Intelligent Interaction (ACII). 2015 Presented at: International Conference on Affective Computing and Intelligent Interaction (ACII); Sept. 21-24, 2015; Xi'an, China p. 243-249. [CrossRef]
- Kashi S, Feingold-Polak R, Lerner B, Rokach L, Levy-Tzedek S. A machine-learning model for automatic detection of movement compensations in stroke patients. IEEE Trans Emerg Topics Comput 2021;9(3):1234-1247. [CrossRef]
- Rudin C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell 2019;1(5):206-215. [CrossRef]
- Brendel W, Rauber J, Bethge M. Decision-based adversarial attacks: reliable attacks against black-box machine learning models. In: Proceedings of the 6th International conference on learning representations, ICLR 2018. 2018 Presented at: 6th International conference on learning representations, ICLR 2018; April 30 – May 3, 2018; Vancouver, BC, Canada URL: https://arxiv.org/pdf/1712.04248.pdf
- Papernot N, McDaniel P, Goodfellow I. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv 2016 [FREE Full text]
- Papernot N, McDaniel P, Goodfellow I, Jha S, Celik ZB, Swami A. Practical black-box attacks against machine learning. In: Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. 2017 Presented at: ASIA CCS '17: ACM Asia Conference on Computer and Communications Security; April 2-6, 2017; Abu Dhabi, United Arab Emirates p. 506-519. [CrossRef]
Abbreviations
HUD: heads-up display
PCA: principal component analysis
SVM: support vector machine
VR: virtual reality
Edited by N Zary; submitted 29.01.21; peer-reviewed by S Silva, T Szturm; comments to author 14.05.21; revised version received 14.06.21; accepted 12.10.21; published 10.02.22
Copyright © Roni Barak Ventura, Kora Stewart Hughes, Oded Nov, Preeti Raghavan, Manuel Ruiz Marín, Maurizio Porfiri. Originally published in JMIR Serious Games (https://games.jmir.org), 10.02.2022.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Serious Games, is properly cited. The complete bibliographic information, a link to the original publication on https://games.jmir.org, as well as this copyright and license information must be included.