Original Paper
Abstract
Background: Exergame-based training enhances physical and cognitive performance in older adults, including those with mild neurocognitive disorder (mNCD). In-game metrics generated from user interactions with exergames enable individualized adjustments. However, there is a need to systematically investigate how well such game metrics capture true cognitive and motor-cognitive performance to provide a more robust basis for personalized training.
Objective: The primary objective was to identify valid game metrics as indicators for in-game domain-specific cognitive performance during exergaming in individuals with mNCD. We also aimed to explore game metric performance changes over time during exergame-based training.
Methods: Data were analyzed from individuals with mNCD who completed a 12-week home-based, exergame-based intervention following the Brain-IT training concept. A cross-sectional analysis was conducted by correlating game metrics with standardized neurocognitive reference assessments. To confirm the alternative hypothesis, we predetermined the following criteria: (1) statistically significant correlation (P≤.05; uncorrected; 1-sided) with (2) a correlation coefficient (Pearson r or Spearman ρ) of ≥0.4. Visual and curve-fitting longitudinal analyses were conducted to explore game performance changes over time.
Results: Data were available from 31 participants (mean age 76.4, SD 7.5 y; n=9, 29% female). In total, 33% (6/18) of the game metrics were identified as valid indicators for in-game cognitive performance during exergaming. In the neurocognitive domain of learning and memory, these metrics included the mean reaction time (ρ=–0.747), the number of collected items (ρ=0.691), and the precision score (r=0.607) for the game Shopping Tour (P<.001 in all cases), as well as the point rate (P=.008; r=0.471) for the game Simon. In addition, point rate was a valid indicator for executive function (P=.006; ρ=0.455) and visuospatial skills (P=.02; ρ=0.474) for the games Targets and Gears, respectively. The exploratory longitudinal analysis revealed high interindividual variability, with a general trend of the expected typical curvilinear curves of rapid initial improvements followed by a plateau in performance.
Conclusions: This study demonstrated that metrics reflecting the precision of responses generally performed better as indicators of cognitive performance than metrics reflecting the speed of responses. These observations highlight the importance of selecting valid game metrics for implementation in exergame designs. Further research is needed to explore the potential of game metrics, identify factors contributing to individual variability in in-game performance and performance progression, and identify and adopt strategies that facilitate individual learning success and thus promote effectiveness in improving health outcomes.
doi:10.2196/65878
Introduction
Background
There is growing interest in interventions that aim to mitigate the physical and cognitive decline associated with aging in older adults [
]. Applying gamified digital health technologies with real-time user behavior and gameplay interactions such as computerized cognitive training games or exergame technology is becoming increasingly popular to address the challenges of physical and cognitive decline in the aging population [ - ]. Within this field, considerable attention has been directed toward exergame-based physical and motor-cognitive training, which is considered more promising than conventional physical, cognitive, or motor-cognitive training to improve cognitive and physical performance in diverse user populations [ - ]. Exergames are defined as “technology-driven physical activities, such as video game play, that require participants to be physically active or exercise in order to play the game” [ ]. Exergame-based training typically focuses on incorporating cognitive tasks into motor tasks that can be designed to specifically train a variety of physical (eg, cardiorespiratory, motor, balance, strength, or multicomponent exercises) and cognitive (eg, learning and memory, complex attention, executive function, and visuospatial skills) domains [ ]. The positive impact of exergaming on physical and cognitive functions has been observed in healthy older adults [ , , - ], as well as in older adults with cognitive impairments such as mild neurocognitive disorder (mNCD) or more severe forms such as dementia [ , - ].
mNCD describes a critical stage of cognitive decline that goes beyond normal aging without affecting the capacity for independence in everyday activities [
]. Early interventions may effectively influence the course of cognitive decline, making those with mNCD an optimal target population for secondary prevention [ - ]. Several pharmacological and nonpharmacological methods have been proposed to slow down the progression of symptoms accompanying mNCD and improve overall quality of life [ , ]. According to current research, the most effective type of exercise to achieve this goal is to implement physical training, ideally with integrated cognitive tasks (simultaneous motor-cognitive training) [ , - ]. This type of training has been proposed to operate through various disease-modifying mechanisms, including multisystem effects that reduce mNCD-related neuropathological damage [ , ] and promote synaptogenesis, neurogenesis, and angiogenesis [ , ]. It also helps maintain or increase cognitive reserve [ , , ], empowering individuals to retain independence in everyday functioning despite neurodegenerative changes [ , ].
Physical and motor-cognitive training may represent the predominant nonpharmacological intervention that effectively mitigates cognitive decline in mNCD [
, , , ] and is recommended for secondary prevention of mNCD by a collaborative international guideline [ ] as well as a global consensus on optimal exercise recommendations for enhancing healthy longevity in older adults [ ]. In this context, exergames provide a promising solution to deliver this type of training [ ]. Recent systematic reviews support the use of exergame-based training for improving both physical and cognitive performance in individuals with mNCD. However, studies in this field exhibit high heterogeneity, which weakens the strength of the evidence [ , , - ]. Furthermore, recent research suggests that, while exergame-based training may indeed improve cognitive performance, its superiority over traditional training remains uncertain, and further research should investigate the conditions in which exergames are most effective [ ].
Several exergame systems have been investigated for older adults and have been shown to effectively promote engagement and motivation [
- ], which largely explains the superior adherence to exergame- or technology-enhanced training compared to adherence to conventional approaches of physical and motor-cognitive activities, exercise, or training [ , ]. Furthermore, exergames offer high value due to the technical adaptability of various elements. However, while many exergames are technically well designed for attractiveness and entertainment, they often lack systematic user adaptability and specificity [ ]. Therefore, future exergames should be better tailored to specific target groups as well as individual goals and abilities [ , , , ].
To enhance the user adaptability of exergames, it has been recommended to implement real-time adaptive exergame mechanisms that support the dual-flow concept proposed by Sinclair et al [
]. According to this concept, an optimal exergame experience requires a balance between the challenge of the game and the player’s skills, as well as between the intensity of the game and the player’s fitness. A common method for implementing adaptive exergame mechanics is to use the data that are directly collected during exergaming [ ]. These data are commonly referred to as in-game metrics or performance measures and capture the real-time interaction between the user and the serious game. Game metrics are quantitative measures that can range from simple measures such as point scores or reaction time to more complex measures that combine various parameters [ ]. Game metrics have the potential to improve personalization and individualized progression of training according to users’ performance, provide (real-time) feedback, and monitor gameplay [ - ]. However, there are currently no standards or guidelines for what data should be collected and for what specific purpose [ , ].
By investigating game metrics and understanding how well they can capture true cognitive and motor-cognitive performance, we can provide a robust basis for in-game adjustments and monitoring of performance progression [
, ]. While many studies emphasize the importance of individually tailored training and commonly use in-game metrics for this purpose, only a limited number have analyzed the psychometric properties of in-game metrics. Previous studies that have investigated game metrics from exergames for older adults with and without physical or cognitive impairments have found moderate to strong correlations between some game metrics and clinical assessments [ , , ]. This suggests that certain game metrics may be useful for measuring physical and cognitive performance. Furthermore, one study demonstrated that game metrics can distinguish between healthy older adults and those with mNCD [ ]. In addition, another study explored a difference in exergame performance progression between older adults with and without cognitive impairment [ ]. However, further studies are needed with domain-specific evaluation, larger participant samples, and inclusion of participants with a clinical diagnosis of mNCD, as stated by Guimarães et al [ ].
Objectives
The primary objective was to identify valid game metrics as indicators for in-game domain-specific cognitive performance during exergaming in older adults with mNCD. We hypothesized statistically significant correlations between specific game metrics and reference assessments within their corresponding neurocognitive subdomains. As a secondary objective, we aimed to explore game metric performance change over time during a 12-week exergame-based training intervention for older adults with mNCD.
Methods
Overview
This study is a secondary analysis of data from 2 randomized controlled trials (RCTs) that aimed to evaluate the feasibility (feasibility study [FS]; NCT04996654) [
] and effectiveness (effectiveness study [ES]; NCT0538707) [ ] of an individually tailored home-based exergame-based training concept specifically for the secondary prevention of mNCD developed within the Brain-IT project [ ]. The training concept is rooted in years of iterative and user-centered co-design, development, testing, and evaluation with continuous patient and public involvement [ ]. It serves as a guideline for the implementation of the training by providing algorithmic decision trees for the structure, content, and individualized tailoring of the training and can be implemented using different commercially available hardware [ ].
The project was structured in 3 phases. In phase 1, we combined a comprehensive literature synthesis [
] with qualitative research, including primary end users (individuals with mNCD), secondary end users (multidisciplinary health and care professionals), other exergaming researchers, and experts from the exergaming industry [ ], to elaborate a set of design requirements for the Brain-IT training concept. In phase 2, possible concepts were co-designed and developed based on the requirements defined in phase 1. The first prototype of the Brain-IT training concept [ , ] then entered the iterative cycle of feasibility, usability, safety, and acceptance testing and integrating of findings for further development until an “acceptable” solution was achieved (refer to the FS [ ], the data from which are analyzed in this study). This process resulted in a novel intervention type specifically targeting various relevant mechanisms of action to alleviate the pathological state of mNCD by combining, for the first time, exergame-based motor-cognitive training with biofeedback-guided resonance breathing as an adjunct neuromodulatory intervention [ ]. In phase 3, we confirmed the effectiveness (refer to the ES [ ], the data from which are analyzed in this study) of the Brain-IT training in improving global cognitive performance as well as immediate and delayed verbal recall. We observed that the training not only could effectively slow down cognitive decline in comparison to usual care but also resulted in 55% of participants showing clinically relevant improvements in cognitive performance [ ].
We compiled this manuscript according to the latest version of the CONSORT (Consolidated Standards of Reporting Trials) Statement for Randomized Trials of Nonpharmacologic Treatments (Tables S1 in
). As this is a secondary analysis of 2 RCTs, not all original trial design elements may be fully applicable or reported in this paper as they were not relevant for the specific analysis conducted (for full reproducibility, please also refer to the published FS [ ], the study protocol for the ES [ ], the ES itself [ ], and the training concept—Supplementary File 2 of the ES [ ]).
Ethical Considerations
All original study procedures were carried out in accordance with the Declaration of Helsinki. The original study protocols were approved by the ETH Zurich Ethics Commission (EK 2021-N-79) for the FS and the Ethics Committees of Zurich and Eastern Switzerland (EK-2022-00386) for the ES.
All study participants were fully informed about the study procedures in person (at the interested person’s home or at one of the study centers depending on their preference) through verbal explanations and an information sheet. After sufficient time for consideration (ie, at least 24 hours after handing out the study information sheet [which was approved by the corresponding ethics committee] but, on average, approximately 1 week), suitable patients willing to participate in the study provided written informed consent in a second in-person meeting with one of the trained investigators of the study team at the home of the interested person or at one of the study centers. The original informed consent forms allowed for the secondary analyses conducted in this study as all study participants provided consent for (1) original datasets being made available in a publicly accessible repository in deidentified form and (2) the transfer of encrypted data for research purposes. Only deidentified data were analyzed for this study (for more detail, see the Data Availability section).
No compensation was granted to participants, but detailed feedback on individual performance as well as the study outcomes in general was provided at the end of the studies.
Study Design and Participants
Recruitment for the 2 parallel-group and single-blinded (the outcome evaluator of pre- and postintervention measurements was blinded to group allocation) RCTs took place between July 2021 and October 2023 in collaboration with health care facilities in the larger area of Zurich, Switzerland. The eligibility criteria for study participants have been published previously [
, , ] and are listed in [ - ].
Inclusion criteria
- Clinical diagnosis of mild neurocognitive disorder according to the International Classification of Diseases, 11th Revision [ ], or the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition [ ], OR patients screened for mild cognitive impairment (MCI) according to the following criteria: (1) informant (ie, health care professional)-based suspicion of MCI confirmed by (2) an objective screening of MCI based on the validated German version [ ] of the Quick Mild Cognitive Impairment screen [ ], with a recommended cutoff score for cognitive impairment (or dementia) of <63/100 [ ] while not falling below the cutoff score for dementia (ie, <45/100) [ ] (these screening cutoffs are illustrated in the code sketch after the eligibility criteria)
- German speaking
- Able to stand for at least 10 minutes without assistance
Exclusion criteria
- Mobility impairments (ie, gait and balance) that prevented experiment participation
- Presence of additional, clinically relevant (ie, acute or symptomatic or both) neurological disorders (ie, epilepsy, stroke, multiple sclerosis, Parkinson disease, brain tumors, or traumatic disorders of the nervous system)
- Presence of any other unstable or uncontrolled diseases (eg, uncontrolled high blood pressure and progressing or terminal cancer)
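As a compact, purely illustrative summary of the screening route described in the inclusion criteria, the QMCI cutoffs (<63/100 for cognitive impairment and <45/100 for dementia) can be expressed as a small R helper. The function name, labels, and example value below are hypothetical and not part of the original study materials.

```r
# Hypothetical helper mirroring the QMCI screening cutoffs from the inclusion criteria
# (total score out of 100); illustrative only, not part of the original study materials.
qmci_screening_range <- function(qmci_total) {
  if (qmci_total < 45) {
    "below the dementia cutoff (<45/100): not eligible via the screening route"
  } else if (qmci_total < 63) {
    "within the MCI screening range (>=45 and <63/100): eligible via the screening route"
  } else {
    "above the impairment cutoff (>=63/100): not eligible via the screening route"
  }
}

qmci_screening_range(57)  # example value within the MCI screening range
```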
After successful recruitment, participants underwent a premeasurement (see the Reference Assessments subsection in the Outcomes section) at the study center located at ETH Hönggerberg and were randomly assigned in a 1:2 (FS) or 1:1 (ES) ratio to either the control or intervention group. This secondary analysis investigated only the data from participants allocated to the intervention groups who engaged with the exergame-based training.
There were no important changes to the trial design and study setting after commencement that were relevant to the analyses presented in this paper. More details can be found in the published FS [
] and ES [ ].
Training Intervention
Following the premeasurement and group allocation, a study investigator installed the exergame system in the participants’ homes, where the training sessions took place. A standardized familiarization session was conducted to introduce the exergame system (ie, 4 games from the first scheduled training session [at level 1] were introduced and played for 2 minutes each). The exergame system Senso Flex (Dividat AG; hardware: prototype version 2; software: version 22.4.0-360-gf9df00d5b) was used [
]. The participants were instructed to follow the Brain-IT training concept [ ] for the 12-week intervention, which consisted of completing an instructed minimum of 5 training sessions per week. Each training session had a duration of 24 minutes and included 6 motor-cognitive games and 3 resonance breathing exercises. Approximately one-third of the sessions were supervised by a study investigator, with more supervision provided at the beginning of the intervention and less toward the end [ ]. The intervention was personalized and individually adapted based on several predefined parameters and included games for the neurocognitive domains of learning and memory, executive function, complex attention, and visuospatial skills [ , ]. The relevant parameters for this study were (1) the focus on participants’ main cognitive deficits through an individual constellation of games and (2) the individual adaptation of the standardized difficulty levels (levels 1-10), with each participant starting at level 1 [ ].
The parameters for determining the 10 game levels were defined through agreement between an experienced neuropsychologist and the research team and were extensively tested [
]. According to the Brain-IT training concept, level 1 should refer to an introductory level (ie, even the most impaired patients with mNCD should be able to play the game at the first trial), and in level 10 (ie, “healthy functioning” level), the game demands are expected to be challenging but doable for an average healthy older adult. The remaining levels are defined to increase neurocognitive demands consecutively and regularly from level to level. In the definition of these 10 game levels, we integrated the methodology by Huber et al [ ], who established an extended taxonomy supporting motor-cognitive learning. This methodology provides a framework for structuring training to specifically integrate models of skill acquisition [ , ] that has also been described to apply to cognitive skill learning and relearning [ ].
For more details on the specific exergames, as well as all algorithmic decisions for the personalized design and individualized progression of the training, including a description of all games and game level settings, we kindly refer interested readers to the published Brain-IT training concept [
]. To ensure replicability, the Brain-IT training concept was planned and reported according to the Consensus on Exercise Reporting Template [ ]. The training concept provides specific instructions on how to adapt the training when implemented with alternative hardware and software solutions (Supplementary File 2 of the ES [ ]).
Outcomes
Demographics
During the premeasurement, participants provided self-reported information on sociodemographic variables, including age, sex, BMI, and years of education. The characterization of the etiology of mNCD was derived from the diagnostic information provided by the collaborating recruitment partners. Global cognitive performance was assessed using the validated German version [
] of the Quick Mild Cognitive Impairment screen [ , ].
Game Metrics
A total of 14 different exergames were implemented in the Brain-IT training concept. During the development phase of the training concept, neuropsychologists assigned each exergame to its primary targeted neurocognitive function to ensure its content validity [
]. However, it should be mentioned that exergames typically involve and train multiple neurocognitive functions. The exergames were assigned to the primary targeted neurocognitive domains of (1) learning and memory, (2) executive function, (3) complex attention, and (4) visuospatial skills. In addition, they were subcategorized into neurocognitive subdomains [ ]. For this study, we analyzed 8 exergames, with 2 games representing each neurocognitive domain, as listed in . These games were selected based on data availability (ie, games that were frequently played during the intervention and were introduced to most, if not all, participants to ensure robust analyses). We collected all game metrics that were computed by the exergame system and, therefore, available for analyses (ie, no selection or exclusion of specific metrics). A detailed description of the game’s content, parameters to adapt, and game metrics (referred to as “performance measures”) can be found in the Brain-IT training concept [ ].

| Neurocognitive domain and subdomain | Exergame (name) | Exergame (outcome variables) | Neuropsychological assessment—outcome variable |
| --- | --- | --- | --- |
| Learning and memory: free recall | Shopping Tour | Mean reaction time (ms), number of mistakes, number of collected items, precision score (%) | WMS-IV-LM 1b—free recall (total point score) |
| Learning and memory: serial recall | Simon (color and number) | Mean reaction time (ms), point rate | PEBLc DSFd (total point score) |
| Executive function: working memory | Nomis (color and number) | Mean reaction time (ms), point rate | PEBL DSBe (total point score) |
| Executive function: planning | Targets | Number of hits, number of misses, point rate | HOTAP (points × min–1) |
| Complex attention: selective attention | Habitats | Mean reaction time (ms), point rate | TAPf Go-No go (median reaction time [ms]) |
| Complex attention: processing speed | Simple | Mean reaction time (ms), point rate | PEBL TMT-Ag (completion time [s]) |
| Visuospatial skills: visual perception | Gears | Mean reaction time (ms), point rate | PEBL MRTh (performance score) |
| Visuospatial skills: visuoconstructional reasoning | Tetris | Point score | PEBL MRT (performance score) |
aThe categorization of exergames into the neurocognitive domains and subdomains was derived from the published Brain-IT training concept. Accordingly, each game was categorized into the primary neurocognitive domain and subdomain being trained (and secondary subdomain in the case of a game that focuses on more than one neurocognitive domain or subdomain). The categorization was made through agreement between an experienced neuropsychologist and the research team to ensure the content validity of the exergames used to train each neurocognitive domain or subdomain. The variables listed in the Outcome variables column under Exergame describe all game metrics that were computed by the exergame system and, therefore, available for analyses (ie, no selection or exclusion of specific metrics). For each of these games, the most appropriate reference assessment was selected from the available published datasets. Specifically, we used the neurocognitive domains or subdomains that the specific games primarily trained (as defined in
of the published Brain-IT training concept [ ]) as references and selected the reference assessment that measured the same primary neurocognitive domain or subdomain from the available published datasets.
bWMS-IV-LM 1: Wechsler Memory Scale–Fourth Edition–Logical Memory subtest part 1.
cPEBL: Psychology Experiment Building Language.
dDSF: digit span forward.
eDSB: digit span backward.
fTAP: Test of Attentional Performance.
gTMT-A: Trail Making Test part A.
hMRT: Mental Rotation Task.
Reference Assessments
Data from several domain-specific neuropsychological assessments collected during the premeasurement were used as reference. For each assessment, the participant received standardized instructions from a study investigator, who also ensured that the participant understood the task and the procedure. The assessments and the outcome variables used for this analysis are listed in
. The study protocol of the ES [ ] provides detailed information about the assessments. For each of the games, the most appropriate reference assessment was selected from the available published datasets. Specifically, we used the neurocognitive domains or subdomains that the specific games primarily trained (as defined in of the published Brain-IT training concept [ ]) as references and selected the reference assessment that measured the same primary neurocognitive domain or subdomain from the available published datasets, as listed in of the published Brain-IT training concept [ ]. A brief description of the assessments and their function is provided in this section.
For the neurocognitive domain of learning and memory, the German version of the Logical Memory subtest part 1 from the Wechsler Memory Scale–Fourth Edition [
, ] was used to measure the neurocognitive subdomain of free recall. Data for the subdomain of serial recall were obtained from the digit span forward test of the Psychology Experiment Building Language (PEBL) [ ], a computer-based software platform for psychological experiments. Executive function was assessed using the HOTAP picture-sorting test part A [ ] to evaluate planning skills and the PEBL digit span backward test [ ] to evaluate working memory. For the neurocognitive domain of complex attention, data from the PEBL Trail Making Test part A [ ] for processing speed and data from the Go-No go task 1 of 2 from the Test of Attentional Performance [ , ] for selective attention were used. Finally, we considered data from the computerized PEBL Mental Rotation Task [ ] that is based on the classic mental rotation task to assess visuospatial skills.
Data Analysis and Statistical Methods
Overview
Statistical analysis was conducted using R (version 2023.12.0+369; R Foundation for Statistical Computing). Data are reported as mean and SD for interval scales or continuous data and as counts and percentages for categorical variables. Descriptive statistics were calculated for all outcome variables. The assumption of normality was checked through the Shapiro-Wilk test alongside a visual examination of the data [
].
Primary Objective: Cross-Sectional Analysis
For the primary objective, we used a cross-sectional analysis. Only the game metrics from the first time that each participant played the first level of difficulty were analyzed. There were 2 main reasons for this. First, not all participants played the same number of repetitions of a game level due to individual adaptations, and second, we aimed to minimize the influence of learning effects in this cross-sectional analysis. This analysis included all participants who played the corresponding game at least once. The training concept underwent minor adjustments between the FS and the ES. One such adjustment involved extending the duration of certain games from 2 to 3 minutes. Metrics that were affected by this change and are time dependent were documented as metrics over time (min–1). The remaining minor adjustments to the training concept did not affect the analyses or the interpretation of our results and included factors such as the introduction of newly available games, which were not analyzed in this study.
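The selection step described above and the correlation procedure detailed in the following paragraphs can be summarized in the R sketch below. The data frames, column names, and helper function are hypothetical and serve only to illustrate the logic; they are not the original analysis script.

```r
# Illustrative sketch of the cross-sectional analysis; 'gameplays' is assumed to hold one
# row per participant, game, metric, and repetition (hypothetical columns: participant_id,
# game, metric, level, repetition, value, duration_min, time_dependent).
library(dplyr)

# Keep only the first time each participant played level 1 of a game and express
# time-dependent metrics as rates per minute.
first_plays <- gameplays %>%
  filter(level == 1) %>%
  group_by(participant_id, game, metric) %>%
  slice_min(repetition, n = 1, with_ties = FALSE) %>%
  ungroup() %>%
  mutate(value = if_else(time_dependent, value / duration_min, value))

# 1-sided correlation between a game metric and its reference assessment: Pearson if both
# variables pass the Shapiro-Wilk test, Spearman otherwise; the predefined criteria are
# P<=.05 (uncorrected, 1-sided) and a correlation coefficient of >=0.4.
correlate_metric <- function(metric_values, reference_scores, direction = "greater") {
  normal <- shapiro.test(metric_values)$p.value > .05 &&
    shapiro.test(reference_scores)$p.value > .05
  method <- if (normal) "pearson" else "spearman"
  test <- cor.test(metric_values, reference_scores, method = method,
                   alternative = direction, exact = FALSE)
  data.frame(method = method,
             estimate = unname(test$estimate),
             p_value = test$p.value,
             ha_met = test$p.value <= .05 & abs(test$estimate) >= 0.4)
}
```

For metrics in which lower values indicate better performance (eg, mean reaction time), direction = "less" would be passed so that the 1-sided test matches the expected negative correlation.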
One-way bivariate correlation analyses were conducted to investigate the relationship between game metrics and performance in clinical assessment within the corresponding neurocognitive subdomains. Depending on the distribution of the data, either Pearson (parametric statistical analyses) or Spearman (nonparametric statistical analyses) correlation analyses were conducted. P values and correlation coefficients (Pearson r or Spearman ρ), including 95% CIs, were calculated (using bootstrap for nonparametric analyses) [
]. To confirm the alternative hypothesis, we predetermined the following criteria: (1) statistically significant correlation (P≤.05; uncorrected; 1-sided) with (2) a correlation coefficient of ρ≥0.4, as proposed by Streiner et al [ ] for studies in which perfect correlations are rare and many influencing factors are outside of the researchers’ control, which is the case in this analysis due to the complex nature of exergames and various influences such as task design, external factors, and personal characteristics. The correlation coefficients were interpreted as weak (ρ<0.3), moderate (ρ=0.3-0.5), or strong (ρ>0.5) [ , ].
We decided against applying a Bonferroni correction routinely for multiple comparisons considering our hypothesis and methodology. This is recommended because our study did not imperatively require the avoidance of type-I errors and we had preplanned hypotheses [
]. In addition, correlations were assessed only within their predefined neurocognitive subdomains, making our data dependent. Sullivan and Feinn [ ] noted that applying the Bonferroni correction to dependent data may hinder the detection of relevant associations and increase the risk of a type-II error.
In this study, we did not conduct an a priori sample size calculation as we analyzed existing datasets from studies conducted as part of the Brain-IT project. Therefore, the sample size for these analyses was determined based on the number of datasets available from the 2 RCTs and was fixed and could not be altered. To ensure appropriate interpretation of our results, we conducted a post hoc power analysis using G*Power (version 3.1) [
, ]. As the Spearman rank correlation coefficient is computationally identical to the Pearson product-moment coefficient applied to the ranked data, power analyses for nonparametric analyses were conducted using the same methodology as for estimating the power of a Pearson correlation.
Secondary Objective: Longitudinal Analysis
For the secondary analysis, we only used game metrics that met the predefined criteria of the primary objective to ensure the validity of this analysis. For this analysis, data were collected longitudinally over the 12-week intervention period, excluding participants who dropped out during the intervention. The definition of each of our 10 game levels followed the principles described in the extended taxonomy supporting motor-cognitive learning by Huber et al [
]. As this taxonomy specifically integrates the model of skill acquisition [ ] that has also been described to apply to cognitive skill learning and relearning [ ], we based our longitudinal analysis on the model of skill acquisition [ ], which integrates the 3-stage model of human skill acquisition by Fitts [ ].Accordingly, the performance data (game metrics) were presented as a function of time. As participants did not play each game an equal number of times, we expressed the variable “time” as a percentage of game completion time (ie, the first time that a participant played the game was 1%, and the last time they played the game was 100%). Full completion (100%) represented either the introduction of a new, slightly more challenging game or the end of the intervention. In addition, a second graph was generated displaying the performance changes within specific game levels. This was done to provide further insights into the data and is particularly relevant for accuracy metrics (eg, precision score), for which we expected different characteristics of the performance curves. More specifically, we generally expected either the typical sigmoid curve or curvilinear curves of rapid initial improvements followed by a plateau in performance. However, for accuracy metrics, we expected that this curve would only be visible within a specific game level, with a slight drop in accuracy each time the game level was increased.
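As an illustration of this time normalization and of the curve fitting described in the next paragraphs, the following R sketch plots individual progression curves together with a locally estimated scatterplot smoothing overall curve. The long-format data frame and its column names are hypothetical and only sketch the approach.

```r
# Illustrative sketch of the exploratory longitudinal visualization; 'long_data' is assumed
# to hold one row per participant, game, metric, and play, in chronological order
# (hypothetical columns: participant_id, game, metric, play_index, value).
library(dplyr)
library(ggplot2)

# Express "time" as the percentage of each participant's total plays of a game
# (first play close to 1%, last play = 100%).
long_data <- long_data %>%
  group_by(participant_id, game, metric) %>%
  arrange(play_index, .by_group = TRUE) %>%
  mutate(pct_completed = 100 * play_index / max(play_index)) %>%
  ungroup()

# Individual progression curves plus a LOESS-smoothed overall curve per game metric.
ggplot(long_data, aes(x = pct_completed, y = value)) +
  geom_line(aes(group = participant_id), alpha = 0.3) +
  geom_smooth(method = "loess", se = TRUE) +
  facet_wrap(~ paste(game, metric, sep = ": "), scales = "free_y") +
  labs(x = "Gameplay completion (%)", y = "Game metric value")
```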
Performance change curves were visually represented for each individual participant along with an overall curve that reflected the average performance across all participants [
]. However, we refrained from making any a priori assumptions on the type of relationships (eg, linear vs curvilinear relationship) as, on the one hand, our algorithmic decisions on individualized progression of exergame demands aimed to “break through” the expected (repeated) plateau in performance, which would be expected to create repeated curvilinear curves with a slight drop in performance after each plateau once the game difficulty was increased; on the other hand, there are no previous analyses available for the exergame systems and metrics used to base this theory-derived assumption on, whereas similar previous research has observed a more linear progression change over time in patients with mNCD in contrast to the typical curvilinear curves observed in healthy older adults [ ]. Therefore, we chose to use the locally estimated scatterplot smoothing regression model to fit smooth curves, which provides a flexible approach to visualize data. This model is recommended for exploratory analysis because there is no need for a priori specification of relationships [ ]. Finally, we measured and reported the median and IQR of the number of times that participants played each game.
Results
Overview
A summary of the participant flow is provided in
. Data were available for a total of 31 participants who were allocated to the intervention group in the 2 RCTs analyzed. The time between the initial premeasurement with the group allocation and the start of the intervention ranged from 1 to 2 weeks.
Demographic Data
The demographic and clinical characteristics of all participants are listed in the table below.

| Participant characteristic | Values |
| --- | --- |
| Age (y), mean (SD) | 76.4 (7.5) |
| Sex (female), n (%) | 9 (29) |
| Education (y), mean (SD) | 14.9 (4.0) |
| BMI (kg/m²), mean (SD) | 24 (2.4) |
| Global cognitive performance (QMCIb total score), mean (SD) | 57.4 (14.0) |
| Etiology of mNCDc, n (%) | |
| mNCD due to Alzheimer disease | 19 (61) |
| Mild frontotemporal NCDd | 4 (13) |
| Mild vascular NCD | 5 (16) |
| Unclear or not yet determined | 3 (10) |
aDemographic information was assessed based on self-report of the study participants. Global cognitive performance was assessed using the validated German version [
] of the Quick Mild Cognitive Impairment screen total score [ , ]. The characterization of the etiology of mild neurocognitive disorder was derived from the diagnostic information provided by the collaborating recruitment partners.
bQMCI: Quick Mild Cognitive Impairment screen.
cmNCD: mild neurocognitive disorder.
dNCD: neurocognitive disorder.
Primary Objective: Cross-Sectional Analysis
presents the results of the correlation analyses between game metrics and neurocognitive reference assessments. In the neurocognitive domain of learning and memory, 3 metrics of the game Shopping Tour (ie, mean reaction time, precision score, and collected items) and 1 metric of the game Simon (ie, point rate) met the alternative hypothesis criteria by exhibiting statistically significant moderate to strong correlations with the corresponding reference assessments (see for all P values). In the neurocognitive domains of executive function and visuospatial skills, the metric point rate of the games Targets and Gears met the alternative hypothesis criteria by exhibiting statistically significant moderate correlations with the corresponding reference assessments. None of the metrics of the games targeting the neurocognitive domain of complex attention or of the remaining 2 games, Nomis and Tetris, met the alternative hypothesis criteria. Descriptive statistics for all outcome variables are provided in Tables S2 and S3 in .
| Neurocognitive domain, game and reference assessment, and outcome | Sample size, n (%) | Type of analysis | P value | Spearman ρ (95% CI) | Pearson r (95% CI) | HAb met? | Statistical power (post hoc) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Learning and memory | | | | | | | |
| Shopping Tour (WMS-IV-LM 1c score) | | | | | | | |
| Mean reaction time (ms) | 31 (100) | Nonparametric | <.001d | –0.747e (–0.876 to 0.548) | —f | Yes | 0.979 |
| Number of mistakes | 31 (100) | Parametric | .15 | — | –0.264 (–0.565 to 0.100) | No | 0.669 |
| Number of collected items | 31 (100) | Nonparametric | <.001d | 0.691 (0.478 to 0.830) | — | Yes | 0.923 |
| Precision score (%) | 31 (100) | Parametric | <.001d | — | 0.607 (0.374 to 1) | Yes | 0.739 |
| Simon (PEBLg DSFh score) | | | | | | | |
| Mean reaction time (ms) | 26 (84) | Nonparametric | .04d | –0.344 (–0.680 to 0.145) | — | No | 0.516 |
| Point rate | 26 (84) | Parametric | .008d | — | 0.471 (0.166 to 1) | Yes | 0.528 |
| Executive function | | | | | | | |
| Nomis (PEBL DSBi score) | | | | | | | |
| Mean reaction time (ms) | 13 (42) | Nonparametric | .40 | –0.079 (–0.575 to 0.572) | — | No | 0.505 |
| Point rate | 13 (42) | Nonparametric | .60 | –0.081 (–0.590 to 0.579) | — | No | 0.705 |
| Targets (HOTAP combined score) | | | | | | | |
| Number of hits | 30 (97) | Nonparametric | .03d | 0.353 (–0.062 to 0.626) | — | No | 0.514 |
| Number of misses | 30 (97) | Nonparametric | .08 | –0.264 (–0.597 to 0.098) | — | No | 0.512 |
| Point rate | 30 (97) | Nonparametric | .006d | 0.455 (0.144 to 0.715) | — | Yes | 0.523 |
| Complex attention | | | | | | | |
| Habitats (TAPj Go-No go reaction time) | | | | | | | |
| Mean reaction time (ms) | 28 (90) | Parametric | .25 | — | –0.131 (–1 to 0.195) | No | 0.506 |
| Point rate | 28 (90) | Nonparametric | .36 | 0.072 (–0.371 to 0.448) | — | No | 0.503 |
| Simple (PEBL TMT-Ak completion time) | | | | | | | |
| Mean reaction time (ms) | 29 (94) | Nonparametric | .58 | 0.039 (–0.361 to 0.474) | — | No | 0.719 |
| Point rate | 29 (94) | Nonparametric | .92 | –0.264 (–0.541 to 0.079) | — | No | 0.997 |
| Visuospatial skills | | | | | | | |
| Gears (PEBL MRTl performance score) | | | | | | | |
| Mean reaction time (ms) | 18 (58) | Nonparametric | .63 | 0.085 (–0.402 to 0.663) | — | No | 0.752 |
| Point rate | 18 (58) | Nonparametric | .02d | 0.474 (0.018 to 0.787) | — | Yes | 0.520 |
| Tetris (PEBL MRT performance score) | | | | | | | |
| Point score | 10 (32) | Nonparametric | .06 | –0.043 (–0.783 to 0.089) | — | No | 0.069 |
aThe categorization of exergames into the neurocognitive domains and subdomains was derived from the published Brain-IT training concept, and for each game, we selected the most appropriate reference assessment from the available published datasets (
). The variables listed in the first column describe all game metrics that were computed by the exergame system and, therefore, available for analyses (ie, no selection or exclusion of specific metrics).
bHA: alternative hypothesis.
cWMS-IV-LM 1: Wechsler Memory Scale–Fourth Edition–Logical Memory subtest part 1.
dStatistically significant at P<.05.
eItalics indicate a correlation coefficient of >0.4.
fNot applicable.
gPEBL: Psychology Experiment Building Language.
hDSF: digit span forward.
iDSB: digit span backward.
jTAP: Test of Attentional Performance.
kTMT-A: Trail Making Test part A.
lMRT: Mental Rotation Task.
Secondary Objectives: Longitudinal Analysis
This analysis included the 6 game metrics that met the criteria for the primary objective.
presents the results for the number of times that the participants played each game. shows the performance progression curves over gameplay time in percentages. For all game metrics, there was an initial increase in average performance progression, or a decrease in the case of the metric “mean reaction time.” This was followed by a plateau (see Simon, Targets, and the mean reaction time metric of Shopping Tour) or even a slight decline (see Gears and the precision score and collected items metrics of Shopping Tour) in performance. Interindividual differences in performance progression were observed, particularly for the metric “point rate.” While some individuals exhibited a flatter curve, others had a more pronounced initial increase or a sigmoidal curve pattern. shows the performance progression within difficulty levels. In addition to the findings shown in , we observed that, in the game Shopping Tour, the metrics “precision score” and “collected items” increased within most levels, whereas the metric of mean reaction time initially declined and then leveled off. For the metric of point rate in the games Gears and Simon, we observed considerable drops in performance between certain levels of difficulty.

| Game | Sessions played, median (IQR) |
| --- | --- |
| Shopping Tour | 132 (99) |
| Simon | 72 (23) |
| Targets | 95 (44) |
| Gears | 64 (25) |
aThe descriptive statistics presented refer to the median and the IQR of the number of sessions that the study participants (individuals with mild neurocognitive disorder) played in each of the exergames analyzed in this study over the 12-week intervention period.


Discussion
Principal Findings
Overview
This study aimed to identify valid game metrics as indicators for in-game domain-specific cognitive performance during exergaming in individuals with mNCD by analyzing the correlation between game metrics and neurocognitive assessments. Key findings were that 33% (6/18) of the game metrics within the neurocognitive domains of learning and memory, executive function, and visuospatial skills were identified as valid indicators for in-game domain-specific cognitive performance during exergaming. This study’s secondary objective was to visually explore game metric performance changes over time and within difficulty levels. The results revealed high interindividual variability and overall trends of the expected typical curvilinear curves of rapid initial improvements followed by a plateau in performance.
Primary Outcome: Cross-Sectional Analysis
In total, 4 games (Shopping Tour, Simon, Targets, and Gears) had at least one metric that met the predefined criteria and, therefore, was identified as a valid indicator for in-game domain-specific cognitive performance during exergaming. We observed that 83% (5/6) of the game metrics that met our alternative hypothesis reflected the precision of response (ie, point rate or precision score), whereas measures of the speed of response (ie, mean reaction time) met the criteria in only 17% (1/6) of cases even in the neurocognitive domain of “complex attention,” which specifically trains processing speed. Furthermore, the FS of the Brain-IT training concept has already shown that reaction times had a high inter- and intraindividual variability, making it difficult to initiate training adaptations based on these game metrics [
]. These observations suggest that game metrics reflecting the precision of response may be a better indicator of cognitive performance during exergaming than metrics reflecting the speed of response. The game Shopping Tour exhibited strong correlations in 75% (3/4) of the metrics, whereas other games exhibited weak correlations in all collected metrics. This suggests that the results may depend on two main factors: (1) the validity of game metrics as an indicator of performance and (2) how well the game content aligns with its assigned neurocognitive domain or subdomain. The second factor, the content validity of the game, was confirmed by neuropsychologists during the design phase of the Brain-IT training concept [ ]. However, it is important to note that exergames typically target different neurocognitive functions as well as motor skills. Although each game focuses on a specific neurocognitive domain or subdomain, other domains are often indirectly trained, which may confound the domain-specific analyses. Therefore, the observation that certain games displayed no metric that met the alternative hypothesis criteria is likely related to the validity of the game metrics as indicators of performance or to the content of the games not being specific enough to target the corresponding neurocognitive domains or subdomains.While we have established that game metrics and game content may contribute to measuring domain-specific exergame performance, the complex nature of exergames indicates that additional factors more closely related to the training itself should also be considered. These factors are typically minimized in clinical assessments, yet they are a significant aspect of exergaming and may have influenced the outcomes of this study. An important consideration is the influence of the exergame design, including both hardware and software, on player performance. One such task design aspect is the scoring system. If certain tasks within the game are not appropriately scored relative to their difficulty, this could result in an inaccurate reflection of performance measures. Another consideration is the involvement of different difficulty levels. In our analysis, we correlated game metrics from only the first level of difficulty, which was considered an introductory level and may not have been challenging enough to show the full capabilities of the participants. In line with this consideration, Litz et al [
] conducted a similar study correlating exergame metrics with clinical assessments but across various difficulty levels and found some variable outcomes for the same game in different levels, with a tendency toward more consistent relationships between in-game metrics and standardized cognitive assessments at higher difficulty levels or when averaging scores over all game levels. Furthermore, exergames often include feedback mechanisms that reward correct decision-making (ie, precision of response) more than decision speed, which may explain the low correlation with reaction time metrics in our results. External factors such as physical environment or external support, as well as personal factors such as emotional status or self-efficacy [ ], should also be considered as potential influencers of performance. In relation to this, Sajjadi et al [ ] conducted a systematic review on adapting serious games to users with the aim of increasing engagement and effectiveness. This review proposed the use of game metrics for in-game adaptions but also highlighted the importance of considering additional factors such as physiological states and the personal traits of the user [ ].Our findings align with those of previous studies that have analyzed correlated metrics from exergames with clinical assessments of cognitive performance [
, , , ]. Specifically, Guimarães et al [ ] correlated game metrics measured on the first gameplay against standard cognitive and mobility assessments, including the Montreal Cognitive Assessment (MoCA), in 13 older adults with mobility limitations. They observed that 2 of the 16 analyzed game metrics (from 4 different games) showed a statistically significant correlation with the MoCA total score, both related to scores on precision or accuracy of responses. They also correlated these game metrics with all the MoCA subdomain scores and found some (ie, 8 of the 112 analyses) statistically significant correlations with the visuospatial, executive functioning, and abstraction components of the MoCA. Similar results were observed for the physical performance outcomes, with several (17 of the 48 analyses) step characteristic outcomes during exergaming showing statistically significant correlations with physical test scores [ ]. Petsani et al [ ] computed Pearson correlations between the in-game metrics of 13 individuals with Parkinson disease with the Symbol Digit Modalities Test using a feature vector containing the mean value of all sessions for each in-game metric for each participant. They observed that 1 of the 11 analyzed game metrics (from 4 different games) showed a strong correlation with the Symbol Digit Modalities Test score [ ]. Konstantinidis et al [ ] computed Pearson correlations between an aggregated score derived from various in-game metrics of 113 participants (ie, 38 cognitively healthy, 64 with mild cognitive impairment, and 14 with mild dementia) with the Mini-Mental State Examination (MMSE), MoCA, and Trail Making Test parts A and B. They found consistent, statistically significant, moderate to strong correlations for all 4 analyses [ ]. Finally, Litz et al [ ] computed Spearman rank correlations between in-game metrics and 5 outcomes from 4 cognitive assessments in independently living older adults who were cognitively intact and multimorbid. They observed consistent moderate to strong correlations for most (88 out of 130) analyses, with similar observations for a range of physical assessments [ ].These observations indicate that only a selection of game metrics may be valid indicators for in-game cognitive performance. However, identifying specific game metrics that performed better across these different studies was challenging due to the high heterogeneity in study design, analyzed game metrics, and reference assessments used. Notably, while our study was based on a domain-specific approach, these previous studies correlated game metrics with a range of cognitive (and physical) assessments without initial categorization of game specificity. In addition, they frequently used broad assessments, such as the MoCA and MMSE, which are tests that cover several cognitive domains in the outcome score [
, ]. The correlation analyses between different game metrics and the MoCA or MMSE ranged from weak [ , ] to moderate [ , ] and strong [ , ], indicating diverse outcomes. When investigating specific neurocognitive domains, we also found diverse findings. In contrast to our findings, Litz et al [ ] found moderate to strong correlations for the metrics scores and reaction time within the domain of complex attention. On the other hand, we found some moderate to strong correlations within the domain of learning and memory, whereas Guimarães et al [ ] did not find such correlations in similar games.Some studies have also used game metrics to predict cognitive or physical performance as measured using standardized clinical assessments. Petsani et al [
] used a decision tree classifier with the Gini index to predict, based on in-game metrics, whether participants belonged to groups with better or worse physical and cognitive states. This categorization was derived from clustering analysis based on 18 outcomes from a set of neuropsychological and physical assessments. They observed a high classification accuracy at 84.6% [ ]. Konstantinidis et al [ ] used a multilayer feedforward neural network to assess the predictive value of their aggregated game score and found an overall accuracy of 70.69% to distinguish between cognitively healthy, mild cognitive impairment, and mild dementia [ ].In summary, previous studies have shown mixed results but were able to identify certain game metrics that appear as valid indicators for physical, cognitive, or motor-cognitive performance in exergaming and may be reflective of standardized neuropsychological or physical assessments. However, while the use of such game metrics to individually tailor and adjust the games is widespread, these metrics have evidently rarely been scientifically investigated as we only identified a handful of similar previous studies [
, , , ]. Therefore, it appears important to carefully select and prevalidate game metrics in future studies, especially when they are used for purposes such as in-game adaptations or performance monitoring. Given the current advancements in computing fields such as artificial intelligence, it is likely that real-time personalized adaptation will play an even more important role in the future of exergames [ ]. This development requires valid metrics as indicators of physical and cognitive in-game performance characteristics. The challenge for researchers and game developers is to identify or develop valid game metrics and integrate them strategically into exergame designs, preferably with a strong background in sports science or neuroscience to ensure construct validity, as previously proposed by our research group [ ].
Secondary Outcome: Longitudinal Analysis
The results of the secondary objective indicated a common trend across all games, with the expected typical curvilinear curves of rapid initial improvements suggesting an early phase of skill acquisition followed by a plateau in performance. This finding is consistent with those of the existing literature [
, , ] as well as with models of skill acquisition [ , ] that have also been described to apply to cognitive skill learning and relearning [ ]. Specifically, the model of human performance by Fitts and Posner [ ] identifies 3 stages: the cognitive phase, in which the user learns what to do and experiences a rapid performance gain; the associative phase, in which the user refines their actions and increases efficiency; and, finally, the autonomous phase, in which the skill becomes automatic and the user becomes proficient.In our analysis, certain game metrics exhibited a more pronounced plateau (ie, Simon) or even a decline in performance (ie, Gears and the collected items metric of Shopping Tour), which would not be expected from the literature and suggests that players may not have experienced the associative phase. However, upon examining the data across specific difficulty levels, we observed that these plateaus did not necessarily indicate a halt in performance improvement but, rather, an increase in task difficulty that was not appropriately rewarded by the game metrics. In some games, the task difficulty significantly increased between certain levels (eg, Gears with more complex figures or Simon with longer sequences to remember; see the published training concept in Supplementary File 2 of the ES [
]). Between these levels of increasing task difficulty, a clear drop in performance was observed (eg, in Gears after levels 3 and 6 or in Simon after levels 1, 3, 5, and 7). This indicates that, for these games, the scoring system failed to account for varying levels of difficulty by awarding the same number of points for correctly completed tasks of varying difficulty. In contrast, the performance progression of Targets and the precision score metric of Shopping Tour demonstrated a consistent improvement with individual dips and leaps as is expected [ ]. This indicates that the adaptation mechanisms were functioning well for these games and the game metrics can be used to monitor performance progression. Furthermore, for the game Shopping Tour, we observed inconsistent curves for different metrics. The mean reaction time decreased as the game level increased, reached an early plateau, and remained low. This was a somewhat unexpected finding as we expected reaction times to increase slightly and then show the typical reciprocal curve with repeated practice within the same game level, similar to what we expected from the precision metric. This observation suggests that average reaction speed may play a subordinate role in this game. In contrast, the precision score showed continuous improvement, with the expected temporary drops immediately after increasing the game levels (with the exception of the increase from level 1 to 2), which was similar to what was found in the study by Guimarães et al [ ]. The exceptional observation that the precision score in the game Shopping Tour did not temporarily drop when the game level increased from level 1 to level 2 may be explained by (1) early habituation and learning effects in the use of and interaction with the exergame device and the specific game and (2) the fact that the increment between these levels may have been too small to induce a temporary drop in performance.Overall, these observations may indicate that measures of precision of response are also better in reflecting performance changes over time compared to measures of speed of response. In general, both visual representations of performance progression (over gameplay time in
and across difficulty levels in ) provided valuable insights. If the scoring system can account for varying levels of difficulty (with scores that increase with task difficulty), the first option ( ) may be recommended because it enables the analysis of an individual’s overall performance progression regardless of the difficulty level they finally reached. However, this approach does not apply to accuracy metrics (eg, precision score). Accuracy metrics should exhibit a plateau curve toward maximum precision followed by a sharp initial drop after each level increase in the game followed by the typical reciprocal curves of improvement followed by a plateau in performance. This pattern is consistent with the results obtained in this study. However, it can only be observed in the visual representation that shows different levels of difficulty ( ). In summary, our findings indicate the need for adaptable game metrics that can also account for varying tasks and levels of difficulty within the game.Our data also demonstrate high interindividual variability in performance progression. These findings could be associated with the high variability in adherence (total sessions played per participant), which was mainly because some participants played the games much more often than instructed. In particular, the total number of completed sessions ranged from 33 to 159, with an average number of completed sessions of 54.4 (SD 13.0) in the FS and 71.5 (SD 26.2) in the ES [
Our data also demonstrate high interindividual variability in performance progression. These findings could be associated with the high variability in adherence (total sessions played per participant), which was mainly because some participants played the games much more often than instructed. In particular, the total number of completed sessions ranged from 33 to 159, with an average number of completed sessions of 54.4 (SD 13.0) in the FS and 71.5 (SD 26.2) in the ES [ , ]. However, it could also be associated with clinical or demographic characteristics of the participants. Research confirms that individuals adapt their skills at different rates, which might be attributed to different learning styles [ , , ]. Furthermore, previous studies have observed that cognitively impaired groups exhibit a flatter performance progression with a less prominent initial increase compared to healthy controls [ , ]. Other studies have suggested that age influences individual performance progress in serious games [ , ]. Therefore, the observed interindividual variability in our analysis could be considered normal and an expected outcome as our group was relatively heterogeneous in clinical and demographic characteristics. Additional research is required to explore what individual characteristics may lead to different performance progressions; identify predictors for steeper performance curves in this field; and apply these findings to identify and adopt strategies that facilitate individual learning success and, thus, effectiveness in improving cognitive (and physical) performance.
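As an illustration of how the reciprocal (power-law-of-practice) curves discussed above could be quantified per participant, for example, to identify steeper learning curves, the following is a minimal Python sketch using synthetic data; it is not the curve-fitting procedure used in this study, and the parameter names are assumptions.

# Hypothetical sketch: fitting the power law of practice to one participant's
# per-session mean reaction times to quantify the individual learning rate.
# Synthetic data and parameter names are illustrative, not study data.

import numpy as np
from scipy.optimize import curve_fit

def power_law_of_practice(session, asymptote, gain, rate):
    """Expected reaction time (s) after a given number of practice sessions."""
    return asymptote + gain * session ** (-rate)

rng = np.random.default_rng(42)
sessions = np.arange(1, 25, dtype=float)                      # session index 1..24
true_rt = power_law_of_practice(sessions, 0.9, 1.2, 0.7)
observed_rt = true_rt + rng.normal(0, 0.05, sessions.size)    # noisy observations

params, _ = curve_fit(power_law_of_practice, sessions, observed_rt,
                      p0=[1.0, 1.0, 0.5], maxfev=10_000)
asymptote, gain, rate = params
print(f"asymptote={asymptote:.2f} s, initial gain={gain:.2f}, learning rate={rate:.2f}")
# A larger fitted 'rate' indicates a steeper initial improvement followed by an earlier plateau.

Fitting such a curve separately for each participant would yield an individual learning-rate parameter that could, in principle, be related to baseline characteristics in future subgroup analyses.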
Strengths and Limitations
One of the main strengths of this study was that we conducted the correlation analyses between game metrics and clinical assessments only within specific preassigned neurocognitive domains or subdomains. This approach was enabled by the already available content validation of the games to their primary trained neurocognitive domain or subdomain within the Brain-IT training concept [
]. Therefore, the primary objective was hypothesis driven, with predefined criteria for interpretation of the results. In addition, we used data from clinically validated and well-established neuropsychological assessments across all neurocognitive domains.
There are also some limitations that need to be discussed. First and most importantly, the sample size was small, and we did not conduct an a priori sample size calculation as we analyzed existing datasets from studies conducted as part of the Brain-IT project. To ensure appropriate interpretation of our results, we conducted a post hoc power analysis using G*Power (version 3.1) [
, ], which revealed that most of our analyses were underpowered. This may have resulted in false-negative or false-positive findings, limits the robustness and generalizability of our results, and warrants confirmatory studies with a priori sample size calculations to ensure more reliable and robust conclusions. Taking our criteria for confirming the alternative hypotheses as assumptions for a power analysis, a sample size of 37 or 50 would be required to show (1) a statistically significant correlation (P≤.05; uncorrected; 1-sided) with (2) a correlation coefficient of ρ≥0.4 with sufficient power of 1-β>.80 or 1-β>.90, respectively, which may be used as a reference for planning future studies on this topic.
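For planning such studies, the following is a minimal Python sketch of the Fisher z approximation that underlies this type of sample size estimate; it is not the software used in this study, and because G*Power uses an exact computation, the approximate values may differ from the reported ones by about one participant.

# Minimal sketch of the Fisher z approximation for the required sample size to
# detect a correlation of a given magnitude; results are close to, but not
# identical with, the exact G*Power values reported in the text.

from math import atanh, ceil
from scipy.stats import norm

def n_for_correlation(r: float, alpha: float = 0.05, power: float = 0.80,
                      one_sided: bool = True) -> int:
    """Approximate sample size to detect a correlation of size r."""
    z_alpha = norm.ppf(1 - alpha) if one_sided else norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    effect = atanh(r)  # Fisher z transformation of the correlation coefficient
    return ceil(((z_alpha + z_beta) / effect) ** 2 + 3)

print(n_for_correlation(0.4, power=0.80))  # ~38, close to the 37 obtained with G*Power
print(n_for_correlation(0.4, power=0.90))  # ~51, close to the 50 obtained with G*Power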
Second, this study was a secondary analysis, and neither the original studies nor the intervention was designed a priori to investigate the objectives of this study, which restricted our methodological choices. Although the game sessions were standardized in terms of game design and intervention delivery, there were interindividual differences in gameplay due to personalization and individual progression of the intervention.
Specifically, these restrictions limited the following methodological aspects of the primary research question. According to the Brain-IT training concept, the games were individually assigned considering the participants’ baseline neuropsychological performance, with all participants starting at level 1 of the game demands in the first training session. Thereafter, either the games progressed or more difficult games were introduced as soon as (1) a plateau in performance was reached on predetermined game metrics, (2) a predetermined target score was reached on a predetermined game metric for progression to the next level, (3) the participant requested an increase in task demands, or (4) the staff supervising the participants deemed the increase in task demands feasible [
]). Given these rules for individualized progression of exergame demands, the path through these games and game levels was different for each participant, with large differences in the time to progress to the next levels and the maximum level reached after the 12-week intervention (as shown in ). For example, one participant might reach level 5 after only 2 weeks of training, whereas another might not reach this level until after 12 weeks. As the first research objective was tied to baseline cognitive performance as measured using standardized neuropsychological assessments, only the first level provided the same standardized conditions for all study participants, allowing the cross-sectional analyses to address this research objective. This methodological limitation may have resulted in participants with higher baseline functioning being underchallenged. However, we did not observe any floor or ceiling effects in any of the game metrics except for number of misses in the game Targets and point score in the game Tetris (Figures S1-S8 in ), whereas there was substantial variability in all game metrics (Table S3 in ). Therefore, it can be assumed that this limitation did not have a substantial impact on the observed results. Nevertheless, analysis of higher levels of difficulty would undoubtedly provide additional insights. For example, Litz et al [ ] observed a tendency toward more consistent relationships between in-game metrics and standardized cognitive assessments at higher difficulty levels or when averaging scores over all game levels [ ]. However, given the individual paths through the game levels and the corresponding time differences to reach a given level in this study, it would be impossible to disentangle learning effects (for participants who had already played the game several times) and performance improvements over time during the intervention from "true" cognitive performance as measured at baseline. Furthermore, not all participants played the same games, resulting in smaller sample sizes in some games. Games that were considered more difficult, such as Nomis and Tetris, had small sample sizes as they were only played by participants with a higher baseline cognitive performance in the neurocognitive domains of executive functioning and visuospatial skills, which informed the allocation of the games in the first session. This could have biased the analysis. Therefore, future studies should be designed to specifically address the research question of this study rather than relying on existing datasets that might restrict the methodological choices in the analyses of the data. This entails a cross-sectional design with a priori sample size calculations rather than the use of data from longitudinal studies, as well as the application of different game levels to investigate whether the game levels induce the appropriate changes in perceived task demands (eg, as has been investigated in the work by Manser and de Bruin [ ]) and whether and how performance at higher difficulty levels might differentially relate to cognitive performance assessed using standardized neuropsychological assessments.
The personalization and individualized progression of the training also presented a challenge for the analysis of our secondary research question (ie, performance progression) as players varied in the number of times they played the games and difficulty levels were individually adapted.
To account for these interindividual differences, we presented performance progression across gameplay by normalizing each participant’s time to game completion as a percentage. Using this method, we were able to account for variability in adherence and difficulty levels at a group level. However, this method may result in the loss of some information because individual curves are compressed or stretched.
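A minimal sketch of one way such a normalization could be implemented is shown below (Python; the variable names are hypothetical and this is not the code used in this study): each participant’s session index is rescaled to 0% to 100% of their individual gameplay, and the per-session metric is interpolated onto a common percentage grid so that curves can be averaged at the group level.

# Hypothetical sketch of the normalization described above.

import numpy as np

def normalize_progression(metric_per_session: np.ndarray, grid_points: int = 101) -> np.ndarray:
    """Interpolate one participant's session-wise metric onto a 0-100% completion grid."""
    n_sessions = metric_per_session.size
    pct_completed = np.linspace(0.0, 100.0, n_sessions)   # participant's own time axis
    common_grid = np.linspace(0.0, 100.0, grid_points)    # shared 0-100% axis
    return np.interp(common_grid, pct_completed, metric_per_session)

# Two participants with different adherence (12 vs 40 sessions) become comparable:
participant_a = np.linspace(10, 30, 12)   # eg, point rate improving over 12 sessions
participant_b = np.linspace(8, 35, 40)    # eg, point rate improving over 40 sessions
group_mean_curve = np.mean([normalize_progression(participant_a),
                            normalize_progression(participant_b)], axis=0)
print(group_mean_curve[[0, 50, 100]])     # group mean at 0%, 50%, and 100% of gameplay

The interpolation step is also where the compression or stretching of individual curves mentioned above occurs, which is the price of comparability at the group level.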
These methodological constraints also prevented us from conducting more in-depth analyses of the observed high interindividual variability in performance progression (see and with color coding of individual participant progression curves), such as exploratory analyses to identify potential factors contributing to the variability (eg, age, years of education, and baseline cognitive status) as these factors (particularly baseline cognitive status) influenced individualized tailoring (personalized allocation as well as individualized progression of games and game levels). Therefore, future studies should be specifically designed to account for these aspects and allow for more in-depth analyses to disentangle the factors that influence the high variability in performance progression over time and to derive evidence-based recommendations on how to account for this variability when individually tailoring exergame-based training.
Third, although each exergame focused on 1 specific neurocognitive domain or subdomain, it often indirectly trained other domains, which may have confounded the domain-specific analysis. Fourth, the absence of data from specific motor assessments precluded their inclusion in our analyses. Finally, this study only included older adults with mNCD, and the results cannot be generalized to other populations.
Implications for Research
The personalization of training is becoming increasingly popular in the current development of technology-supported interventions, including exergames. Within this field, research should aim to identify valid parameters that can be used for in-game adaptation. The potential of game metrics should be further investigated through studies with structured interventions while also considering the impact of game task design as well as personal and external factors. Given the proposal by Netz [
] that (adherence to) optimal exercise intensity and motor-cognitive task complexity are the driving mechanisms for global and task-specific neuroplasticity, respectively, and as discussed in previous research, personalized training adaptations should go beyond game metrics and also consider additional aspects of the players related to physiological states (eg, attention and stress) or personal characteristics (eg, learning style and intelligence) to provide an optimal exercise intensity and motor-cognitive task complexity [ ]. Different biofeedback systems, such as biocybernetic adaptation loops, which continuously adjust game difficulty or game content in response to real-time physiological data from the user [ - ], are currently being investigated in this area and should be further explored.
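As a purely illustrative example of the core logic of such a loop (and not a description of any specific system evaluated in this study), the following Python sketch nudges the difficulty level up or down depending on whether a streamed heart rate, taken here as a proxy for exercise intensity, falls below or above a target zone; the signal choice, target zone, and step size are assumptions.

# Hypothetical sketch of a biocybernetic adaptation loop: game demands are
# adjusted between game blocks based on a physiological signal (here, heart rate).

def adapt_difficulty(current_level: int, heart_rate: float,
                     target_zone: tuple[float, float] = (100.0, 125.0),
                     min_level: int = 1, max_level: int = 10) -> int:
    """Return the difficulty level for the next game block."""
    low, high = target_zone
    if heart_rate < low:                      # user is underchallenged -> increase demands
        return min(current_level + 1, max_level)
    if heart_rate > high:                     # user is overchallenged -> decrease demands
        return max(current_level - 1, min_level)
    return current_level                      # within the target zone -> keep demands constant

# Simulated loop over consecutive game blocks with streamed heart rate readings:
level = 3
for hr in [92.0, 97.0, 110.0, 131.0, 118.0]:
    level = adapt_difficulty(level, hr)
    print(f"heart rate {hr:.0f} bpm -> difficulty level {level}")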
Furthermore, previous reviews have identified several features of fully immersive virtual reality (VR) that can support skill learning [ ], and fully immersive VR or augmented reality has a fundamental advantage in delivering ecologically valid game scenarios by recreating a certain level of naturalistic sensory-motor interaction between the user and the graphical user interface [ ]. Such features include but are not limited to multisensory interactivity with haptic feedback in a high-fidelity VR with 3D representations and the presence of avatars [ ]. Given that specially designed and customized VR interventions have been more effective in neurorehabilitation than commercially available VR systems [ ], these features should be more thoroughly investigated regarding their capacity to improve the tailoring of exercises to provide optimal exercise intensity and motor-cognitive task complexity.
The variability in performance progression across participants presents further aspects that need to be investigated. Research should try to identify factors that help explain how individuals differ in their cognitive and motor-cognitive skill acquisition and performance progression. Different factors that may contribute to a steeper progression curve should be investigated through subgroup analyses, which we suggest should include assessing (1) the baseline physical and cognitive state of the individuals [ , , ], (2) age [ , ], (3) adherence, and (4) self-efficacy [ ]. Moreover, game metrics obtained in the "game module" only provide information on physical and cognitive performance in a specific task (game). However, solely analyzing game metrics does not allow for any conclusions on the efficacy or effectiveness of the training as habituation and learning effects cannot be disentangled from "true" physical and motor-cognitive performance improvements. Such "true" performance adaptations would either be assessed using standardized physical, motor, or neuropsychological assessments or be obtained in a separate evaluation module that integrates scientifically validated gamified assessments to regularly verify whether the exercise or training stimulus is sufficient to induce the desired skill-related changes or near- and far-transfer effects. Therefore, future longitudinal studies should also seek to investigate to what extent performance improvements measured using game metrics within the "game" module translate to physical, motor, or cognitive performance enhancements measured using validated assessments and to elucidate factors that promote such transfer effects.
It has been proposed that (adherence to) optimal intensity and complexity of exercises are the driving mechanisms for global neuroplasticity (ie, exercise intensity) and task-specific neuroplasticity (ie, cognitive and motor-cognitive task complexity) [
]. The ability of technology-enhanced training to individually progress exercises according to performance metrics in real time offers a potential benefit over conventional exercises and is considered a key advantage of serious video games (such as exergames or cognitive training games) [ ]. There is initial empirical evidence supporting this assumption, showing that training with adaptive difficulty algorithms may be superior to generic games in improving both cognitive and noncognitive outcomes in individuals with mNCD in the context of computerized cognitive training [ ]. Most studies on exergame-based training [ , ] and technology-supported physical activity interventions in general [ ] also seem to have recognized this potential benefit and individually tailored or progressed the exercises by adjusting the difficulty or complexity of the tasks or games according to the users’ ability or performance [ ]. However, only 42% [ ] or 14% [ ] of studies have reported or provided sufficient details on exercise intensity, whereas details on cognitive demands were neither assessed nor reported in any of the analyzed studies in the review by Manser et al [ ]. Moreover, while a recent review observed that tailored technology-enhanced physical activity interventions (including exergame-based training) resulted in significantly higher adherence than control interventions [ ], a recent systematic review observed that the effects on cognitive performance of exergame-based training in middle-aged to older adults were moderated not by individualization of training (ie, tailored vs generic [one size fits all]) but by other variables (eg, the "body position" variable, favoring stepping movements in a standing position, or the "exercise intensity" variable, favoring moderate exercise intensity) [ ]. Therefore, it remains to be investigated in more detail whether individualization of training moderates the efficacy or effectiveness of technology-enhanced training and how such individualization may be optimally designed to maximize efficacy or effectiveness in exergame-based training [ ], as well as in the broader context of technology-enhanced physical and motor-cognitive training. Ultimately, these investigations should seek to improve the personalization of technology-enhanced interventions by identifying and adopting strategies that facilitate individual learning success and, thus, promote effectiveness in improving cognitive (and physical) performance.
Conclusions
This secondary analysis of 2 RCTs provided valuable insights into exergame metrics. This study demonstrated that a selection of game metrics can serve as valid indicators of in-game domain-specific cognitive performance during exergaming. Moreover, metrics that reflect the precision of responses generally performed better than metrics reflecting the speed of responses, suggesting that precision metrics may be better indicators of cognitive performance and performance progression during exergaming. Therefore, it is recommended to carefully select and prevalidate game metrics for purposes beyond game performance measurement and monitoring, such as informing personalized adjustments of the difficulty and complexity of exergames. Incorporating valid game metrics could increase their value in exergame designs, which, in turn, could increase exergame engagement and effectiveness. Due to the complex nature of exergames, in-game performance may also depend on additional factors such as game design, external conditions, and personal characteristics of the player. Furthermore, the analysis of performance progression revealed high interindividual differences, underlining the importance of personalized training and suggesting the need for further research to explore the characteristics explaining these differences and to identify and adopt strategies that facilitate individual learning success and, thus, promote effectiveness in improving health- or disease-related outcomes.
Acknowledgments
The authors would like to thank all collaborating recruitment partners of the original randomized controlled trials for establishing contact between individuals who had mild neurocognitive disorder and the study team and all participants in this study for their participation and valuable contribution to this project. They would also like to thank all the master’s students in our study team who, in addition to the first author, were involved in the data collection and supervision of the participants, namely, Patricia Groth, Kathrin Rohr, Chiara Bassi, Nadine Decher, Anna Riedler, Enis Ljatifi, Julia Müller, and Julia Czopek-Rowinska. The studies that collected the data analyzed in this study were funded by Dementia Research Switzerland – Synapsis Foundation (grant 2019-PI06) and the Gebauer Foundation and were financially supported by the Dalle Molle Foundation (Quality of life prize 2020). The Senso Flex training systems were provided free of charge by Dividat AG for the duration of these studies. Neither the funders nor Dividat AG played any role in the design of this study, nor did they play any role in the collection, management, analysis, and interpretation of data; writing of the report; or decision to submit the report for publication.
Data Availability
Publicly available datasets for the feasibility study [
] and the effectiveness study [ ] were analyzed in this study. These datasets include the demographic and clinical information analyzed, as well as the raw data for the cognitive assessments. Descriptive statistics for the game metric data are included in this paper and its supplementary files. Further inquiries can be directed to the corresponding author.
Authors' Contributions
WK, PM, and EDdB conceptualized the study. WK was responsible for data management, statistical analysis, and writing the manuscript under the supervision of PM and EDdB. All authors reviewed and revised the manuscript and approved the final version.
Conflicts of Interest
None declared.
CONSORT 2017 checklist for nonpharmacologic treatments and supplementary tables and figures, including descriptive statistics for clinical assessments and game metric data, along with scatterplots of correlation analyses between exergame metrics and neuropsychological assessments.
PDF File (Adobe PDF File), 373 KB
CONSORT-eHEALTH checklist (V 1.6.1).
PDF File (Adobe PDF File), 1086 KB
References
- Risk reduction of cognitive decline and dementia: WHO guidelines. World Health Organization. 2019. URL: https://www.who.int/publications/i/item/risk-reduction-of-cognitive-decline-and-dementia [accessed 2024-04-29]
- Ge S, Zhu Z, Wu B, McConnell E. Technology-based cognitive training and rehabilitation interventions for individuals with mild cognitive impairment: a systematic review. BMC Geriatr. Sep 15, 2018;18(1):213. [FREE Full text] [CrossRef] [Medline]
- Valenzuela T, Okubo Y, Woodbury A, Lord S, Delbaere K. Adherence to technology-based exercise programs in older adults: a systematic review. J Geriatr Phys Ther. 2018;41(1):49-61. [CrossRef] [Medline]
- Chan AT, Ip R, Tran J, Chan JY, Tsoi K. Computerized cognitive training for memory functions in mild cognitive impairment or dementia: a systematic review and meta-analysis. NPJ Digit Med. Jan 03, 2024;7(1):1. [FREE Full text] [CrossRef] [Medline]
- Manser P, Herold F, de Bruin ED. Components of effective exergame-based training to improve cognitive functioning in middle-aged to older adults - a systematic review and meta-analysis. Ageing Res Rev. Aug 2024;99:102385. [FREE Full text] [CrossRef] [Medline]
- Hernandez-Martinez J, Ramos-Espinoza F, Muñoz-Vásquez C, Guzman-Muñoz E, Herrera-Valenzuela T, Branco B, et al. Effects of active exergames on physical performance in older people: an overview of systematic reviews and meta-analysis. Front Public Health. 2024;12:1250299. [FREE Full text] [CrossRef] [Medline]
- Li R, Geng J, Yang R, Ge Y, Hesketh T. Effectiveness of computerized cognitive training in delaying cognitive function decline in people with mild cognitive impairment: systematic review and meta-analysis. J Med Internet Res. Oct 27, 2022;24(10):e38624. [FREE Full text] [CrossRef] [Medline]
- Stojan R, Voelcker-Rehage C. A systematic review on the cognitive benefits and neurophysiological correlates of exergaming in healthy older adults. J Clin Med. May 23, 2019;8(5):63. [FREE Full text] [CrossRef] [Medline]
- Temprado JJ. Can exergames be improved to better enhance behavioral adaptability in older adults? An ecological dynamics perspective. Front Aging Neurosci. 2021;13:670166. [FREE Full text] [CrossRef] [Medline]
- Torre MM, Temprado JJ. A review of combined training studies in older adults according to a new categorization of conventional interventions. Front Aging Neurosci. 2021;13:808539. [FREE Full text] [CrossRef] [Medline]
- Witherspoon L. ACSM Information on Exergaming. American College of Sports Medicine. URL: https://healthysd.gov/wp-content/uploads/2014/11/exergaming.pdf [accessed 2024-04-29]
- Torre MM, Temprado JJ. Effects of exergames on brain and cognition in older adults: a review based on a new categorization of combined training intervention. Front Aging Neurosci. 2022;14:859715. [FREE Full text] [CrossRef] [Medline]
- Gallou-Guyot M, Mandigout S, Bherer L, Perrochon A. Effects of exergames and cognitive-motor dual-task training on cognitive, physical and dual-task functions in cognitively healthy older adults: an overview. Ageing Res Rev. Nov 2020;63:101135. [FREE Full text] [CrossRef] [Medline]
- Jiang J, Guo W, Wang B. Effects of exergaming on executive function of older adults: a systematic review and meta-analysis. PeerJ. 2022;10:e13194. [FREE Full text] [CrossRef] [Medline]
- Cai Z, Ma Y, Li L, Lu GZ. Effects of exergaming in older individuals with mild cognitive impairment and dementia: a systematic review and meta-analysis. Geriatr Nurs. 2023;51:351-359. [FREE Full text] [CrossRef] [Medline]
- Zhao Y, Feng H, Wu X, Du Y, Yang X, Hu M, et al. Effectiveness of exergaming in improving cognitive and physical function in people with mild cognitive impairment or dementia: systematic review. JMIR Serious Games. Jun 30, 2020;8(2):e16841. [FREE Full text] [CrossRef] [Medline]
- Swinnen N, Vandenbulcke M, de Bruin ED, Akkerman R, Stubbs B, Firth J, et al. The efficacy of exergaming in people with major neurocognitive disorder residing in long-term care facilities: a pilot randomized controlled trial. Alzheimers Res Ther. Mar 30, 2021;13(1):70. [FREE Full text] [CrossRef] [Medline]
- American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders (DSM-5-TR). Washington, DC. American Psychiatric Association; 2013.
- Kasper S, Bancher C, Eckert A, Förstl H, Frölich L, Hort J, et al. Management of mild cognitive impairment (MCI): the need for national and international guidelines. World J Biol Psychiatry. Oct 2020;21(8):579-594. [FREE Full text] [CrossRef] [Medline]
- Lissek V, Suchan B. Preventing dementia? Interventional approaches in mild cognitive impairment. Neurosci Biobehav Rev. Mar 2021;122:143-164. [FREE Full text] [CrossRef] [Medline]
- Livingston G, Huntley J, Liu KY, Costafreda SG, Selbæk G, Alladi S, et al. Dementia prevention, intervention, and care: 2024 report of the Lancet standing Commission. Lancet. Aug 10, 2024;404(10452):572-628. [FREE Full text] [CrossRef] [Medline]
- Izquierdo M, de Souto Barreto P, Arai H, Bischoff-Ferrari H, Cadore E, Cesari M, et al. Global consensus on optimal exercise recommendations for enhancing healthy longevity in older adults (ICFSR). J Nutr Health Aging. Jan 2025;29(1):100401. [FREE Full text] [CrossRef] [Medline]
- Rodakowski J, Saghafi E, Butters MA, Skidmore ER. Non-pharmacological interventions for adults with mild cognitive impairment and early stage dementia: an updated scoping review. Mol Aspects Med. 2015;43-44:38-53. [FREE Full text] [CrossRef] [Medline]
- Yang C, Moore A, Mpofu E, Dorstyn D, Li Q, Yin C. Effectiveness of combined cognitive and physical interventions to enhance functioning in older adults with mild cognitive impairment: a systematic review of randomized controlled trials. Gerontologist. Nov 23, 2020;60(8):633-642. [FREE Full text] [CrossRef] [Medline]
- Meng Q, Yin H, Wang S, Shang B, Meng X, Yan M, et al. The effect of combined cognitive intervention and physical exercise on cognitive function in older adults with mild cognitive impairment: a meta-analysis of randomized controlled trials. Aging Clin Exp Res. Feb 2022;34(2):261-276. [FREE Full text] [CrossRef] [Medline]
- Gómez-Soria I, Marin-Puyalto J, Peralta-Marrupe P, Latorre E, Calatayud E. Effects of multi-component non-pharmacological interventions on cognition in participants with mild cognitive impairment: a systematic review and meta-analysis. Arch Gerontol Geriatr. 2022;103:104751. [FREE Full text] [CrossRef] [Medline]
- Gavelin HM, Dong C, Minkov R, Bahar-Fuchs A, Ellis KA, Lautenschlager N, et al. Combined physical and cognitive training for older adults with and without cognitive impairment: a systematic review and network meta-analysis of randomized controlled trials. Ageing Res Rev. Mar 2021;66:101232. [FREE Full text] [CrossRef] [Medline]
- Barnes DE, Yaffe K. The projected effect of risk factor reduction on Alzheimer's disease prevalence. Lancet Neurol. Sep 2011;10(9):819-828. [FREE Full text] [CrossRef] [Medline]
- Livingston G, Huntley J, Sommerlad A, Ames D, Ballard C, Banerjee S, et al. Dementia prevention, intervention, and care: 2020 report of the Lancet Commission. Lancet. Aug 2020;396(10248):413-446. [FREE Full text] [CrossRef]
- Herold F, Hamacher D, Schega L, Müller NG. Thinking while moving or moving while thinking - concepts of motor-cognitive training for cognitive performance enhancement. Front Aging Neurosci. 2018;10:228. [FREE Full text] [CrossRef] [Medline]
- Lu Y, Bu FQ, Wang F, Liu L, Zhang S, Wang G, et al. Recent advances on the molecular mechanisms of exercise-induced improvements of cognitive dysfunction. Transl Neurodegener. Feb 27, 2023;12(1):9. [FREE Full text] [CrossRef] [Medline]
- Huuha AM, Norevik CS, Moreira JB, Kobro-Flatmoen A, Scrimgeour N, Kivipelto M, et al. Can exercise training teach us how to treat Alzheimer's disease? Ageing Res Rev. Mar 2022;75:101559. [FREE Full text] [CrossRef] [Medline]
- Smith PJ. Pathways of prevention: a scoping review of dietary and exercise interventions for neurocognition. Brain Plast. Dec 26, 2019;5(1):3-38. [FREE Full text] [CrossRef] [Medline]
- Kivipelto M, Mangialasche F, Ngandu T. Lifestyle interventions to prevent cognitive impairment, dementia and Alzheimer disease. Nat Rev Neurol. Nov 2018;14(11):653-666. [FREE Full text] [CrossRef] [Medline]
- Wang YY, Wang XX, Chen L, Liu Y, Li YR. A systematic review and network meta-analysis comparing various non-pharmacological treatments for older people with mild cognitive impairment. Asian J Psychiatr. Aug 2023;86:103635. [FREE Full text] [CrossRef] [Medline]
- Wang YQ, Jia RX, Liang JH, Li J, Qian S, Li JY, et al. Effects of non-pharmacological therapies for people with mild cognitive impairment. A Bayesian network meta-analysis. Int J Geriatr Psychiatry. Jun 2020;35(6):591-600. [FREE Full text] [CrossRef] [Medline]
- Fabel K, Kempermann G. Physical activity and the regulation of neurogenesis in the adult and aging brain. Neuromolecular Med. 2008;10(2):59-66. [FREE Full text] [CrossRef] [Medline]
- Kempermann G, Fabel K, Ehninger D, Babu H, Leal-Galicia P, Garthe A, et al. Why and how physical activity promotes experience-induced brain plasticity. Front Neurosci. 2010;4:189. [FREE Full text] [CrossRef] [Medline]
- Veronese N, Soysal P, Demurtas J, Solmi M, Bruyère O, Christodoulou N, Alzheimer Europe, European College of Neuropsychopharmacology, European Geriatric Medicine Society (Lead Society), European Interdisciplinary Council on Ageing, European Society of ClinicalEconomic Aspects of OsteoporosisOsteoarthritis, International Association of Gerontology and Geriatrics-European Region, Scottish Brain Health ARC, World Psychiatry Association-Preventive Psychiatry Section, et al. endorsed by the European Academy of Neurology. Physical activity and exercise for the prevention and management of mild cognitive impairment and dementia: a collaborative international guideline. Eur Geriatr Med. Oct 2023;14(5):925-952. [FREE Full text] [CrossRef] [Medline]
- Voinescu A, Papaioannou T, Petrini K, Stanton Fraser D. Exergaming for dementia and mild cognitive impairment. Cochrane Database Syst Rev. Sep 25, 2024;9(9):CD013853. [FREE Full text] [CrossRef] [Medline]
- Chan JY, Liu J, Chan AT, Tsoi KK. Exergaming and cognitive functions in people with mild cognitive impairment and dementia: a meta-analysis. NPJ Digit Med. Jun 15, 2024;7(1):CD013853. [CrossRef]
- Cai Z, Ma Y, Li L, Lu G. Effects of exergaming in older individuals with mild cognitive impairment and dementia: a systematic review and meta-analysis. Geriatr Nurs. 2023;51:351-359. [FREE Full text] [CrossRef] [Medline]
- Marques LM, Uchida PM, Barbosa SP. The impact of Exergames on emotional experience: a systematic review. Front Public Health. Sep 7, 2023;11:1209520. [FREE Full text] [CrossRef] [Medline]
- Marshall J, Linehan C. Are exergames exercise? A scoping review of the short-term effects of exertion games. IEEE Trans. Games. Jun 2021;13(2):160-169. [FREE Full text] [CrossRef]
- Tan X, Wang K, Sun W, Li X, Wang W, Tian F. A review of recent advances in cognitive-motor dual-tasking for Parkinson’s disease rehabilitation. Sensors. Sep 30, 2024;24(19):6353. [FREE Full text] [CrossRef]
- Dubbeldam R, Stemplewski R, Pavlova J, Cyma-Wejchenig M, Lee S, Esser P, et al. Technology-assisted physical activity interventions for older people in their home-based environment: a scoping review (Preprint). JMIR Preprints. Preprint posted online August 25, 2024. [FREE Full text] [CrossRef]
- Rüth M, Schmelzer M, Burtniak K, Kaspar K. Commercial exergames for rehabilitation of physical health and quality of life: a systematic review of randomized controlled trials with adults in unsupervised home environments. Front Psychol. Jun 2, 2023;14:1155569. [FREE Full text] [CrossRef] [Medline]
- Perrot A, Maillot P. Factors for optimizing intervention programs for cognition in older adults: the value of exergames. NPJ Aging. Mar 29, 2023;9(1):4. [FREE Full text] [CrossRef]
- Sajjadi P, Ewais A, De Troyer O. Individualization in serious games: a systematic review of the literature on the aspects of the players to adapt to. Entertain Comput. Mar 2022;41:100468. [FREE Full text] [CrossRef]
- Sinclair J, Hingston P, Masek M. Considerations for the design of exergames. In: Proceedings of the 5th International Conference on Computer Graphics and Interactive Techniques in Australia and Southeast Asia. 2007. Presented at: GRAPHITE '07; December 1-4, 2007:289-295; Perth, Australia. URL: https://dl.acm.org/doi/10.1145/1321261.1321313 [CrossRef]
- Bamparopoulos G, Konstantinidis E, Bratsas C, Bamidis PD. Towards exergaming commons: composing the exergame ontology for publishing open game data. J Biomed Semantics. 2016;7:4. [FREE Full text] [CrossRef] [Medline]
- Serrano-Laguna Á, Martínez-Ortiz I, Haag J, Regan D, Johnson A, Fernández-Manjón B. Applying standards to systematize learning analytics in serious games. Comput Stand Interfaces. 2017:116-123. [FREE Full text] [CrossRef]
- Petsani D, Konstantinidis EI, Zilidou VI, Bamidis PD. Exploring health profiles from physical and cognitive serious game analytics. In: Proceedings of the 2nd International Conference on Technology and Innovation in Sports, Health and Wellbeing. 2018. Presented at: TISHW '18; June 20-22, 2018:1-6; Thessaloniki, Greece. URL: https://ieeexplore.ieee.org/document/8559562 [CrossRef]
- Petsani D, Konstantinidis E, Katsouli AM, Zilidou V, Dias S, Hadjileontiadis L, et al. Digital biomarkers for well-being through exergame interactions: exploratory study. JMIR Serious Games. Sep 13, 2022;10(3):e34768. [FREE Full text] [CrossRef] [Medline]
- Sevcenko N, Ninaus M, Wortha F, Moeller K, Gerjets P. Measuring cognitive load using in-game metrics of a serious simulation game. Front Psychol. 2021;12:572437. [FREE Full text] [CrossRef] [Medline]
- Konstantinidis EI, Bamidis PD, Billis A, Kartsidis P, Petsani D, Papageorgiou SG. Physical training in-game metrics for cognitive assessment: evidence from extended trials with the fitforall exergaming platform. Sensors (Basel). Aug 26, 2021;21(17):5756. [FREE Full text] [CrossRef] [Medline]
- Guimarães V, Sousa I, de Bruin ED, Pais J, Correia M. Using shoe-mounted inertial sensors and stepping exergames to assess the motor-cognitive status of older adults: a correlational study. Digit Health. 2023;9:20552076231167001. [FREE Full text] [CrossRef] [Medline]
- Manser P, Poikonen H, de Bruin ED. Feasibility, usability, and acceptance of "Brain-IT"-a newly developed exergame-based training concept for the secondary prevention of mild neurocognitive disorder: a pilot randomized controlled trial. Front Aging Neurosci. 2023;15:1163388. [FREE Full text] [CrossRef] [Medline]
- Manser P, de Bruin ED. "Brain-IT": exergame training with biofeedback breathing in neurocognitive disorders. Alzheimers Dement. Jul 2024;20(7):4747-4764. [FREE Full text] [CrossRef] [Medline]
- Targeting the brain using information technology for secondary prevention of mild neurocognitive disorder. Patrick Manser. URL: https://www.patrick-manser.com/projects/4278-brain-it-08-2020-04-2024 [accessed 2025-05-29]
- Manser P, de Bruin ED. Making the best out of IT: design and development of exergames for older adults with mild neurocognitive disorder - a methodological paper. Front Aging Neurosci. 2021;13:734012. [FREE Full text] [CrossRef] [Medline]
- Manser P, Adcock-Omlin M, de Bruin ED. Design considerations for an exergame-based training intervention for older adults with mild neurocognitive disorder: qualitative study including focus groups with experts and health care professionals and individual semistructured in-depth patient interviews. JMIR Serious Games. Jan 05, 2023;11:e37616. [FREE Full text] [CrossRef] [Medline]
- Manser P, Michels L, Schmidt A, Barinka F, de Bruin ED. Effectiveness of an individualized exergame-based motor-cognitive training concept targeted to improve cognitive functioning in older adults with mild neurocognitive disorder: study protocol for a randomized controlled trial. JMIR Res Protoc. Feb 06, 2023;12:e41173. [FREE Full text] [CrossRef] [Medline]
- ICD-11 international classificiation of disease 11th revision the global standard for diagnostic health information. World Health Organization. URL: https://icd.who.int/en [accessed 2020-07-20]
- American Psychiatric Association. Diagnostic and Statistical Manual Of Mental Disorders: DSM-5™, 5th edition. Washington, DC. American Psychiatric Association; 2013.
- Manser P, de Bruin ED. Diagnostic accuracy, reliability, and construct validity of the German quick mild cognitive impairment screen. BMC Geriatr. Jul 18, 2024;24(1):613. [FREE Full text] [CrossRef] [Medline]
- O’Caoimh R, Molloy DW. The quick mild cognitive impairment screen (Qmci). In: Larner AJ, editor. Cognitive Screening Instruments: A Practical Approach. Cham, Switzerland. Springer; 2015:255-272.
- O'Caoimh R, Gao Y, Svendovski A, Gallagher P, Eustace J, Molloy D. Comparing approaches to optimize cut-off scores for short cognitive screening instruments in mild cognitive impairment and dementia. J Alzheimers Dis. Feb 10, 2017;57(1):123-133. [FREE Full text] [CrossRef] [Medline]
- Alles über den Dividat Senso. Dividat Senso. URL: https://dividat.com/senso [accessed 2024-04-29]
- Sachdev PS, Blacker D, Blazer D, Ganguli M, Jeste D, Paulsen JS, et al. Classifying neurocognitive disorders: the DSM-5 approach. Nat Rev Neurol. Nov 2014;10(11):634-642. [FREE Full text] [CrossRef] [Medline]
- Manser P, de Bruin ED. Test-retest reliability and validity of vagally-mediated heart rate variability to monitor internal training load in older adults: a within-subjects (repeated-measures) randomized study. BMC Sports Sci Med Rehabil. Jun 27, 2024;16(1):141. [FREE Full text] [CrossRef] [Medline]
- Huber SK, Manser P, de Bruin ED. PEMOCS: theory derivation of a concept for PErsonalized MOtor-Cognitive exergame training in chronic Stroke-a methodological paper with an application example. Front Sports Act Living. 2024;6:1397949. [FREE Full text] [CrossRef] [Medline]
- Gentile AM. Skill acquisition: action, movement, and neuromotor processes. In: Carr JH, Shepherd RD, editors. Movement Science: Foundations for Physical Therapy in Rehabilitation. Rockville, MA. Aspen Publishers; 2000:111-187.
- Fitts PM. Human Performance. New York, NY. Praeger Publishers; 1967.
- VanLehn K. Cognitive skill acquisition. Annu Rev Psychol. 1996;47:513-539. [FREE Full text] [CrossRef] [Medline]
- Slade SC, Dionne CE, Underwood M, Buchbinder R. Consensus on exercise reporting template (CERT): explanation and elaboration statement. Br J Sports Med. Dec 2016;50(23):1428-1437. [FREE Full text] [CrossRef] [Medline]
- O'Caoimh R, Gao Y, McGlade C, Healy L, Gallagher P, Timmons S, et al. The quick mild cognitive impairment (Qmci) screen: a new screening tool for mild cognitive impairment. Ir J Med Sci. 2012;181:S228-S229. [CrossRef]
- O’Caoimh R, Molloy DW. The quick mild cognitive impairment screen (Qmci). In: Larner AJ, editor. Cognitive Screening Instruments: A Practical Approach. Cham, Switzerland. Springer; 2017:255-272.
- Petermann F, Lepach A. Wechsler memory scale® – fourth edition. Testzentrale. 2012. URL: https://www.testzentrale.de/shop/wechsler-memory-scaler-fourth-edition.html [accessed 2024-04-29]
- Wechsler D. Wechsler memory scale. 4th edition. Pearson. URL: https://www.pearsonassessments.com/en-us/Store/Professional-Assessments/Cognition-&-Neuro/Wechsler-Memory-Scale-%7C-Fourth-Edition/p/100000281 [accessed 2024-04-29]
- Mueller ST, Piper BJ. The psychology experiment building language (PEBL) and PEBL test battery. J Neurosci Methods. Jan 30, 2014;222:250-259. [FREE Full text] [CrossRef] [Medline]
- Menzel-Begemann A. Testverfahren zur Erfassung der Planungsfähigkeit im Alltag. Testzentrale. URL: https://www.testzentrale.de/shop/handlungsorganisation-und-tagesplanung.html [accessed 2024-04-29]
- Pflüger M, Gschwandtner U. Testbatterie zur Aufmerksamkeitsprüfung (TAP) Version 1.7. Z Klin Psychol Psychother. Apr 1, 2003;32(2):155-157. [CrossRef]
- Pflüger M, Gschwandtner U. Testbatterie zur Aufmerksamkeitsprüfung (TAP) Version 1.7. Z Klin Psychol Psychother. Apr 1, 2003;32(2):155-157. [FREE Full text] [CrossRef]
- Field A, Miles J, Field Z. Discovering Statistics Using R. Thousand Oaks, CA. Sage Publications; 2012.
- Streiner DL, Norman GR, Cairney J. Health Measurement Scales: A Practical Guide to Their Development and Use. Oxford, UK. Oxford Academic Press; 2015.
- Cohen J. Statistical power analysis. Curr Dir Psychol Sci. Jun 01, 1992;1(3):98-101. [CrossRef]
- Armstrong RA. When to use the Bonferroni correction. Ophthalmic Physiol Opt. Sep 2014;34(5):502-508. [FREE Full text] [CrossRef] [Medline]
- Sullivan GM, Feinn RS. Facts and fictions about handling multiple comparisons. J Grad Med Educ. Aug 2021;13(4):457-460. [FREE Full text] [CrossRef] [Medline]
- Faul F, Erdfelder E, Buchner A, Lang A. Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses. Behav Res Methods. Nov 2009;41(4):1149-1160. [FREE Full text] [CrossRef]
- Faul F, Erdfelder E, Lang A, Buchner A. G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav Res Methods. May 2007;39(2):175-191. [FREE Full text] [CrossRef] [Medline]
- Anderson DI, Lohse KR, Lopes TC, Williams AM. Individual differences in motor skill learning: past, present and future. Hum Mov Sci. Aug 2021;78:102818. [FREE Full text] [CrossRef] [Medline]
- Jacoby WG. Loess: a nonparametric, graphical tool for depicting relationships between variables. Elect Stud. Dec 2000;19(4):577-613. [FREE Full text] [CrossRef]
- Litz E, Ball C, Jansen C, Werner C, de Bruin E, Hauer K. Validation of a motor-cognitive assessment for a stepping exergame in older adults: use of game-specific, internal data stream. Games Health J. Apr 2020;9(2):95-107. [FREE Full text] [CrossRef] [Medline]
- Wulf G, Lewthwaite R. Optimizing performance through intrinsic motivation and attention for learning: the OPTIMAL theory of motor learning. Psychon Bull Rev. Oct 29, 2016;23(5):1382-1414. [CrossRef] [Medline]
- Julayanont P, Nasreddine ZS. Montreal cognitive assessment (MoCA): concept and clinical review. In: Larner AJ, editor. Cognitive Screening Instruments: A Practical Approach. Cham, Switzerland. Springer; 2017:139-195.
- Arevalo-Rodriguez I, Smailagic N, Roqué I Figuls M, Ciapponi A, Sanchez-Perez E, Giannakou A, et al. Mini-Mental State Examination (MMSE) for the detection of Alzheimer's disease and other dementias in people with mild cognitive impairment (MCI). Cochrane Database Syst Rev. Mar 05, 2015;2015(3):CD010783. [FREE Full text] [CrossRef] [Medline]
- Streicher A, Smeddinck JD. Personalized and adaptive serious games. In: Proceedings of the 2015 Conference on Entertainment Computing and Serious Games. 2015. Presented at: ECSG '15; July 5-10, 2015:332-377; Dagstuhl Castle, Germany. URL: https://link.springer.com/chapter/10.1007/978-3-319-46152-6_14 [CrossRef]
- Schmidt RA, Lee TD, Winstein C, Wulf G, Zelaznik H. Motor Control and Learning: A Behavioral Emphasis. 5th edition. New York, NY. Human Kinetics; 2018.
- Newell A, Rosenbloom PS. Mechanisms of skill acquisition and the law of practice. In: Anderson JR, editor. Cognitive Skills and Their Acquisition. New York, NY. Psychology Press; 2013:1-55.
- Gray WD, Lindstedt JK. Plateaus, dips, and leaps: where to look for inventions and discoveries during skilled performance. Cogn Sci. Sep 2017;41(7):1838-1870. [FREE Full text] [CrossRef] [Medline]
- Haider H, Frensch PA. Why aggregated learning follows the power law of practice when individual learning does not: Comment on Rickard (1997, 1999), Delaney et al. (1998), and Palmeri (1999). J Exp Psychol Learn Mem Cogn. 2002;28(2):392-406. [CrossRef]
- Papp KV, Snyder PJ, Maruff P, Bartkowiak J, Pietrzak R. Detecting subtle changes in visuospatial executive function and learning in the amnestic variant of mild cognitive impairment. PLoS One. 2011;6(7):e21688. [FREE Full text] [CrossRef] [Medline]
- Bonnechère B, Bier J, Van Hove O, Sheldon S, Samadoulougou S, Kirakoya-Samadoulougou F, et al. Age-associated capacity to progress when playing cognitive mobile games: ecological retrospective observational study. JMIR Serious Games. Jun 12, 2020;8(2):e17121. [CrossRef] [Medline]
- Fernandez-Ballesteros R, Botella J, Zamarron MD, Cabras E, Molina MA, Schettini R, Tarraga L, et al. Cognitive plasticity in normal and pathological aging. Clin Interv Aging. Jan 2012:15. [CrossRef]
- Netz Y. Is there a preferred mode of exercise for cognition enhancement in older age?: a narrative review. Front Med (Lausanne). 2019;6:57. [FREE Full text] [CrossRef] [Medline]
- Pope AT, Stephens CL, Gilleade K. Biocybernetic adaptation as biofeedback training method. In: Fairclough SH, Gilleade K, editors. Advances in Physiological Computing. Cham, Switzerland. Springer; 2014:91-115.
- Fairclough S. A closed-loop perspective on symbiotic human-computer interaction. In: Proceedings of the 4th International Conference on Symbiotic Interaction. 2015. Presented at: Symbiotic '15; October 7-8, 2015:57-67; Berlin, Germany. URL: https://link.springer.com/chapter/10.1007/978-3-319-24917-9_6 [CrossRef]
- Muñoz J, Gouveia E, Cameirão M, Bermúdez i Badia S. The biocybernetic loop engine: an integrated tool for creating physiologically adaptive videogames. In: Proceedings of the 4th International Conference on Physiological Computing Systems. 2017. Presented at: PCS '17; July 27-28, 2017:45-47; Madrid, Spain. URL: https://www.scitepress.org/Link.aspx?doi=10.5220/0006429800450054 [CrossRef]
- Tusher HM, Mallam S, Nazir S. A systematic review of virtual reality features for skill training. Tech Know Learn. Jan 05, 2024;29(2):843-878. [FREE Full text] [CrossRef]
- Tieri G, Morone G, Paolucci S, Iosa M. Virtual reality in cognitive and motor rehabilitation: facts, fiction and fallacies. Expert Rev Med Devices. Feb 2018;15(2):107-117. [FREE Full text] [CrossRef] [Medline]
- Voinescu A, Sui J, Stanton Fraser D. Virtual reality in neurorehabilitation: an umbrella review of meta-analyses. J Clin Med. Apr 02, 2021;10(7):23. [FREE Full text] [CrossRef] [Medline]
- Sokolov AA, Collignon A, Bieler-Aeschlimann M. Serious video games and virtual reality for prevention and neurorehabilitation of cognitive decline because of aging and neurodegeneration. Curr Opin Neurol. Apr 2020;33(2):239-248. [FREE Full text] [CrossRef] [Medline]
- Bahar-Fuchs A, Webb S, Bartsch L, Clare L, Rebok G, Cherbuin N, et al. Tailored and adaptive computerized cognitive training in older adults at risk for dementia: a randomized controlled trial. J Alzheimers Dis. Oct 03, 2017;60(3):889-911. [FREE Full text] [CrossRef] [Medline]
- Maier M, Rubio Ballester B, Duff A, Duarte Oller E, Verschure PF. Effect of specific over nonspecific VR-based rehabilitation on poststroke motor recovery: a systematic meta-analysis. Neurorehabil Neural Repair. Feb 2019;33(2):112-129. [FREE Full text] [CrossRef] [Medline]
- Data for project 'feasibility, usability and acceptance of a newly developed exergame-based training concept for older adults with mild neurocognitive disorder - a pilot randomized controlled trial'. zenodo. URL: https://zenodo.org/records/7428378 [accessed 2024-04-29]
- Data for Project ''Brain-IT' - exergame training with biofeedback breathing in neurocognitive disorders. zenodo. URL: https://zenodo.org/records/10695988 [accessed 2024-04-29]
Abbreviations
CONSORT: Consolidated Standards of Reporting Trials
ES: effectiveness study
FS: feasibility study
MMSE: Mini-Mental State Examination
mNCD: mild neurocognitive disorder
MoCA: Montreal Cognitive Assessment
PEBL: Psychology Experiment Building Language
RCT: randomized controlled trial
VR: virtual reality
Edited by A Coristine; submitted 28.08.24; peer-reviewed by P Makmee, X Cheng; comments to author 18.12.24; revised version received 22.01.25; accepted 12.03.25; published 21.05.25.
Copyright©Wanda Kaiser, Eling D de Bruin, Patrick Manser. Originally published in JMIR Serious Games (https://games.jmir.org), 21.05.2025.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Serious Games, is properly cited. The complete bibliographic information, a link to the original publication on https://games.jmir.org, as well as this copyright and license information must be included.