How to Measure the Effect of Reminiscence Group Therapy on Mute People with Dementia?: A Trial using a Facial Emotion Recognition Method

Liu Xiangyu, Ryuji Yamazaki, Hiroko Kase

Abstract


Purpose Reminiscence group therapy (RGT) is widely used as a non-pharmacological treatment for dementia. In previous studies, questionnaires and laboratory tests have been used to verify the effects of the therapy. Facial expression recognition tools make it possible to perform this analysis in a non-contact, objective, and efficient way. In our study, we applied this approach in place of linguistic analysis for one tracheotomized (mute) participant among eight people with dementia (PWDs) receiving RGT. This study investigated whether RGT positively affected the mute PWD based on non-verbal information and evaluated the role of facial expression recognition tools in human-human and human-robot RGT groups.

Method The RGT intervention experiment was conducted in a nursing home in Tokyo. With the help of staff, we assigned the eight participants to two groups (human-human RGT group (G1): N = 4; human-robot RGT group (G2): N = 4). We modified a Pepper robot with a Raspberry Pi single-board computer so that an operator could teleoperate the robot from a tablet, show pictures on the display screen on its chest, speak through its front speakers, and listen to participants through a headset connected to the tablet (Fig 1). From December 4th, 2020, to January 12th, 2021, both G1 (age M = 85.75, SD = 3.7), the face-to-face RGT group that included the mute PWD (MMSE = 18), and G2 (age M = 89, SD = 2.73), the Pepper RGT group, took part in six RGT sessions (20 minutes each). Every session started with greetings and a self-introduction, allowing participants to remember each other's names. Participants were asked to interact with the pictures shown on the Pepper robot's screen. The Emo-Rec application is a facial emotion recognition tool powered by deep learning.
It can quickly estimate how closely the emotion of a person in a picture or video matches each basic emotion, expressed as percentages, for example 20% happy and 70% sad (Liu, 2021). Multiplying the happiness rate by 1, the sadness rate by 20, and the rate of each remaining emotion by 10, then summing the results, yields the Emo-Rec point, which lies in the same range as the face scale. The face scale (1-20) has been widely used to quantify emotions since it was first proposed. To obtain the participants' emotional points, we recorded all RGT sessions with a video camera. Following the sampling method adopted by the time-varying change point model to reflect changes in overall sentiment (Albers & Bringmann, 2020), we randomly selected nine time points per session using SPSS (version 12 compatibility mode). We then used the Emo-Rec application to score the nine sampled video frames and examined whether the participants' Emo-Rec points dropped during the experiment; on the face scale, lower values indicate more positive emotion. This study complied with the requirements of the Waseda Ethical Review Committee (approval code: 2019-328).

Results and Discussion The results show a steady decrease in the Emo-Rec points of both groups during RGT. The Emo-Rec score of the mute PWD decreased from 11.76 to 8.20 (Fig 2). Over the course of the RGT, the robot group showed a sharper downward trend than the human group, and the mute PWD also showed a clear downward trend in Emo-Rec points. Both groups of participants showed noticeable emotional improvement, and the improvement was more prominent in the mute PWD and the robot group. We hope that future research will use humanoid robots to automatically evaluate RGT participants, including those who cannot speak.
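The scoring procedure described in the Method can be sketched in Python. This is a minimal illustration, not the Emo-Rec implementation: the emotion labels, the weight table, and all function names are our own assumptions; only the weights (happiness x 1, sadness x 20, other emotions x 10) and the choice of nine random time points per session come from the text.

```python
import random

# Assumed weight table mapping Emo-Rec emotion probabilities onto the
# face scale (1 = most positive, 20 = most negative), per the Method:
# happiness x 1, sadness x 20, every remaining emotion x 10.
WEIGHTS = {"happy": 1, "sad": 20, "angry": 10, "surprised": 10,
           "fearful": 10, "disgusted": 10, "neutral": 10}

def emo_rec_point(probs):
    """Weighted sum of emotion probabilities -> face-scale point (1-20)."""
    return sum(probs.get(emotion, 0.0) * w for emotion, w in WEIGHTS.items())

def sample_time_points(session_seconds, n=9, seed=None):
    """Randomly pick n distinct time points (in seconds) within a session."""
    rng = random.Random(seed)
    return sorted(rng.sample(range(session_seconds), n))

# The example from the text: 20% happy, 70% sad (assume 10% neutral)
# -> 0.2 * 1 + 0.7 * 20 + 0.1 * 10 = 15.2 on the face scale.
print(round(emo_rec_point({"happy": 0.2, "sad": 0.7, "neutral": 0.1}), 2))

# Nine sampling points for one 20-minute (1200 s) session.
print(sample_time_points(1200, n=9, seed=42))
```

Because happiness carries the smallest weight, a frame dominated by happiness yields a point near 1, so a decreasing Emo-Rec point over sessions corresponds to improving mood, consistent with the Results.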

