User Acceptance as a Key Factor for the Development and Risk Management of an AI-Based Robot Arm Control

Olga Kozlova, Annalies Baumeister, Sebastian Reutzel, Elizaveta Gardo, Beatrix-Patrizia Tolle

Abstract


Purpose User acceptance plays an important role in the risk management of medical products, as it can have a significant impact on the safety and effectiveness of a product. Usability issues must be carefully considered in risk management in accordance with the requirements of IEC 62366 (Questions Catalog 2023). This is even more important for medical products that use artificial intelligence (AI) and for the development of such products. This paper aims to show how user acceptance meaningfully influenced the development process and risk management of an AI-based control for a robot arm in the research project DoF-Adaptiv. Currently, three robot arms are available for people with severe physical impairments in Germany. These robotic arms, such as the Jaco from Kinova, are mounted on an electric wheelchair, and users control them in the same way they control their wheelchair, typically with a joystick. To make the robot arm more accessible, an AI-based control using smart glasses could be a solution (Baumeister et al. 2022). However, using AI raises new ethical and social implications that could influence the user's acceptance of the new control.

Method In the DoF-Adaptiv project, future users were involved in the development and evaluation through workshops and interviews. This included qualitative interviews with the target groups to gain a deeper understanding of their needs and activities. Additionally, ethics workshops were conducted. The project also employed "personas", which were initially created together with future users and later discussed in the context of the risk management workshops. The internal risk management workshops were significantly influenced by insights from previous studies involving future users within the DoF-Adaptiv project. This participatory approach yielded important insights into the development and evaluation of AI applications in healthcare.

Results and Discussion While AI is gaining significance in medicine, the current legal framework for AI medical products is not yet sufficiently established. The risk management workshops and the ethics workshops with future users provided valuable insights, including the identification of acceptance issues as a specific risk associated with AI medical products. Acceptance issues were particularly relevant during the risk management workshops because AI medical products are new to users. The comprehensibility of the instructions for use, users' trust, and the correct perception and interpretation of results were the focus of attention. There was an in-depth discussion about the importance of clarity regarding what lies behind the term AI, which types of AI exist, and what this means in each specific case. It was concluded that clearly defining the term AI is of great importance to avoid misunderstandings and promote user acceptance. Precise communication among manufacturers, regulatory authorities, and other stakeholders is necessary to ensure effective and transparent risk assessment of AI medical products. The use of the developed personas proved to be a suitable method for evaluating user acceptance from a risk management perspective. This insight could make a significant contribution to risk management in the context of AI medical products.


References

Baumeister, A., Pascher, M., Shivashankar, Y., Goldau, F., Frese, U., Gerken, J., Gardó, E., Klein, B., Tolle, B. (2022): AI for Simplifying the Use of an Assistive Robotic Arm for People with Severe Body Impairments. 13th World Conference of Gerontechnology (ISG 2022), Daegu, Korea. DOI: 10.13140/RG.2.2.13860.35208/3

Baumeister, A., Kozlova, O., Gardo, E., Tolle, P. (2024): Nutzer*innen Akzeptanz als Erfolgsfaktor und Risiko bei KI-Anwendungen im Bereich Healthcare. In: Klein, B., Rägle, S., & Klüber, S. (Eds.): Künstliche Intelligenz im Healthcare-Sektor.

Questions Catalog (2023): Artificial Intelligence in Medical Devices: Question Catalog. Online: https://www.ig-nb.de/?tx_epxelo_file[id]=861877&cHash=5c3e1b0889ef5e017c2ff309f4ea82a8 (accessed February 18, 2024).


Keywords: user acceptance, user involvement, medical device regulation, risk management, AI medical product

Affiliation: Sciencenter FUTURE AGING, Frankfurt University of Applied Sciences, Germany

Corresponding Author Email: olga.kozlova@fb4.fra-uas.de

Authors’ ORCID IDs: 0009-0008-7426-3002 (Kozlova), 0009-0002-1007-7549 (Baumeister)
