Who will take responsibility for AI errors? Interview with Daniela Vacek on AI ethics and responsibility

On 11/12/2025, the journal Akadémia (SAS News) published an interview with Daniela Vacek from the Institute of Philosophy of the Slovak Academy of Sciences, who researches the ethics and responsibility of artificial intelligence – from autonomous cars to AI therapists to digital avatars.

The interview explains how the perception of AI is shifting from that of a “tool” to that of a system to which users attribute human characteristics and roles (e.g. therapist, counselor, friend, or partner), and what ethical consequences this shift brings.

The content focuses mainly on the following areas:

  • Liability in autonomous systems: the question of who is liable if an AI system causes damage (e.g. in self-driving cars).
  • AI roles in close relationships: risks and possible short-term benefits of AI in partnership and companionship roles, as well as in assistant or advisor roles, especially on sensitive topics.
  • AI and young users: the interview mentions the Character.AI platform and its operator, Character Technologies Inc., in connection with the planned restriction of access for users under 18 years of age.
  • AI avatars and the “digital double”: possible scenarios in which an avatar replaces a human, and questions of control, deviation from the person's intentions, and responsibility.
  • Credibility and responsible AI in practice: emphasis on the role of stakeholders in the development, deployment and use of AI.

In her answers, Vacek offers an expert perspective on responsibility and the ethical impacts of AI in situations where real harm (both individual and societal) can occur, and where the way people interact with AI, and the expectations they place on it, are changing at the same time.