New three-year research program – Hasler Responsible AI

The following projects have been selected for funding through the Responsible AI program:

Scientist | Institution | Project Name
Prof. Benjamin F. Grewe | ETH Zürich | Learning to Learn Safely
Prof. Dr. Martin Raubal | ETH Zürich | Interpretable and Robust Machine Learning for Mobility Analysis
Prof. Volkan Cevher | EPFL | Mathematical Foundations for RISE of AI
Prof. François Fleuret | University of Geneva | Interpretability, Safety, and Efficiency through Representation Disentanglement (ISER)
Prof. Fanny Yang | ETH Zürich | Interpretable Predictions for Medical Imaging Diagnostics
Dr. Antti Hyvärinen | USI | Formal Reasoning on Neural Networks
Dr. Sébastien Marcel | IDIAP | reSponsible fAir FacE Recognition (SAFER)
Prof. Mrinmaya Sachan | ETH Zürich | AI for Verification of Scientific Claims
Dr. Meritxell Bach Cuadra | UNIL | Explaining AI Decisions in Personalized Healthcare: Towards Integration of Deep Learning into Diagnosis and Treatment Planning (MSxplain)


In recent years, artificial intelligence (AI) has achieved remarkable technological breakthroughs through machine learning, driven primarily by the development of complex statistical models and algorithms, the availability of large amounts of data, and ever-improving computational capacity. Well-known examples of successful applications are language translation, machines that can conduct conversations with people, autonomous vehicles, and image-recognition software that is now in part superior to humans. As a foundational technology, AI has the potential to change almost all areas of our society and offers considerable potential for innovation and growth.

Trustworthiness is a necessary condition for the broad acceptance of AI in society, in particular when systems based on machine-learning algorithms provide information to human decision makers or even operate and decide autonomously. Typical sensitive applications include hiring decisions, judicial decision support, autonomous driving, credit rating, law enforcement, and medical diagnosis and therapy. Unfortunately, precisely those algorithms that are responsible for the current high expectations also exhibit serious weaknesses with respect to predictable and transparent behavior. Well-known keywords describing these deficiencies are robustness, adversarial attacks, traceability of decisions, discrimination, and bias. Resolving these deficiencies without impairing the efficiency of AI is a complex scientific endeavor. It entails formalizing the various aspects of responsible and trustworthy behavior, taking them into account in algorithm design, and providing assurances that the information and actions delivered by an AI system satisfy the conditions for a responsible decision.

Goals of the Research Program

The Hasler Responsible AI program will support research projects that investigate machine-learning algorithms and artificial intelligence systems whose results meet requirements for responsibility and trustworthiness. Projects are expected to engage seriously with the application of the new models and methods in scenarios that are relevant to society. In addition, projects should respect the interdisciplinary character of research on Responsible AI by involving the necessary expertise.

Particular research subjects include, but are not restricted to:

  • technical robustness and safety, providing formal guarantees
  • transparency of decisions, explainable AI, interpretability, understandability, traceability
  • fairness, bias, discrimination, causality
  • disinformation, fake news, ethics of AI

Advances in information and communication technology as well as in computer science must be at the center of all projects in the Hasler Responsible AI research program.