Current Events



TRUST AND OPACITY IN AI: PERSPECTIVES FROM EPISTEMOLOGY, ETHICS, AND POLITICAL PHILOSOPHY


Conference, November 16-17, 2023


Organizers: Rico Hauswald (TU Dresden), Martin Hähnel (University of Bremen), Kathi Beier (University of Bremen)


Location: TU Dresden, SLUB Makerspace M2, Zellescher Weg 17, 01069 Dresden





As artificial intelligence (AI) is becoming part of our everyday lives, we are faced with the question of how to use it responsibly. In public discourse, this issue is often framed in terms of trust – for example, by asking whether, to what extent, and under what conditions trusting AI systems is appropriate. In this context, the philosophical debates on practical, political, and epistemic trust that have been ongoing since the 1980s have recently gained momentum and have been taken up and further developed within the philosophy of AI.

However, a number of fundamental questions remain unanswered. For example, some authors have argued that the concept of trust is interpersonal in nature and therefore entirely inapplicable to relationships with AI systems. According to these authors, AI systems cannot be “trusted” in the strict sense of the term, but can at best be “relied upon”. Other authors have disputed this assessment, arguing that at least certain kinds of trust can apply to relationships with AI technologies. Also controversial is the influence of AI’s notorious black-box character on its potential trustworthiness. While some authors consider AI systems to be trustworthy only to the extent that their internal processes can be made transparent and explainable, others point out that, after all, we do trust humans without being able to understand their cognitive processes. In the case of experts and epistemic authorities, we often do not even grasp the reasons and justifications they give. Another point of contention is the trustworthiness of the developers of innovative AI systems, i.e. the extent to which the trustworthiness of AI systems can be reduced to, and should be based on, trust in the developers themselves. In this context, the debate on “ethics by design” or “embedded ethics” seems to be crucial as it helps evaluate the various attempts currently being made to promote trust in AI by taking ethical principles and usability aspects into account.

The aim of this conference is to facilitate an exchange on these and related issues and to discuss the ethical as well as the political and epistemic dimensions of trust and opacity in AI systems. We would like to discuss questions such as:


  • What are the emotional, psychological and normative preconditions of trust, and can they be meaningfully applied to AI systems or robots, or is speaking of “trust in AI” a category mistake?
  • Is trust a value (perhaps a value in itself) that makes interaction with AI systems possible in the first place? What are the dangers and disadvantages of trusting AI technologies, and when is mistrust justified?
  • Does AI need to be explainable in order to be trustworthy? If so, what exactly does “explainability” mean and how can it be established?
  • When AI systems take over tasks from humans (e.g. as care robots), what are the similarities and differences in trusting them compared to trust in human actors?
  • When AI systems are used as sources of information (e.g. in the form of diagnostic systems in medicine), is trust in them similar to or different from classical testimonial trust and epistemic trust in experts and epistemic authorities?
  • How promising are ethics-by-design approaches, and what are the possibilities and limits of attempts to “embed” trust in AI systems?
  • Do AI technologies (e.g. ChatGPT) contribute to the destruction of existing trust relationships (e.g. in schools and universities)? How should relationships of trust between humans and AI systems be structured to meet ethical norms and standards without undermining existing human-to-human trust relationships? What influence do political and legal regulatory processes have on these trust-building micro-processes?


Speakers and Working Titles (updated October 12, 2023)

Martin Baesler (University of Freiburg): Trust and participation: The transformation of the public sphere through automated decision-making

Christian Budnik (University of Zürich): Relationships With Robots? Three Arguments Against The Possibility Of Trust In AI Algorithms

Juan M. Durán (TU Delft): On the epistemological basis for trustworthy AI

Ori Freiman (McMaster University, Hamilton): The Effects of Opacity on Trust: From Concepts to Measurements

Rico Hauswald (TU Dresden): Trust and Opacity: Comparing AI Systems and Human Experts

Andreas Kaminski (TU Darmstadt): Beyond Explainability: From Model Opacity to Pragmatic Opacity

Philip J. Nickel (Eindhoven University of Technology): Human-Centered Artificial Intelligence, Disruption, and Explainability

Karoline Reinhardt (University of Passau): A matter of trust? Perspectives from AI-Ethics

Eva Schmidt (TU Dortmund): Stakes and Understanding the Decisions of Artificial Intelligent Systems

Ines Schröder (University of Freiburg): Trust as responsive pre-condition for forming dignified relations


Registration

Participation is free, but please register by sending an email to: rico.hauswald@tu-dresden.de


Travel

Here you can find useful information on how to reach Dresden by train, plane, or car: https://www.dresden-convention.com/en/dresden/destination-dresden/getting-to-dresden


Dresden's main train station is about a 20-minute walk, or roughly 15 minutes by public transport, from TU Dresden, where the conference will take place.


   



Ethische Theorien für KI / Ethical Theories for AI


University of Bremen, November 17-18, 2022

 

Program:


Thursday, November 17, 2022


16:00

Welcome: Prof. Dr. Dagmar Borchers (Dean of Faculty 09, University of Bremen)

Introduction: Dr. Martin Hähnel (Coordinator of VUKIM, University of Bremen)

 

16:15

Is There Such a Thing as "Robot Rights"? (KEYNOTE)

John-Stewart Gordon (Lithuanian University of Health Sciences, Kaunas)

 

17:15

Ethics of Artificial Intelligence: A Philosophical Proposal for Systematizing the Discourse

Sophie Jörg (Hochschule für Philosophie München)


18:00

Break

 

18:15

The Capability Approach as an Ethical Interpretive Framework for the Use of Artificial Intelligence in Medicine

Sabine Wöhlke & Henk Jasper van Gils-Schmidt (Hochschule für Angewandte Wissenschaften Hamburg)


19:15

Promoting the Common Good through Solidarity in Medical Research? Lessons from the Bioethics Debate for an Ethics of AI

Katharina Rudschies (Universität Hamburg)


19:45

End of first session

 

20:00

Social Dinner  

 

 

Friday, November 18, 2022


9:00

Feminist Perspectives on AI Ethics

Regina Müller (Universität Bremen)


9:45

Critical theory as an ethical theory for AI

Rosalie Waelen (University of Twente)

 

10:30

Break

 

10:45

Robots, Wrasse, and the Evolution of Reciprocity

Michael Dale (Eindhoven University of Technology)


11:30

Cultivating Virtues in Machines Using Reinforcement Learning

Martin Kaas (University of York)

 

12:15

Lunch

 

13:30

Permissible Inferences: On the Epistemic Grounds of the Ethics of Medical Artificial Intelligence

Thomas Grote (Universität Tübingen)

 

14:15

Algorithms for Ethical Decision-Making in the Clinic

Lukas Meier (University of Cambridge)

 

15:00

Break

 

15:15

Rethinking goals of machine ethics: Could moral machines create a better world?

Andrea Berber (University of Belgrade)


16:00

Final remarks and end of workshop

 

 

Further Information:

The conference will take place at the Haus der Wissenschaft in Bremen.
For those who would like to join the workshop online, a Zoom link will be provided.

