License: Creative Commons Attribution 4.0 International license (CC BY 4.0)
When quoting this document, please refer to the following:
DOI: 10.4230/DagRep.13.4.1
URN: urn:nbn:de:0030-drops-192367
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2023/19236/
Authors: Agata Ciabattoni, John F. Horty, Marija Slavkovik, Leendert van der Torre, and Aleks Knoks
Editors: Agata Ciabattoni, John F. Horty, Marija Slavkovik, Leendert van der Torre, and Aleks Knoks
Normative Reasoning for AI (Dagstuhl Seminar 23151)
Abstract
Normative reasoning is reasoning about normative matters such as obligations, permissions, and the rights of individuals or groups. It is prevalent in both legal and ethical discourse, and it can, and arguably should, play a crucial role in the construction of autonomous agents. We often find it important to know whether specific norms apply in a given situation, to understand why and when they apply, and to understand why other norms do not. In most cases, our reasons for wanting to know are purely practical: we want to make the correct decision. But they can also be more theoretical, as they are when we engage in theoretical ethics. Either way, the same questions are crucial for designing autonomous agents that are sensitive to legal, ethical, and social norms. This Dagstuhl Seminar brought together experts in computer science, logic (including deontic logic and argumentation), philosophy, ethics, and law with the aim of finding effective ways of formalizing norms and embedding normative reasoning in AI systems. We discussed new ways of using deontic logic and argumentation to provide explanations that answer normative why-questions, such as "Why should I do A (rather than B)?", "Why should you do A (rather than I)?", "Why do you have the right to do A despite a certain fact or a certain norm?", and "Why does one normative system forbid me to do A, while another allows it?". We also explored the use of formal methods in combination with sub-symbolic AI (or machine learning), with a view to designing autonomous agents that can follow legal, ethical, and social norms.
BibTeX Entry
@Article{ciabattoni_et_al:DagRep.13.4.1,
author = {Ciabattoni, Agata and Horty, John F. and Slavkovik, Marija and van der Torre, Leendert and Knoks, Aleks},
title = {{Normative Reasoning for AI (Dagstuhl Seminar 23151)}},
pages = {1--23},
journal = {Dagstuhl Reports},
ISSN = {2192-5283},
year = {2023},
volume = {13},
number = {4},
editor = {Ciabattoni, Agata and Horty, John F. and Slavkovik, Marija and van der Torre, Leendert and Knoks, Aleks},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/opus/volltexte/2023/19236},
URN = {urn:nbn:de:0030-drops-192367},
doi = {10.4230/DagRep.13.4.1},
annote = {Keywords: deontic logic, autonomous agents, AI ethics, deontic explanations}
}
Keywords: deontic logic, autonomous agents, AI ethics, deontic explanations
Collection: DagRep, Volume 13, Issue 4
Issue Date: 2023
Date of publication: 02.11.2023