Search results

  • Conference paper
    Paulino Passos G, Toni F, 2023,

    Learning case relevance in case-based reasoning with abstract argumentation

    , 36th International Conference on Legal Knowledge and Information Systems, Publisher: IOS Press, Pages: 95-1000, ISSN: 0922-6389

    Case-based reasoning is known to play an important role in several legal settings. We focus on a recent approach to case-based reasoning, supported by an instantiation of abstract argumentation whereby arguments represent cases and attack between arguments results from outcome disagreement between cases and a notion of relevance. We explore how relevance can be learnt automatically with the help of decision trees, and explore the combination of case-based reasoning with abstract argumentation (AA-CBR) and learning of case relevance for prediction in legal settings. Specifically, we show that, for two legal datasets, AA-CBR with decision-tree-based learning of case relevance performs competitively in comparison with decision trees, and that AA-CBR with decision-tree-based learning of case relevance results in a more compact representation than their decision tree counterparts, which could facilitate cognitively tractable explanations.

  • Conference paper
    Jiang J, Lan J, Leofante F, Rago A, Toni F et al., 2023,

    Provably robust and plausible counterfactual explanations for neural networks via robust optimisation

    , The 15th Asian Conference on Machine Learning, Publisher: ML Research Press, Pages: 582-597

    Counterfactual Explanations (CEs) have received increasing interest as a major methodology for explaining neural network classifiers. Usually, CEs for an input-output pair are defined as data points with minimum distance to the input that are classified with a different label than the output. To tackle the established problem that CEs are easily invalidated when model parameters are updated (e.g. retrained), studies have proposed ways to certify the robustness of CEs under model parameter changes bounded by a norm ball. However, existing methods targeting this form of robustness are not sound or complete, and they may generate implausible CEs, i.e., outliers wrt the training dataset. In fact, no existing method simultaneously optimises for closeness and plausibility while preserving robustness guarantees. In this work, we propose Provably RObust and PLAusible Counterfactual Explanations (PROPLACE), a method leveraging on robust optimisation techniques to address the aforementioned limitations in the literature. We formulate an iterative algorithm to compute provably robust CEs and prove its convergence, soundness and completeness. Through a comparative experiment involving six baselines, five of which target robustness, we show that PROPLACE achieves state-of-the-art performances against metrics on three evaluation aspects.

  • Conference paper
    Tirsi C-G, Proietti M, Toni F, 2023,

    ABALearn: an automated logic-based learning system for ABA frameworks

    , AIxIA 2023, Publisher: Springer Nature, ISSN: 1687-7470

    We introduce ABALearn, an automated algorithm that learns Assumption-Based Argumentation (ABA) frameworks from training data consisting of positive and negative examples, and a given background knowledge. ABALearn’s ability to generate comprehensible rules for decision-making promotes transparency and interpretability, addressing the challenges associated with the black-box nature of traditional machine learning models. This implementation is based on the strategy proposed in a previous work. The resulting ABA frameworks can be mapped onto logic programs with negation as failure. The main advantage of this algorithm is that it requires minimal information about the learning problem and it is also capable of learning circular debates. Our results show that this approach is competitive with state-of-the-art alternatives, demonstrating its potential to be used in real-world applications. Overall, this work contributes to the development of automated learning techniques for argumentation frameworks in the context of Explainable AI (XAI) and provides insights into how such learners can be applied to make predictions.

  • Conference paper
    Russo F, Toni F, 2023,

    Causal discovery and knowledge injection for contestable neural networks

    , 26th European Conference on Artificial Intelligence ECAI 2023, Publisher: IOS Press, Pages: 2025-2032, ISSN: 0922-6389

    Neural networks have proven to be effective at solving machine learning tasks but it is unclear whether they learn any relevant causal relationships, while their black-box nature makes it difficult for modellers to understand and debug them. We propose a novel method overcoming these issues by allowing a two-way interaction whereby neural-network-empowered machines can expose the underpinning learnt causal graphs and humans can contest the machines by modifying the causal graphs before re-injecting them into the machines, so that the learnt models are guaranteed to conform to the graphs and adhere to expert knowledge (some of which can also be given up-front). By building a window into the model behaviour and enabling knowledge injection, our method allows practitioners to debug networks based on the causal structure discovered from the data and underpinning the predictions. Experiments with real and synthetic tabular data show that our method improves predictive performance up to 2.4x while producing parsimonious networks, up to 7x smaller in the input layer, compared to SOTA regularised networks.

  • Conference paper
    Yin X, Potyka N, Toni F, 2023,

    Argument attribution explanations in quantitative bipolar argumentation frameworks

    , 26th European Conference on Artificial Intelligence ECAI 2023, Publisher: IOS Press, Pages: 2898-2905, ISSN: 0922-6389

    Argumentative explainable AI has been advocated by several in recent years, with an increasing interest in explaining the reasoning outcomes of Argumentation Frameworks (AFs). While there is a considerable body of research on qualitatively explaining the reasoning outcomes of AFs with debates/disputes/dialogues in the spirit of extension-based semantics, explaining the quantitative reasoning outcomes of AFs under gradual semantics has not received much attention, despite widespread use in applications. In this paper, we contribute to filling this gap by proposing a novel theory of Argument Attribution Explanations (AAEs) by incorporating the spirit of feature attribution from machine learning in the context of Quantitative Bipolar Argumentation Frameworks (QBAFs): whereas feature attribution is used to determine the influence of features towards outputs of machine learning models, AAEs are used to determine the influence of arguments towards topic arguments of interest. We study desirable properties of AAEs, including some new ones and some partially adapted from the literature to our setting. To demonstrate the applicability of our AAEs in practice, we conclude by carrying out two case studies in the scenarios of fake news detection and movie recommender systems.

  • Conference paper
    Leofante F, Lomuscio A, 2023,

    Robust explanations for human-neural multi-agent systems with formal verification

    , The 20th European Conference on Multi-Agent Systems (EUMAS 2023), Publisher: Springer, Pages: 244-262, ISSN: 1611-3349

    The quality of explanations in human-agent interactions is fundamental to the development of trustworthy AI systems. In this paper we study the problem of generating robust contrastive explanations for human-neural multi-agent systems and introduce two novel verification-based algorithms to (i) identify non-robust explanations generated by other methods and (ii) generate contrastive explanations equipped with formal robustness certificates. We present an implementation and evaluate the effectiveness of the approach on two case studies involving neural agents trained on credit scoring and traffic sign recognition tasks.

  • Conference paper
    Gorur D, Rago A, Toni F, 2023,

    ArguCast: a system for online multi-forecasting with gradual argumentation

    , Knowledge Representation 2023, Publisher: CEUR-WS.org, Pages: 40-51

    Judgmental forecasting is a form of forecasting which employs (human) users to make predictions about specified future events. Judgmental forecasting has been shown to perform better than quantitative methods for forecasting, e.g. when historical data is unavailable or causal reasoning is needed. However, it has a number of limitations, arising from users’ irrationality and cognitive biases. To mitigate against these phenomena, we leverage on computational argumentation, a field which excels in the representation and resolution of conflicting knowledge and human-like reasoning, and propose novel ArguCast frameworks (ACFs) and the novel online system ArguCast, integrating ACFs. ACFs and ArguCast accommodate multi-forecasting, by allowing multiple users to debate on multiple forecasting predictions simultaneously, each potentially admitting multiple outcomes. Finally, we propose a novel notion of user rationality in ACFs based on votes on arguments in ACFs, allowing the filtering out of irrational opinions before obtaining group forecasting predictions by means commonly used in judgmental forecasting.

  • Conference paper
    Leofante F, Botoeva E, Rajani V, 2023,

    Counterfactual explanations and model multiplicity: a relational verification view

    , The 20th International Conference on Principles of Knowledge Representation and Reasoning (KR2023), Publisher: IJCAI Organization, Pages: 763-768, ISSN: 2334-1033

    We study the interplay between counterfactual explanations and model multiplicity in the context of neural network classifiers. We show that current explanation methods often produce counterfactuals whose validity is not preserved under model multiplicity. We then study the problem of generating counterfactuals that are guaranteed to be robust to model multiplicity, characterise its complexity and propose an approach to solve this problem using ideas from relational verification.

  • Conference paper
    Rago A, Li H, Toni F, 2023,

    Interactive explanations by conflict resolution via argumentative exchanges

    , 20th International Conference on Principles of Knowledge Representation and Reasoning (KR2023), Publisher: IJCAI Organization, Pages: 582-592, ISSN: 2334-1033

    As the field of explainable AI (XAI) is maturing, calls for interactive explanations for (the outputs of) AI models are growing, but the state-of-the-art predominantly focuses on static explanations. In this paper, we focus instead on interactive explanations framed as conflict resolution between agents (i.e. AI models and/or humans) by leveraging on computational argumentation. Specifically, we define Argumentative eXchanges (AXs) for dynamically sharing, in multi-agent systems, information harboured in individual agents’ quantitative bipolar argumentation frameworks towards resolving conflicts amongst the agents. We then deploy AXs in the XAI setting in which a machine and a human interact about the machine’s predictions. We identify and assess several theoretical properties characterising AXs that are suitable for XAI. Finally, we instantiate AXs for XAI by defining various agent behaviours, e.g. capturing counterfactual patterns of reasoning in machines and highlighting the effects of cognitive biases in humans. We show experimentally (in a simulated environment) the comparative advantages of these behaviours in terms of conflict resolution, and show that the strongest argument may not always be the most effective.

  • Conference paper
    Nguyen H-T, Satoh K, Goebel R, Stathis K, Toni F et al., 2023,

    Black-box analysis: GPTs across time in legal textual entailment task

    , ISAILD symposium - International Symposium on Artificial Intelligence and Legal Documents, Publisher: IEEE
  • Journal article
    Toni F, Rago A, Cyras K, 2023,

    Forecasting with jury-based probabilistic argumentation

    , Journal of Applied Non-Classical Logics, Vol: 33, Pages: 224-243, ISSN: 1166-3081

    Probabilistic Argumentation supports a form of hybrid reasoning by integrating quantitative (probabilistic) reasoning and qualitative argumentation in a natural way. Jury-based Probabilistic Argumentation supports the combination of opinions by different reasoners. In this paper we show how Jury-based Probabilistic Abstract Argumentation (JPAA) and a form of Jury-based Probabilistic Assumption-Based Argumentation (JPABA) can naturally support forecasting, whereby subjective probability estimates are combined to make predictions about future occurrences of events. The form of JPABA we consider is an instance of JPAA and results from integrating Assumption-Based Argumentation (ABA) and probability spaces expressed by Bayesian networks, under the so-called constellation approach. It keeps the underlying structured argumentation and probabilistic reasoning modules separate while integrating them. We show how JPAA and (the considered form of) JPABA can be used to support forecasting by 1) supporting different forecasters (jurors) to determine the probability of arguments (and, in the JPABA case, sentences) with respect to their own probability spaces, while sharing arguments (and their components); and 2) supporting the aggregation of individual forecasts to produce group forecasts.

  • Conference paper
    Ayoobi H, Potyka N, Toni F, 2023,

    SpArX: Sparse Argumentative Explanations for Neural Networks

    , European Conference on Artificial Intelligence 2023

    Neural networks (NNs) have various applications in AI, but explaining their decisions remains challenging. Existing approaches often focus on explaining how changing individual inputs affects NNs' outputs. However, an explanation that is consistent with the input-output behaviour of an NN is not necessarily faithful to the actual mechanics thereof. In this paper, we exploit relationships between multi-layer perceptrons (MLPs) and quantitative argumentation frameworks (QAFs) to create argumentative explanations for the mechanics of MLPs. Our SpArX method first sparsifies the MLP while maintaining as much of the original structure as possible. It then translates the sparse MLP into an equivalent QAF to shed light on the underlying decision process of the MLP, producing global and/or local explanations. We demonstrate experimentally that SpArX can give more faithful explanations than existing approaches, while simultaneously providing deeper insights into the actual reasoning process of MLPs.

  • Conference paper
    De Angelis E, Proietti M, Toni F, 2023,

    ABA learning via ASP

    , ICLP 2023, Publisher: Open Publishing Association, Pages: 1-8, ISSN: 2075-2180

    Recently, ABA Learning has been proposed as a form of symbolic machine learning for drawing Assumption-Based Argumentation frameworks from background knowledge and positive and negative examples. We propose a novel method for implementing ABA Learning using Answer Set Programming as a way to help guide Rote Learning and generalisation in ABA Learning.

  • Conference paper
    Toni F, Potyka N, Ulbricht M, Totis P et al., 2023,

    Understanding ProbLog as probabilistic argumentation

    , ICLP 2023, Publisher: Open Publishing Association, Pages: 183-189, ISSN: 2075-2180

    ProbLog is a popular probabilistic logic programming language and tool, widely used for applications requiring to deal with inherent uncertainties in structured domains. In this paper we study some connections between ProbLog and a variant of another well-known formalism combining symbolic reasoning and reasoning under uncertainty, namely probabilistic argumentation. Specifically, we show that ProbLog is an instance of a form of Probabilistic Abstract Argumentation (PAA) under the constellation approach, which builds upon Assumption-Based Argumentation (ABA). The connections pave the way towards equipping ProbLog with a variety of alternative semantics, inherited from PAA/PABA, as well as obtaining novel argumentation semantics for PAA/PABA, leveraging on existing connections between ProbLog and argumentation. Moreover, the connections pave the way towards novel forms of argumentative explanations for ProbLog’s outputs.

  • Conference paper
    Mihailescu I, Weng A, Sharma S, Ghitu M, Grewal D, Chew K, Ayoobi H, Potyka N, Toni F et al., 2023,

    PySpArX - A Python library for generating Sparse Argumentative eXplanations for neural networks

    , ICLP 2023, Publisher: Open Publishing Association, Pages: 336-336, ISSN: 2075-2180
  • Conference paper
    Paulino Passos G, Satoh K, Toni F, 2023,

    A dataset of contractual events in court decisions

    , Logic Programming and Legal Reasoning Workshop @ ICLP 2023, Publisher: CEUR Workshop Proceedings, ISSN: 1613-0073

    The promise of automation of legal reasoning is developing technology that reduces human time required for legal tasks or that improves human performance on such tasks. In order to do so, different methods and systems based on logic programming were developed. However, in order to apply such methods on legal data, it is necessary to provide an interface between human users and the legal reasoning system, and the most natural interface in the legal domain is natural language, in particular, written text. In order to perform reasoning in written text using logic programming methods, it is then necessary to map expressions in text to atoms and predicates in the formal language, a task generally referred to as information extraction. In this work, we propose a new dataset for the task of information extraction, in particular event extraction, in court decisions, focusing on contracts. Our dataset captures contractual relations and events that affect them in some way, such as negotiations preceding a (possible) contract, the execution of a contract, or its termination. We conducted text annotation with law students and graduates, resulting in a dataset with 207 documents, 3934 sentences, 4627 entities, and 1825 events. We describe here this resource, the annotation process, its evaluation with inter-annotator agreement metrics, and discuss challenges during the development of this resource and for the future.

  • Conference paper
    Nguyen H-T, Toni F, Stathis K, Satoh K et al., 2023,

    Beyond logic programming for legal reasoning

    , Logic Programming and Legal Reasoning Workshop@ICLP2023, Publisher: CEUR-WS.org, ISSN: 1613-0073

    Logic programming has long been advocated for legal reasoning, and several approaches have been put forward relying upon explicit representation of the law in logic programming terms. In this position paper we focus on the PROLEG logic-programming-based framework for formalizing and reasoning with Japanese presupposed ultimate fact theory. Specifically, we examine challenges and opportunities in leveraging deep learning techniques for improving legal reasoning using PROLEG, identifying four distinct options ranging from enhancing fact extraction using deep learning to end-to-end solutions for reasoning with textual legal descriptions. We assess advantages and limitations of each option, considering their technical feasibility, interpretability, and alignment with the needs of legal practitioners and decision-makers. We believe that our analysis can serve as a guideline for developers aiming to build effective decision-support systems for the legal domain, while fostering a deeper understanding of challenges and potential advancements by neuro-symbolic approaches in legal applications.

  • Conference paper
    Proietti M, Toni F, 2023,

    A roadmap for neuro-argumentative learning

    , 17th International Workshop on Neural-Symbolic Learning and Reasoning (NeSy 2023), Publisher: CEUR Workshop Proceedings, Pages: 1-8, ISSN: 1613-0073

    Computational argumentation (CA) has emerged, in recent decades, as a powerful formalism for knowledge representation and reasoning in the presence of conflicting information, notably when reasoning non-monotonically with rules and exceptions. Much existing work in CA has focused, to date, on reasoning with given argumentation frameworks (AFs) or, more recently, on using AFs, possibly automatically drawn from other systems, for supporting forms of XAI. In this short paper we focus instead on the problem of learning AFs from data, with a focus on neuro-symbolic approaches. Specifically, we overview existing forms of neuro-argumentative (machine) learning, resulting from a combination of neural machine learning mechanisms and argumentative (symbolic) reasoning. We include in our overview neuro-symbolic paradigms that integrate reasoners with a natural understanding in argumentative terms, notably those capturing forms of non-monotonic reasoning in logic programming. We also outline avenues and challenges for future work in this spectrum.

  • Conference paper
    Jiang J, Leofante F, Rago A, Toni F et al., 2023,

    Formalising the robustness of counterfactual explanations for neural networks

    , 37th AAAI Conference on Artificial Intelligence (AAAI 2023), Publisher: Association for the Advancement of Artificial Intelligence, Pages: 14901-14909, ISSN: 2374-3468

    The use of counterfactual explanations (CFXs) is an increasingly popular explanation strategy for machine learning models. However, recent studies have shown that these explanations may not be robust to changes in the underlying model (e.g., following retraining), which raises questions about their reliability in real-world applications. Existing attempts towards solving this problem are heuristic, and the robustness to model changes of the resulting CFXs is evaluated with only a small number of retrained models, failing to provide exhaustive guarantees. To remedy this, we propose the first notion to formally and deterministically assess the robustness (to model changes) of CFXs for neural networks, that we call ∆-robustness. We introduce an abstraction framework based on interval neural networks to verify the ∆-robustness of CFXs against a possibly infinite set of changes to the model parameters, i.e., weights and biases. We then demonstrate the utility of this approach in two distinct ways. First, we analyse the ∆-robustness of a number of CFX generation methods from the literature and show that they unanimously host significant deficiencies in this regard. Second, we demonstrate how embedding ∆-robustness within existing methods can provide CFXs which are provably robust.

  • Conference paper
    Potyka N, Yin X, Toni F, 2023,

    Explaining random forests using bipolar argumentation and Markov networks

    , AAAI 23, Pages: 9458-9460, ISSN: 2159-5399

    Random forests are decision tree ensembles that can be used to solve a variety of machine learning problems. However, as the number of trees and their individual size can be large, their decision making process is often incomprehensible. We show that their decision process can be naturally represented as an argumentation problem, which allows creating global explanations via argumentative reasoning. We generalize sufficient and necessary argumentative explanations using a Markov network encoding, discuss the relevance of these explanations and establish relationships to families of abductive explanations from the literature. As the complexity of the explanation problems is high, we present an efficient approximation algorithm with probabilistic approximation guarantees.

  • Conference paper
    Nguyen H-T, Goebel R, Toni F, Stathis K, Satoh K et al., 2023,

    How well do SOTA legal reasoning models support abductive reasoning?

    , Logic Programming and Legal Reasoning Workshop@ICLP2023

    We examine how well the state-of-the-art (SOTA) models used in legal reasoning support abductive reasoning tasks. Abductive reasoning is a form of logical inference in which a hypothesis is formulated from a set of observations, and that hypothesis is used to explain the observations. The ability to formulate such hypotheses is important for lawyers and legal scholars as it helps them articulate logical arguments, interpret laws, and develop legal theories. Our motivation is to consider the belief that deep learning models, especially large language models (LLMs), will soon replace lawyers because they perform well on tasks related to legal text processing. But to do so, we believe, requires some form of abductive hypothesis formation. In other words, while LLMs become more popular and powerful, we want to investigate their capacity for abductive reasoning. To pursue this goal, we start by building a logic-augmented dataset for abductive reasoning with 498,697 samples and then use it to evaluate the performance of a SOTA model in the legal field. Our experimental results show that although these models can perform well on tasks related to some aspects of legal text processing, they still fall short in supporting abductive reasoning tasks.

  • Journal article
    Lertvittayakumjorn P, Toni F, 2023,

    Argumentative explanations for pattern-based text classifiers

    , Argument and Computation, Vol: 14, Pages: 163-234, ISSN: 1946-2174

    Recent works in Explainable AI mostly address the transparency issue of black-box models or create explanations for any kind of models (i.e., they are model-agnostic), while leaving explanations of interpretable models largely underexplored. In this paper, we fill this gap by focusing on explanations for a specific interpretable model, namely pattern-based logistic regression (PLR) for binary text classification. We do so because, albeit interpretable, PLR is challenging when it comes to explanations. In particular, we found that a standard way to extract explanations from this model does not consider relations among the features, making the explanations hardly plausible to humans. Hence, we propose AXPLR, a novel explanation method using (forms of) computational argumentation to generate explanations (for outputs computed by PLR) which unearth model agreements and disagreements among the features. Specifically, we use computational argumentation as follows: we see features (patterns) in PLR as arguments in a form of quantified bipolar argumentation frameworks (QBAFs) and extract attacks and supports between arguments based on specificity of the arguments; we understand logistic regression as a gradual semantics for these QBAFs, used to determine the arguments’ dialectic strength; and we study standard properties of gradual semantics for QBAFs in the context of our argumentative re-interpretation of PLR, sanctioning its suitability for explanatory purposes. We then show how to extract intuitive explanations (for outputs computed by PLR) from the constructed QBAFs. Finally, we conduct an empirical evaluation and two experiments in the context of human-AI collaboration to demonstrate the advantages of our resulting AXPLR method.

  • Conference paper
    Leofante F, Lomuscio A, 2023,

    Towards robust contrastive explanations for human-neural multi-agent systems

    , International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2023), Publisher: ACM, Pages: 2343-2345

    Generating explanations of high quality is fundamental to the development of trustworthy human-AI interactions. We here study the problem of generating contrastive explanations with formal robustness guarantees. We formalise a new notion of robustness and introduce two novel verification-based algorithms to (i) identify non-robust explanations generated by other methods and (ii) generate contrastive explanations augmented with provable robustness certificates. We present an implementation and evaluate the utility of the approach on two case studies concerning neural agents trained on credit scoring and image classification tasks.

  • Journal article
    Rago A, Russo F, Albini E, Toni F, Baroni P et al., 2023,

    Explaining classifiers’ outputs with causal models and argumentation

    , Journal of Applied Logics, Vol: 10, Pages: 421-449, ISSN: 2631-9810

    We introduce a conceptualisation for generating argumentation frameworks (AFs) from causal models for the purpose of forging explanations for models’ outputs. The conceptualisation is based on reinterpreting properties of semantics of AFs as explanation moulds, which are means for characterising argumentative relations. We demonstrate our methodology by reinterpreting the property of bi-variate reinforcement in bipolar AFs, showing how the extracted bipolar AFs may be used as relation-based explanations for the outputs of causal models. We then evaluate our method empirically when the causal models represent (Bayesian and neural network) machine learning models for classification. The results show advantages over a popular approach from the literature, both in highlighting specific relationships between feature and classification variables and in generating counterfactual explanations with respect to a commonly used metric.

  • Conference paper
    Santhirasekaram A, Kori A, Winkler M, Rockall A, Toni F, Glocker B et al., 2023,

    Robust Hierarchical Symbolic Explanations in Hyperbolic Space for Image Classification

    , Computer Vision and Pattern Recognition
  • Journal article
    Albini E, Rago A, Baroni P, Toni F et al., 2023,

    Achieving descriptive accuracy in explanations via argumentation: the case of probabilistic classifiers

    , Frontiers in Artificial Intelligence, Vol: 6, Pages: 1-18, ISSN: 2624-8212

    The pursuit of trust in and fairness of AI systems in order to enable human-centric goals has been gathering pace of late, often supported by the use of explanations for the outputs of these systems. Several properties of explanations have been highlighted as critical for achieving trustworthy and fair AI systems, but one that has thus far been overlooked is that of descriptive accuracy (DA), i.e., that the explanation contents are in correspondence with the internal working of the explained system. Indeed, the violation of this core property would lead to the paradoxical situation of systems producing explanations which are not suitably related to how the system actually works: clearly this may hinder user trust. Further, if explanations violate DA then they can be deceitful, resulting in an unfair behavior toward the users. Crucial as the DA property appears to be, it has been somehow overlooked in the XAI literature to date. To address this problem, we consider the questions of formalizing DA and of analyzing its satisfaction by explanation methods. We provide formal definitions of naive, structural and dialectical DA, using the family of probabilistic classifiers as the context for our analysis. We evaluate the satisfaction of our given notions of DA by several explanation methods, amounting to two popular feature-attribution methods from the literature, variants thereof and a novel form of explanation that we propose. We conduct experiments with a varied selection of concrete probabilistic classifiers and highlight the importance, with a user study, of our most demanding notion of dialectical DA, which our novel method satisfies by design and others may violate. We thus demonstrate how DA could be a critical component in achieving trustworthy and fair systems, in line with the principles of human-centric AI.

  • Conference paper
    Nguyen HT, Goebel R, Toni F, Stathis K, Satoh K et al., 2023,

    LawGiBa – Combining GPT, knowledge bases, and logic programming in a legal assistance system

    , JURIX 2023: The Thirty-sixth Annual Conference, Maastricht, the Netherlands, 18–20 December 2023, Publisher: IOS Press, Pages: 371-374, ISSN: 0922-6389

    We present LawGiBa, a proof-of-concept demonstration system for legal assistance that combines GPT, legal knowledge bases, and Prolog’s logic programming structure to provide explanations for legal queries. This novel combination effectively and feasibly addresses the hallucination issue of large language models (LLMs) in critical domains, such as law. Through this system, we demonstrate how incorporating a legal knowledge base and logical reasoning can enhance the accuracy and reliability of legal advice provided by AI models like GPT. Though our work is primarily a demonstration, it provides a framework to explore how knowledge bases and logic programming structures can be further integrated with generative AI systems, to achieve improved results across various natural languages and legal systems.

  • Conference paper
    Albini E, Rago A, Baroni P, Toni F et al., 2022,

    Descriptive accuracy in explanations: the case of probabilistic classifiers

    , 15th International Conference on Scalable Uncertainty Management (SUM 2022), Publisher: Springer, Pages: 279-294

    A user receiving an explanation for outcomes produced by an artificially intelligent system expects that it satisfies the key property of descriptive accuracy (DA), i.e. that the explanation contents are in correspondence with the internal working of the system. Crucial as this property appears to be, it has been somehow overlooked in the XAI literature to date. To address this problem, we consider the questions of formalising DA and of analysing its satisfaction by explanation methods. We provide formal definitions of naive, structural and dialectical DA, using the family of probabilistic classifiers as the context for our analysis. We evaluate the satisfaction of our given notions of DA by several explanation methods, amounting to two popular feature-attribution methods from the literature and a novel form of explanation that we propose and complement our analysis with experiments carried out on a varied selection of concrete probabilistic classifiers.

  • Conference paper
    Proietti M, Toni F, 2022,

    Learning assumption-based argumentation frameworks

    , 31st International Conference on Inductive Logic Programming (ILP 2022)

    We propose a novel approach to logic-based learning which generates assumption-based argumentation (ABA) frameworks from positive and negative examples, using a given background knowledge. These ABA frameworks can be mapped onto logic programs with negation as failure that may be non-stratified. Whereas existing argumentation-based methods learn exceptions to general rules by interpreting the exceptions as rebuttal attacks, our approach interprets them as undercutting attacks. Our learning technique is based on the use of transformation rules, including some adapted from logic program transformation rules (notably folding) as well as others, such as rote learning and assumption introduction. We present a general strategy that applies the transformation rules in a suitable order to learn stratified frameworks, and we also propose a variant that handles the non-stratified case. We illustrate the benefits of our approach with a number of examples, which show that, on one hand, we are able to easily reconstruct other logic-based learning approaches and, on the other hand, we can work out in a very simple and natural way problems that seem to be hard for existing techniques.

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
