Browsing by Author "Ciatto, G."
Now showing 1 - 3 of 3
EXPECTATION: Personalized explainable artificial intelligence for decentralized agents with heterogeneous knowledge
Conference Object | Publication | Metadata only
(Springer, 2021) Calvaresi, D.; Ciatto, G.; Najjar, A.; Aydoğan, Reyhan; Van der Torre, L.; Omicini, A.; Schumacher, M.; Computer Science; AYDOĞAN, Reyhan

Explainable AI (XAI) has emerged in recent years as a set of techniques and methodologies for interpreting and explaining machine learning (ML) predictors. Many initiatives have been proposed to date. Nevertheless, current research efforts mainly focus on methods tailored to specific ML tasks and algorithms, such as image classification and sentiment analysis. Moreover, explanation techniques are still embryonic, and they mainly target ML experts rather than heterogeneous end users. Furthermore, existing solutions assume data to be centralised, homogeneous, and fully/continuously accessible; such circumstances are seldom found altogether in practice. Arguably, a system-wide perspective is currently missing. The project "Personalized Explainable Artificial Intelligence for Decentralized Agents with Heterogeneous Knowledge" (EXPECTATION) aims at overcoming such limitations. This manuscript presents the overall objectives and approach of the EXPECTATION project, focusing on theoretical and practical advances of the state of the art of XAI towards the construction of personalised explanations, despite the decentralisation and heterogeneity of knowledge, agents, and explainees (whether human or virtual). To tackle the challenges posed by personalisation, decentralisation, and heterogeneity, the project combines abstractions, methods, and approaches from the multi-agent systems, knowledge extraction/injection, negotiation, argumentation, and symbolic reasoning communities.

A general-purpose protocol for multi-agent based explanations
Conference Object | Publication | Metadata only
(Springer, 2023) Ciatto, G.; Magnini, M.; Buzcu, Berk; Aydoğan, Reyhan; Omicini, A.; Computer Science; AYDOĞAN, Reyhan; Buzcu, Berk

Building on prior work on explanation negotiation protocols, this paper proposes a general-purpose protocol for multi-agent systems in which recommender agents may need to provide explanations for their recommendations. The protocol specifies the roles and responsibilities of the explainee and the explainer agent, as well as the types of information that should be exchanged between them to ensure a clear and effective explanation. However, it does not prescribe any particular sort of recommendation or explanation, hence remaining agnostic w.r.t. such notions. Its novelty lies in the extended support for both ordinary and contrastive explanations, as well as for the situation where no explanation is needed because none is requested by the explainee. Accordingly, we formally present and analyse the protocol, motivating its design and discussing its generality. We also discuss the reification of the protocol into a reusable software library, namely PyXMas, which is meant to support developers willing to build explainable MAS leveraging our protocol.
Finally, we discuss how custom notions of recommendation and explanation can be easily plugged into PyXMas.

Symbolic knowledge extraction for explainable nutritional recommenders
Article | Publication | Open Access
(Elsevier, 2023-06) Magnini, M.; Ciatto, G.; Cantürk, Furkan; Aydoğan, Reyhan; Omicini, A.; Computer Science; AYDOĞAN, Reyhan; Cantürk, Furkan

Background and objective: This paper focuses on nutritional recommendation systems (RS), i.e. AI-powered automatic systems providing users with suggestions about what to eat to pursue their weight/body-shape goals. A trade-off among (potentially) conflicting requirements must be taken into account when designing these kinds of systems, including: (i) adherence to experts' prescriptions, (ii) adherence to users' tastes and preferences, and (iii) explainability of the whole recommendation process. Accordingly, in this paper we propose a novel approach to the engineering of nutritional RS, combining machine learning and symbolic knowledge extraction to profile users, hence harmonising the aforementioned requirements.

Methods: Our contribution focuses on the data processing workflow. Stemming from neural networks (NN) trained to predict user preferences, we use CART (Breiman et al., 1984) to extract symbolic rules in Prolog (Körner et al., 2022) form, and we combine them with expert prescriptions expressed in similar form. We can then query the resulting symbolic knowledge base via logic solvers to draw explainable recommendations.

Results: Experiments are performed on a publicly available dataset of 45,723 recipes, plus 12 synthetic datasets about as many imaginary users, and 6 experts' prescriptions. Fully-connected 4-layered NN are trained on those datasets, reaching ∼86% test-set accuracy on average. Extracted rules, in turn, have ∼80% fidelity w.r.t. those NN. The resulting recommendation system has a test-set precision of ∼74%. The symbolic approach makes it possible to inspect how the system draws recommendations.

Conclusions: Thanks to our approach, intelligent agents may learn users' preferences from data, convert them into symbolic form, and extend them with experts' goal-directed prescriptions. The resulting recommendations are then simultaneously acceptable to the end user and adequate from a nutritional perspective, while the whole process of recommendation generation is made explainable.
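The explanation protocol in "A general-purpose protocol for multi-agent based explanations" distinguishes an explainee role from an explainer role and supports ordinary explanations, contrastive explanations, and the case where no explanation is requested at all. The following is a minimal sketch of that interaction pattern; all class and method names here are illustrative inventions, not PyXMas's actual API.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Recommendation:
    """A recommendation, whose explanation is filled in only on request."""
    item: str
    explanation: Optional[str] = None


class ExplainerAgent:
    """Produces recommendations and, on demand, ordinary or contrastive explanations."""

    def recommend(self, query: str) -> Recommendation:
        # Placeholder recommendation logic.
        return Recommendation(item=f"suggestion for {query!r}")

    def explain(self, rec: Recommendation, contrast: Optional[str] = None) -> str:
        if contrast is None:
            # Ordinary explanation: why this item.
            return f"{rec.item} matches the explainee's profile"
        # Contrastive explanation: why this item rather than another.
        return f"{rec.item} was preferred over {contrast}"


class ExplaineeAgent:
    """Requests a recommendation; an explanation is exchanged only if asked for."""

    def __init__(self, explainer: ExplainerAgent):
        self.explainer = explainer

    def ask(self, query: str, want_explanation: bool = False,
            contrast: Optional[str] = None) -> Recommendation:
        rec = self.explainer.recommend(query)
        if want_explanation:  # no explanation message when none is requested
            rec.explanation = self.explainer.explain(rec, contrast)
        return rec


explainee = ExplaineeAgent(ExplainerAgent())
plain = explainee.ask("dinner")                                        # no explanation
contrastive = explainee.ask("dinner", want_explanation=True, contrast="pizza")
```

The sketch captures only the role split and the optional/contrastive branching described in the abstract; the real protocol also governs the message sequence and agnostic recommendation/explanation payloads.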
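The pedagogical step at the heart of the nutritional-recommender workflow, fitting CART to a trained network's predictions so that the extracted rules mimic the network, can be sketched with scikit-learn (whose DecisionTreeClassifier is an optimised CART implementation). This is a toy stand-in under assumed data, not the paper's pipeline: the toy "user" and its two recipe features are invented, and the rules are printed as text rather than translated to Prolog.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Toy recipe features: [calories, sugar], normalised to [0, 1].
X = rng.random((500, 2))
# Imaginary user: dislikes high-calorie recipes (label 1 = "likes").
y = (X[:, 0] < 0.6).astype(int)

# Stand-in for the paper's preference-predicting neural network.
nn = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=500,
                   random_state=0).fit(X, y)

# Pedagogical extraction: fit a CART tree to the *network's* predictions,
# so the tree's rules approximate the network's behaviour.
cart = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, nn.predict(X))

# Fidelity = agreement between extracted rules and the network they mimic.
fidelity = float((cart.predict(X) == nn.predict(X)).mean())
print(export_text(cart, feature_names=["calories", "sugar"]))
print(f"fidelity w.r.t. NN: {fidelity:.2f}")
```

In the paper, rules extracted this way are rendered in Prolog, merged with experts' prescriptions in the same form, and queried through a logic solver; the fidelity figure computed above corresponds to the ∼80% rule fidelity reported in the Results.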