Artificial Intelligence (AI) is currently revolutionising mainstream services by processing large datasets to make decisions without human supervision. Complex algorithms, beyond the understanding of most users, have raised significant concerns about their impact on individual, economic and social life: for example, their role in granting parole, deciding who gets financial credit, and targeting users with news or adverts.
Questions that users may have about AI systems’ decisions include:
• Based on what features did the system reach this decision?
• How can I trust the recommendation of the system?
• Does the system’s decision make sense based on its application in the real world?
• Is the decision accurate enough for real world use?
The aim of this work is to investigate the intelligibility of AI systems and to use methods from human-computer interaction (HCI) to design a series of user interfaces that make AI systems’ decisions understandable to users. The vision of this work is to make AI systems more transparent, fair and accountable using explainable AI (XAI) methodologies.
The primary objectives of the work are to:
• Investigate users’ mental models of AI systems
• Co-design and evaluate explanatory XAI interfaces via a series of prototype AI applications
• Develop a library of reusable XAI design patterns for developing interfaces for AI systems
The project requires strong skills in human-computer interaction, design thinking and qualitative methods.
Student requirements for this project
Minimum 2.1 BSc in Computer Science, Design or a related discipline
Student stipend: €18,500
Materials/travel etc.: €2,600
Fees: paid
Applicants for this project are required to complete an Expression of Interest form (www.dit.ie/media/documents/study/postgraduateresearch/EOI%20Form.doc) and email it to [email protected].