Project Details

Description

This PhD project, a collaboration between Ulster University and Thales IAS, aims to advance research in Explainable AI (XAI), addressing the growing need for trust and transparency in AI systems. While Deep Neural Networks (DNNs) have driven significant advances in autonomous systems, with improved accuracy, scalability, and generalisation, understanding and explaining how AI models arrive at specific outputs remains a major challenge. XAI comprises the methods and processes that help users comprehend and trust machine learning outputs, and it is critical for implementing responsible AI, enabling fairness, accountability, and explainability at scale. A variety of XAI methods exist, applicable at different stages of the machine learning pipeline, but how to select and apply them for specific use cases requires further investigation.

The successful applicant will collaborate closely with Thales IAS and will have the opportunity to spend three months on-site (subject to passing security requirements) to access resources, consult with experts, and gain hands-on experience in data analysis focused on research and development tasks aligned with the research project or related challenges of interest to Thales.
Status: Active
Effective start/end date: 15/09/25 – 14/09/28
