The following scientific works and research publications have been developed within the framework of the iToBoS project:
- Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy.
- Towards the Interpretability of Deep Learning Models for Human Neuroimaging.
- Finding and removing Clever Hans: Using explanation methods to debug and improve deep models.
- Explain and improve: LRP-inference fine-tuning for image captioning models.
- PatClarC: Using pattern concept activation vectors for noise-robust model debugging.
- Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations.
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement.
- ECQx: Explainability-Driven Quantization for Low-Bit and Sparse DNNs.
- Explaining Machine Learning Models for Clinical Gait Analysis.
- Explainable AI Methods - A Brief Overview.
- Explaining the Predictions of Unsupervised Learning Models.
- CLEVR-XAI: A benchmark dataset for the ground truth evaluation of neural network explanations.
- Explain to Not Forget: Defending Against Catastrophic Forgetting with XAI.
- Registration of polarimetric images for in vivo skin diagnostics.
- Focus stacking in non-contact dermoscopy.
- Measurably Stronger Explanation Reliability via Model Canonization.
- What to Hide from Your Students: Attention-Guided Masked Image Modeling.
- Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations.
- Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations.
- Impact of standardization in tissue processing: the performance of different fixatives.
- Optimizing Explanations by Network Canonization and Hyperparameter Search.
- Mueller Matrix Microscopy for In Vivo Scar Tissue Diagnostics and Treatment Evaluation.
- The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus.
- AI privacy toolkit.
- Generative Adversarial Network for Personalized Art Therapy in Melanoma Disease Management.
- Explainable AI for Time Series via Virtual Inspection Layers.
- Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models.
- Information fusion as an integrative cross-cutting enabler to achieve robust, explainable, and trustworthy medical artificial intelligence.
- xxAI - Beyond Explainable Artificial Intelligence.
- Association of germline variants in telomere maintenance genes (POT1, TERF2IP, ACD, and TERT) with spitzoid morphology in familial melanoma: A multi-center case series.