The following scientific works and research publications were produced within the framework of the iToBoS project:
- Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy.
- Towards the Interpretability of Deep Learning Models for Multi-modal Neuroimaging: Finding Structural Changes of the Ageing Brain.
- Finding and removing Clever Hans: Using explanation methods to debug and improve deep models.
- Explain and improve: LRP-inference fine-tuning for image captioning models.
- PatClarC: Using pattern concept activation vectors for noise-robust model debugging.
- Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations.
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement.
- ECQx: Explainability-Driven Quantization for Low-Bit and Sparse DNNs.
- Explaining Machine Learning Models for Clinical Gait Analysis.
- Explainable AI Methods - A Brief Overview.
- Explaining the Predictions of Unsupervised Learning Models.
- CLEVR-XAI: A benchmark dataset for the ground truth evaluation of neural network explanations.
- Applying Artificial Intelligence Privacy Technology in the Healthcare Domain.
- Explain to Not Forget: Defending Against Catastrophic Forgetting with XAI.
- Registration of polarimetric images for in vivo skin diagnostics.
- Focus stacking in non-contact dermoscopy.
- Measurably Stronger Explanation Reliability via Model Canonization.
- What to Hide from Your Students: Attention-Guided Masked Image Modeling.
- Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations.
- Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations.
- Impact of standardization in tissue processing: the performance of different fixatives.
- OpenFilter: A Framework to Democratize Research Access to Social Media AR Filters.
- Optimizing Explanations by Network Canonization and Hyperparameter Search.
- Mueller Matrix Microscopy for In Vivo Scar Tissue Diagnostics and Treatment Evaluation.
- The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus.
- AI privacy toolkit.
- Generative Adversarial Network for Personalized Art Therapy in Melanoma Disease Management.
- Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models.
- Information fusion as an integrative cross-cutting enabler to achieve robust, explainable, and trustworthy medical artificial intelligence.
- xxAI - Beyond Explainable Artificial Intelligence.
- Toward Explainable Artificial Intelligence for Regression Models: A methodological perspective.
- XAI-based Comparison of Input Representations for Audio Event Classification.
- Association of germline variants in telomere maintenance genes (POT1, TERF2IP, ACD, and TERT) with spitzoid morphology in familial melanoma: A multi-center case series.
- A group of three miRNAs can act as candidate circulating biomarkers in liquid biopsies from melanoma patients.
- Layer-wise Feedback Propagation.
- Evaluating deep transfer learning for whole-brain cognitive decoding.
- Keep It SimPool: Who Said Supervised Transformers Suffer from Attention Deficit?
- Human-Centered Evaluation of XAI Methods.
- Automatic Skin Cancer Detection Using Clinical Images: A Comprehensive Review.
- Understanding the (Extra-)Ordinary: Validating Deep Model Decisions with Prototypical Concept-based Explanations.
- Improved polarimetric analysis of human skin through stitching: advantages, limitations, and applications in dermatology.
- AudioMNIST: Exploring Explainable Artificial Intelligence for audio analysis on a simple benchmark.
- Dicing with data: the risks, benefits, tensions and tech of health data in the iToBoS project.
- Unraveling the Complex Nexus of Human Papillomavirus (HPV) in Extragenital Keratinocyte Skin Tumors: A Comprehensive Analysis of Bowen’s Disease and In Situ Squamous-Cell Carcinoma.
- Genetic testing for familial melanoma.
- Navigating Neural Space: Revisiting Concept Activation Vectors to Overcome Directional Divergence.
- Explaining Predictive Uncertainty by Exposing Second-Order Effects.
- DualView: Data Attribution from the Dual Perspective.
- Monitoring of multiple fabrication parameters of electrospun polymer fibers using Mueller matrix analysis.
- Addressing the generalization of 3D registration methods with a featureless baseline and an unbiased benchmark.
- From Hope to Safety: Unlearning Biases of Deep Models by Enforcing the Right Reasons in Latent Space.
- A protocol for annotation of total body photography for machine learning to analyze skin phenotype and lesion classification.
- CoSy: Evaluating Textual Explanations of Neurons.
- Explainable AI for Time Series via Virtual Inspection Layers.
- A Narrative Review: Opportunities and Challenges in Artificial Intelligence Skin Image Analyses Using Total Body Photography.
- Interpretable AI for Dermoscopic Images of Pigmented Skin Lesions.
- Reactive Model Correction: Mitigating Harm to Task-Relevant Features via Conditional Bias Suppression.
- PURE: Turning Polysemantic Neurons Into Pure Features by Identifying Relevant Circuits.
- Explainable concept mappings of MRI: Revealing the mechanisms underlying deep learning-based brain disease classification.
- A Fresh Look at Sanity Checks for Saliency Maps.
- Perspectives for Generative AI-Assisted Art Therapy for Melanoma Patients.
- Skin 2.0: How Cutaneous Digital Twins Could Reshape Dermatology.
- Integrating generative AI with ABCDE rule analysis for enhanced skin cancer diagnosis, dermatologist training and patient education.
- Patient Consensus: Data, AI and data-dependent models in business and research.
- SPOT: Self-Training with Patch-Order Permutation for Object-Centric Learning with Autoregressive Transformers.
- Pruning By Explaining Revisited: Optimizing Attribution Methods to Prune CNNs and Transformers.