Clinical trial has commenced at Hospital Clinic Barcelona

The Foundation Clinic for Biomedical Research (FCRB) has started the iToBoS Data Acquisition Clinical Trial, which is taking place at the Hospital Clinic of Barcelona.

How to solve the gravity-induced coma aberration of liquid lenses

In a past post, we explained that the biggest limitation of liquid lenses is gravity-induced coma aberration, which degrades optical performance whenever the lenses are used with their optical axis oriented away from the vertical.

Gravity-induced coma aberration in the liquid lenses

The decision to integrate liquid lenses into iToBoS’ full-body scanner was taken due to the need to take thousands of pictures in the shortest amount of time.

Latent Diffusion Models

Diffusion models rival, and can even surpass, GANs on image synthesis: they generate more diverse outputs thanks to better coverage of the data distribution, and they do not suffer from the mode collapse and training instabilities that affect GANs.
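As an illustration of the diffusion process these models are built on, the sketch below shows the standard closed-form forward noising step of a DDPM-style model (a generic textbook sketch with an assumed linear beta schedule, not code from any specific latent diffusion implementation; latent diffusion applies the same process in an autoencoder's latent space):

```python
import numpy as np

def forward_diffusion(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form for a DDPM-style process:

    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps,  eps ~ N(0, I)
    """
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]        # cumulative product up to step t
    eps = rng.standard_normal(x0.shape)      # Gaussian noise
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

# Linear beta schedule over 1000 steps, a common choice for DDPM-style models
betas = np.linspace(1e-4, 0.02, 1000)
rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))             # toy "image" (or latent code)
xT = forward_diffusion(x0, t=999, betas=betas, rng=rng)
# At the final step alpha_bar is close to 0, so x_T is close to pure noise.
```

Generation then runs this process in reverse, with a neural network trained to predict the noise `eps` at each step.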


Transformers are deep learning architectures designed to solve sequence-to-sequence tasks (such as language translation), first proposed in [1].
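The core operation of the transformer proposed in [1] is scaled dot-product attention. A minimal numpy sketch of a single (unmasked, single-head) attention step, assuming toy random inputs purely for illustration:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, as in [1]."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # similarity of each query to each key
    weights = softmax(scores, axis=-1)  # each row is a distribution over keys
    return weights @ V, weights

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out, w = scaled_dot_product_attention(Q, K, V)
# out: each output row is a weighted average of the value rows V
```

The full architecture stacks many such attention layers (with multiple heads, residual connections, and feed-forward blocks), but this single step carries the key idea: every position attends to every other position in one parallel operation.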

Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond

Just over a year ago, the Quantus toolkit v0.1.1 was shared with the Machine Learning (ML) community as a pre-print on

XAI Beyond Explaining

Explainable Artificial Intelligence (XAI) can be employed for more than just gaining insight into the reasoning process of an Artificial Intelligence (AI) model.

Attention is all you need

The paper ‘Attention Is All You Need’ introduces the transformer, a sequence-to-sequence architecture.

XAI Hyperparameter Optimization

Rule-based eXplainable AI (XAI) methods, such as layer-wise relevance propagation (LRP) and DeepLift, offer great flexibility through their configurable rules, allowing AI practitioners to tailor the XAI method to the problem at hand.
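To make the configurable-rule idea concrete, here is a minimal sketch of the standard LRP-epsilon rule for a single dense layer, where `eps` is exactly the kind of hyperparameter such methods expose (this is a generic textbook formulation, not the specific tuning procedure of the post):

```python
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    """LRP-epsilon rule for one dense layer (output = a @ W + b).

    Redistributes the output relevance R_out to the inputs in proportion
    to each input's contribution z_ij = a_i * W_ij. The hyperparameter
    `eps` stabilises small denominators, trading conservation for noise
    suppression.
    """
    z = a[:, None] * W                    # contributions z_ij = a_i * W_ij
    denom = z.sum(axis=0) + b             # pre-activations z_j
    denom = denom + eps * np.sign(denom)  # epsilon stabilisation
    return (z / denom) @ R_out            # R_i = sum_j (z_ij / z_j) * R_j

rng = np.random.default_rng(0)
a = rng.random(5)                         # input activations (toy values)
W = rng.standard_normal((5, 3))
b = np.zeros(3)
R_out = rng.random(3)                     # relevance arriving at the outputs
R_in = lrp_epsilon(a, W, b, R_out)
# With b = 0 and a tiny eps, relevance is approximately conserved:
# R_in.sum() is close to R_out.sum().
```

Choosing `eps` (and which rule applies to which layer) is precisely the hyperparameter-optimization problem the post title refers to: larger values give smoother, sparser explanations but leak relevance.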

DenseNet Canonization

As we saw in a previous post, some of the challenges that arise when explaining neural network decisions can be overcome via canonization.
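A common canonization step (used here as an illustrative example, not necessarily the exact procedure applied to DenseNet in the post) is folding a BatchNorm layer into the preceding linear layer, which leaves the network's function unchanged but removes a layer that rule-based explanation methods handle poorly:

```python
import numpy as np

def fuse_linear_batchnorm(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold a BatchNorm layer into the preceding linear layer.

    BN(Wx + b) = gamma * (Wx + b - mean) / sqrt(var + eps) + beta
    is an affine map of Wx + b, so it can be absorbed into new
    parameters W', b' that compute the same function in one layer.
    """
    scale = gamma / np.sqrt(var + eps)
    W_fused = W * scale[:, None]          # rescale each output row of W
    b_fused = (b - mean) * scale + beta   # shift and rescale the bias
    return W_fused, b_fused

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4)); b = rng.standard_normal(3)
gamma = rng.random(3); beta = rng.standard_normal(3)
mean = rng.standard_normal(3); var = rng.random(3) + 0.5
x = rng.standard_normal(4)

Wf, bf = fuse_linear_batchnorm(W, b, gamma, beta, mean, var)
y_seq = gamma * ((W @ x + b) - mean) / np.sqrt(var + 1e-5) + beta  # original
y_fused = Wf @ x + bf                                              # canonized
# y_seq and y_fused agree up to floating-point error.
```

Because the canonized network computes the same outputs, its explanations can only improve through better-behaved relevance propagation, not through any change in the model's predictions.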