AI Wine

AI Wine is a series of regular AI seminars, open to the whole AI community, that take place in the big common room of Torus AI at the following address: Bâtiment El Paso, 12 avenue de l'Europe, 31520 Ramonville-Saint-Agne. The sessions are also made available online for those who want to attend but cannot come in person.

The three current organizers of the seminar are Francesco Costantino (Professor of Mathematics, University of Toulouse), Pierre Gillibert (AI Scientist, Torus AI), and Milad Mozafari (ANITI postdoc).

The format of the seminar is as follows: one or two talks per session, held on Thursdays starting at 4pm, with coffee and cookies before the seminar and wine and cheese after.


Thursday 10 November 2022:

Milad Mozafari (4pm)
Reconstructing Natural Scenes from fMRI Patterns Using Deep Generative Networks

Decoding and reconstructing images from brain imaging data is a research area of high interest. Recent progress in deep generative neural networks has introduced new opportunities to tackle this problem. Here, we make use of large-scale generative networks to decode and reconstruct natural scenes from fMRI patterns. We computed a linear mapping between fMRI data, acquired over images from 150 different categories of ImageNet, and their corresponding deep network representations. Then, we applied this mapping to the fMRI activity patterns obtained from unseen test images in order to retrieve their latent vectors, and reconstruct the corresponding images. Not only was the pairwise image decoding from the predicted latent vectors highly accurate, but qualitative and quantitative assessments also revealed that the resulting image reconstructions were visually plausible, successfully captured many attributes of the original images, and had high perceptual similarity with the original content.
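The linear mapping step described above can be sketched as a small ridge regression in NumPy. All shapes and data below are synthetic stand-ins, not the actual fMRI setup; in the real pipeline, the predicted latent vector would be fed to a pretrained generative network to reconstruct the image.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: 100 "training images", 500 voxels, 64-dim latent vectors.
n_train, n_voxels, latent_dim = 100, 500, 64
latents = rng.standard_normal((n_train, latent_dim))
true_map = 0.1 * rng.standard_normal((latent_dim, n_voxels))
fmri = latents @ true_map + 0.01 * rng.standard_normal((n_train, n_voxels))

# Ridge regression: learn a linear map from fMRI patterns to latent vectors.
lam = 1.0
W = np.linalg.solve(fmri.T @ fmri + lam * np.eye(n_voxels), fmri.T @ latents)

# Decode an unseen fMRI pattern to its latent vector; a generative
# network would then reconstruct the image from this vector.
test_latent = rng.standard_normal(latent_dim)
test_fmri = test_latent @ true_map
predicted = test_fmri @ W
```

The same decoded latents support the pairwise decoding test mentioned in the abstract: for two candidate images, pick the one whose latent vector correlates more strongly with the predicted vector.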

Ismail Khalfaoui Hassani (5pm)
From Dilated Convolution with Learnable Spacings to Gaussian Mixture Kernel Convolution

Recent works indicate that convolutional neural networks (CNN) need large receptive fields (RF) to compete with visual transformers and their attention mechanism. In CNNs, RFs can simply be enlarged by increasing the convolution kernel sizes. Yet the number of trainable parameters, which scales quadratically with the kernel’s size in the 2D case, rapidly becomes prohibitive, and the training is notoriously difficult. This paper presents a new method to increase the RF size without increasing the number of parameters. The dilated convolution (DC) has already been proposed for the same purpose. DC can be seen as a convolution with a kernel that contains only a few non-zero elements placed on a regular grid. Here we present a new version of the DC in which the spacings between the non-zero elements, or equivalently their positions, are no longer fixed but learnable via backpropagation thanks to an interpolation technique. We call this method “Dilated Convolution with Learnable Spacings” (DCLS) and generalize it to the n-dimensional convolution case. However, our main focus here will be on the 2D case. We first tried our approach on ResNet50: we drop-in replaced the standard convolutions with DCLS ones, which increased the accuracy of ImageNet1k classification at iso-parameters, but at the expense of the throughput. Next, we used the recent ConvNeXt state-of-the-art convolutional architecture and drop-in replaced the depthwise convolutions with DCLS ones. This not only increased the accuracy of ImageNet1k classification but also of typical downstream and robustness tasks, again at iso-parameters but this time with negligible cost on throughput, as ConvNeXt uses separable convolutions. Conversely, classic DC led to poor performance with both ResNet50 and ConvNeXt.
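The interpolation trick at the heart of DCLS can be illustrated with a toy NumPy sketch (names and sizes here are illustrative, not the authors' implementation, which is a differentiable PyTorch module): each of the few non-zero weights sits at a fractional position inside the receptive field and is spread onto its neighbouring integer grid cells by bilinear interpolation, which is what makes the positions trainable by backpropagation.

```python
import numpy as np

def dcls_kernel(weights, positions, size):
    """Build a dense 2D kernel from a few weights placed at fractional
    positions, via bilinear interpolation of each weight onto the grid."""
    K = np.zeros((size, size))
    for w, (py, px) in zip(weights, positions):
        y0, x0 = int(np.floor(py)), int(np.floor(px))
        dy, dx = py - y0, px - x0
        # Spread the weight onto the 4 neighbouring grid cells; the
        # coefficients are differentiable in (py, px).
        for oy, cy in ((0, 1 - dy), (1, dy)):
            for ox, cx in ((0, 1 - dx), (1, dx)):
                y, x = y0 + oy, x0 + ox
                if 0 <= y < size and 0 <= x < size:
                    K[y, x] += w * cy * cx
    return K

# 3 non-zero elements inside a 7x7 receptive field.
weights = [1.0, -0.5, 0.25]
positions = [(1.3, 2.7), (4.0, 4.0), (5.6, 0.2)]
K = dcls_kernel(weights, positions, size=7)
print(np.isclose(K.sum(), sum(weights)))  # True: total weight is preserved
```

The resulting dense kernel can then be used in an ordinary convolution, so the receptive field grows with the kernel size while the parameter count stays tied to the small number of non-zero weights and their positions.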