
Lille / October 11-13, 2017

October 11-12: symposium "Perception des images et déficit du champ visuel" (image perception and visual field deficits)
October 12-13: 11th GDR Vision meeting

GDR-Vision


The GDR Vision meeting will take place in Lille on 12-13 October 2017, organized by Laurent Madelain (SCALab, UMR 9193, U. Lille).

As always, we welcome talk (15 min + 5 min for questions) and poster submissions.

We will also have three keynote speakers (50 min + 10 min for questions).

Keynote speakers


Suliann Ben Hamed,

Centre de Neuroscience Cognitive, CNRS, Bron

http://benhamedteam.cnc.isc.cnrs.fr/fr/


Natalie Hempel de Ibarra,

Centre for Research in Animal Behaviour (CRAB), University of Exeter

http://psychology.exeter.ac.uk/staff/index.php?web_id=Natalie_Hempel_de_Ibarra


Jenny Read,

Institute of Neuroscience, Newcastle University

http://www.jennyreadresearch.com/

Abstracts


Natalie Hempel de Ibarra, Centre for Research in Animal Behaviour (CRAB), University of Exeter

Insect vision: when and why small body size matters

Animals vary greatly in body size, which to a large extent defines the resolution of their sensory systems and how their brains process sensory information. This is particularly evident when comparing eye designs across the animal kingdom. Insects are small, have low-resolution eyes, and yet possess sophisticated vision that enables them to solve a range of perceptual and navigational tasks. I will present ideas and current work investigating how bees acquire and learn visual information for spatial orientation and for localising flowers, whilst controlling their flight movements in three-dimensional space.


Suliann Ben Hamed, Centre de Neuroscience Cognitive, CNRS, Bron

The spatial and temporal dynamics of attention: insights from the real-time decoding of the attentional spotlight.

As early as 1890, William James defined attention as the cognitive process by which the mind takes possession, in clear and vivid form, of one out of what seem several simultaneous objects or trains of thought. Since then, this cognitive function has been explored by experimental psychologists and neuroscientists alike, yet the knowledge we have gained about this process so far rests on indirect, task-based inferences rather than on where attention is actually being placed by the subject. I will present a new approach to the study of attention, based on the real-time tracking of the covert spatial attention spotlight from the ongoing activity of bilateral prefrontal dense neuronal recordings in the non-human primate, and I will show that this approach is instrumental in characterizing the spatial and temporal dynamics of attentional processes.


Jenny Read (presenting), Sid Henriksen, Dan Butts & Bruce Cumming, Institute of Neuroscience, Newcastle University

The neural basis of stereopsis: understanding how binocular disparity is encoded in primary visual cortex

Primate stereopsis is remarkably precise and can break camouflage, revealing structures that are monocularly invisible. This ability depends on matching up the two eyes’ images, a process which begins with disparity-sensitive neurons in primary visual cortex, V1. The currently accepted model of these neurons is a 3-layer linear/nonlinear neural network. The weights from the input layer to the hidden layer represent binocular simple-cell receptive fields. These simple cells then converge onto a single V1 complex cell. With the right parameters, this model can reproduce many general properties of V1 neurons, notably their attenuated responses to anticorrelated images. In anticorrelated images, contrast is inverted in one eye, producing many false local matches but no global depth. However, attempts to fit these models to V1 neurons using spike-triggered covariance have not shown this attenuation, so it is unclear whether this model really describes how V1 works. We have used a new machine learning approach to train models on correlated, uncorrelated and anticorrelated random-line patterns with a range of disparities. Despite being given only raw images, not disparity or correlation, as input, the model predicts disparity tuning curves well for all three correlations. This shows for the first time that these models can describe individual V1 neurons. However, many neurons show very high activity for one preferred disparity, which the models cannot capture. This suggests that the real puzzle of V1 neurons may not be how they attenuate their response to false matches, but how they boost their signal for one preferred disparity.
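The 3-layer linear/nonlinear model the abstract refers to resembles the classical binocular energy model. A minimal NumPy sketch of that family of models, under assumed illustrative parameters (1-D Gabor receptive fields, position-shift disparity encoding, a quadrature pair of simple cells, a squaring nonlinearity); these choices are for illustration only and are not the authors' fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)

def gabor(x, center, sigma=2.0, freq=0.25, phase=0.0):
    """1-D Gabor receptive field (illustrative parameters)."""
    envelope = np.exp(-(x - center) ** 2 / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * freq * (x - center) + phase)

N = 64                 # stimulus width in pixels
x = np.arange(N)
pref_disp = 4          # preferred disparity (position-shift encoding)

# Hidden layer: a quadrature pair of binocular simple cells whose
# right-eye receptive field is shifted by the preferred disparity.
phases = [0.0, np.pi / 2]
wL = [gabor(x, N / 2, phase=p) for p in phases]
wR = [gabor(x, N / 2 + pref_disp, phase=p) for p in phases]

def complex_response(IL, IR):
    """Linear binocular filtering, squaring nonlinearity, summation."""
    return sum((wl @ IL + wr @ IR) ** 2 for wl, wr in zip(wL, wR))

def tuning(disparities, anticorrelated=False, trials=200):
    """Mean response to random 1-D line patterns at each disparity."""
    curve = []
    for d in disparities:
        r = 0.0
        for _ in range(trials):
            IL = rng.standard_normal(N)
            IR = np.roll(IL, d)      # right image = shifted left image
            if anticorrelated:
                IR = -IR             # invert contrast in one eye
            r += complex_response(IL, IR)
        curve.append(r / trials)
    return np.array(curve)

disps = list(range(-8, 9))
corr = tuning(disps)
anti = tuning(disps, anticorrelated=True)
```

In this basic formulation the anticorrelated tuning curve is inverted at the preferred disparity with undiminished amplitude; the attenuated anticorrelated responses of real V1 neurons mentioned in the abstract are what extended versions of the model (e.g. with additional output nonlinearities) must capture.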


You must have an account to subscribe or submit an abstract; use the "connexion" button at the top right.
