The application deadline is 29/10/2018.
Job description and requirements
Project reference: PSI2017-83493-R, "Active sampling of 3D motion and optic flow".
Applications are invited for one PhD position funded for four years by the Spanish Ministry of Science, Innovation and Universities in the Vision and Control of Action Research Group. The selected candidate will carry out their doctoral thesis under the supervision of Prof. Joan López-Moliner within a research project on optic flow analysis and 3D motion (see description below).
-As a candidate for this position, you should have a Master's degree (or equivalent) in psychology, neuroscience, biology, computer science, physics, or a related field.
-Previous experience with computer programming (Python, R, or C/C++), visual psychophysics, eye-tracking, and quantitative data analysis will be a plus.
-Candidates should have a good level of oral and written English.
-Research methods include human psychophysics, eye-movement analysis, behavioural experiments and computational modelling.
Contact: email@example.com (Prof. Joan López-Moliner) for further information, and send a short CV (PDF) if interested.
Project description: Active sampling of 3D motion and optic flow
Recently, research in visual perception has started to consider motor activity (e.g. eye or head movements) as a key component of sensory processing. For example, body, head and eye movements place the observer in a state, relative to objects in the environment, in which sampling the incoming flow of sensory information becomes more efficient. This efficiency can be measured through sensorimotor tasks (e.g. intercepting a moving target) and compared with ideal observer models built within optimality frameworks, that is, models that take different sources of uncertainty into account. This perspective is what «active vision» is about. Active vision processes are anticipatory in nature and help alleviate the problems caused by neural temporal delays in sensorimotor tasks where fast timing is critical. One basic aim of this project is therefore to study how humans actively sample sensory information about moving targets in the 3D scene around us.
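As a minimal sketch of what an ideal observer model "considering different sources of uncertainty" can mean in practice, the snippet below (illustrative only; the function and numbers are our own, not taken from the project) fuses two noisy Gaussian estimates of the same quantity by inverse-variance weighting, the standard optimal cue-combination rule: the more reliable cue gets the larger weight, and the fused estimate is never less reliable than either cue alone.

```python
def combine_estimates(mu1, var1, mu2, var2):
    """Optimally fuse two independent Gaussian estimates of one quantity.

    Each cue is weighted by its reliability (inverse variance), which is
    the maximum-likelihood / ideal-observer solution for Gaussian noise.
    """
    w1 = (1 / var1) / (1 / var1 + 1 / var2)  # weight of cue 1
    w2 = 1 - w1                              # weight of cue 2
    mu = w1 * mu1 + w2 * mu2                 # fused estimate
    var = 1 / (1 / var1 + 1 / var2)          # fused variance <= min(var1, var2)
    return mu, var

# Hypothetical example: two cues to the same time-to-contact,
# one precise (var = 0.04) and one noisy (var = 0.16).
mu, var = combine_estimates(1.0, 0.04, 1.2, 0.16)
# The precise cue dominates: mu = 1.04, var = 0.032
```

Human performance can then be compared against this benchmark: the closer the measured variability approaches the fused variance, the closer the observer is to optimal.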
Our previous work allows us to identify regularities in the optic flow (i.e. the pattern of retinal motion created by moving objects or self-motion) that carry informative patterns humans can exploit to time their responses when interacting with moving objects (e.g. intercepting a moving target). The first question, then, is whether motor activity unfolds so as to bring the observer into a position or state that allows these regularities in the sensory information to be sampled and processed more efficiently. Secondly, we aim to address whether this efficient sampling can be learnt, that is, whether the encoding of sensory information in the optic flow can be trained to reach optimal levels. To answer these questions we will define gain (cost) functions in the temporal and spatial domains, so that the perceived consequences (i.e. gain) of motor actions indicate how and when to execute future actions and decisions. We therefore expect (a) to validate methods that improve the extraction of temporal information from the optic flow, resulting in much lower performance variability, and (b) to provide computational definitions that can be implemented in display technologies for training purposes.
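One classic example of such a timing-relevant optic-flow regularity (offered here only as an illustration; it is not necessarily the one the project targets) is "tau" (Lee, 1976): for an object approaching at constant speed, the ratio of its angular size to its rate of angular expansion equals the remaining time-to-contact, without the observer needing to recover distance or speed separately. Under the small-angle approximation:

```python
def tau(theta, theta_dot):
    """Time-to-contact estimate from angular size and its rate of change.

    theta:     angular size of the object on the retina (rad)
    theta_dot: rate of angular expansion (rad/s)
    """
    return theta / theta_dot

# Hypothetical scenario: a 0.5 m target approaching head-on from 10 m
# at 5 m/s, so the true time-to-contact is 10 / 5 = 2 s.
size, dist, speed = 0.5, 10.0, 5.0
theta = size / dist                  # small-angle size: 0.05 rad
theta_dot = size * speed / dist**2   # expansion rate: 0.025 rad/s
ttc = tau(theta, theta_dot)          # recovers 2.0 s from optic flow alone
```

The point of the sketch is that a purely retinal ratio recovers a metric temporal quantity, which is exactly the kind of regularity an observer could learn to sample more efficiently.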