3.8 Proceedings Paper

Depth and motion cues with phosphene patterns for prosthetic vision

Publisher

IEEE
DOI: 10.1109/ICCVW.2017.179

Funding

  1. MINECO/FEDER, UE [DPI2014-61792-EXP, DPI2015-65962-R]
  2. MINECO [BES-2013-065834]

Abstract

Recent research demonstrates that visual prostheses can restore a degree of visual perception to people with certain forms of blindness. In visual prostheses, image information from the scene is transformed into a phosphene pattern that is sent to the implant. This is a complex problem whose main challenge is the very limited spatial and intensity resolution. Moreover, depth perception, which is essential for agile navigation, is lost, and encoding semantic information into phosphene patterns remains an open problem. In this work, we consider perception for navigation, where aspects such as obstacle avoidance are critical. We propose using a head-mounted RGB-D camera to detect free space, obstacles, and the scene direction in front of the user. The main contribution is a new approach to representing depth information and providing motion cues through specific phosphene patterns. The effectiveness of this approach is tested in simulation with real data from indoor environments.
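
To make the depth-to-phosphene idea concrete, the sketch below shows a minimal, hypothetical simulated-phosphene rendering step: a depth image is downsampled to a coarse grid and each cell is drawn as a Gaussian "phosphene" whose brightness increases for nearer surfaces. The grid size, brightness mapping, and blob model are illustrative assumptions for this sketch; they are not the encoding proposed in the paper, which additionally uses free-space detection and motion cues.

```python
# Illustrative sketch only: map a depth image (meters) to a coarse grid of
# Gaussian "phosphenes", with nearer surfaces rendered brighter. All parameter
# choices here are assumptions for demonstration, not the paper's method.
import numpy as np

def depth_to_phosphenes(depth, grid=(20, 20), max_depth=5.0,
                        out_shape=(240, 240), sigma=4.0):
    """Render a depth map as a simulated phosphene image with values in [0, 1]."""
    h, w = depth.shape
    gh, gw = grid
    out = np.zeros(out_shape, dtype=np.float32)

    # Pixel coordinate grids for drawing Gaussian blobs in the output image.
    ys, xs = np.mgrid[0:out_shape[0], 0:out_shape[1]]

    for i in range(gh):
        for j in range(gw):
            # Average depth inside the corresponding cell of the input image.
            cell = depth[i * h // gh:(i + 1) * h // gh,
                         j * w // gw:(j + 1) * w // gw]
            d = np.nanmean(cell)
            if not np.isfinite(d):
                continue  # no valid depth data in this cell
            # Nearer surfaces -> brighter phosphene (simple linear mapping).
            level = np.clip(1.0 - d / max_depth, 0.0, 1.0)
            # Phosphene center in the output image.
            cy = (i + 0.5) * out_shape[0] / gh
            cx = (j + 0.5) * out_shape[1] / gw
            blob = level * np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2)
                                  / (2 * sigma ** 2))
            out = np.maximum(out, blob)
    return out

# Example with synthetic data: a flat background at 4 m and a near obstacle at 1 m.
depth = np.full((480, 640), 4.0, dtype=np.float32)
depth[200:400, 250:400] = 1.0
image = depth_to_phosphenes(depth)
```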

