Title | CUDA-Optimized real-time rendering of a foveated visual system |
Publication Type | Conference Paper |
Year of Publication | 2020 |
Authors | Malkin, E, Deza, A, Poggio, T |
Conference Name | Shared Visual Representations in Human and Machine Intelligence (SVRHM) workshop at NeurIPS 2020 |
Date Published | 12/2020 |
Abstract | The spatially-varying field of the human visual system has recently received a resurgence of interest with the development of virtual reality (VR) and neural networks. The computational demands of the high-resolution rendering desired for VR can be offset by savings in the periphery [16], while neural networks trained with foveated input have shown perceptual gains in i.i.d. and o.o.d. generalization [25, 6]. In this paper, we present a technique that exploits the CUDA GPU architecture to efficiently generate Gaussian-based foveated images at high definition (1920 px × 1080 px) in real time (165 Hz), with a number of pooling regions several orders of magnitude larger than in previous Gaussian-based foveation algorithms [10, 25], producing a smoothly foveated image that requires no further blending or stitching and that can be well fit to any contrast sensitivity function. The approach described can be adapted from Gaussian blurring to any eccentricity-dependent image processing, and our algorithm can meet the demand for experimentation to evaluate the role of spatially-varying processing across biological and artificial agents, so that foveation can be added easily on top of existing systems rather than forcing their redesign (“emulated foveated renderer” [22]). Altogether, this paper demonstrates how a GPU, with a CUDA block-wise architecture, can be employed for radially-variant rendering, with opportunities for more complex post-processing to ensure a metameric foveation scheme [33]. |
URL | https://arxiv.org/abs/2012.08655 |
CBMM Relationship:
- CBMM Funded
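
Illustrative sketch (not from the paper): a minimal CUDA kernel for eccentricity-dependent Gaussian blurring in the spirit of the abstract, assuming a simple linear sigma-versus-eccentricity model and a naive per-pixel convolution. All names and parameters here are hypothetical; the authors' renderer uses a block-wise pooling-region scheme rather than this direct filter.

```cuda
// foveate.cu -- minimal sketch of eccentricity-dependent Gaussian blurring.
// Grayscale, single frame, naive direct convolution; a real-time renderer
// would instead exploit shared memory per CUDA block and per-region pooling.
#include <cuda_runtime.h>
#include <math.h>
#include <stdio.h>

__global__ void foveatedBlur(const float* in, float* out,
                             int width, int height,
                             float fx, float fy,    // fixation point (pixels)
                             float sigmaPerPixel,   // hypothetical: blur growth per pixel of eccentricity
                             int maxRadius)         // clamp on kernel radius
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Eccentricity of this pixel and the resulting Gaussian sigma.
    float ecc   = sqrtf((x - fx) * (x - fx) + (y - fy) * (y - fy));
    float sigma = fmaxf(0.3f, ecc * sigmaPerPixel);
    int   r     = min(maxRadius, (int)ceilf(3.0f * sigma));

    float sum = 0.0f, wsum = 0.0f;
    for (int dy = -r; dy <= r; ++dy) {
        for (int dx = -r; dx <= r; ++dx) {
            int sx = min(max(x + dx, 0), width  - 1);  // clamp to image border
            int sy = min(max(y + dy, 0), height - 1);
            float w = expf(-(dx * dx + dy * dy) / (2.0f * sigma * sigma));
            sum  += w * in[sy * width + sx];
            wsum += w;
        }
    }
    out[y * width + x] = sum / wsum;
}

int main() {
    const int W = 1920, H = 1080;
    size_t bytes = (size_t)W * H * sizeof(float);
    float *dIn, *dOut;
    cudaMalloc(&dIn, bytes);
    cudaMalloc(&dOut, bytes);
    cudaMemset(dIn, 0, bytes);  // placeholder frame

    dim3 block(16, 16);
    dim3 grid((W + block.x - 1) / block.x, (H + block.y - 1) / block.y);
    // Fixate at the image center; sigma grows by 0.01 px per pixel of eccentricity.
    foveatedBlur<<<grid, block>>>(dIn, dOut, W, H, W / 2.0f, H / 2.0f, 0.01f, 12);
    cudaDeviceSynchronize();
    printf("kernel status: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaFree(dIn);
    cudaFree(dOut);
    return 0;
}
```

In a full foveation scheme the sigmaPerPixel and maxRadius values above would be chosen to fit a contrast sensitivity function, as the abstract describes.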