Pytorch implementation of FastNeRF: High-Fidelity Neural Rendering at 200FPS

mrcabellom/fastNerf

FastNeRF implementation

This implementation is based on the paper FastNeRF: High-Fidelity Neural Rendering at 200FPS.

Introduction

Recent work on Neural Radiance Fields (NeRF) showed how neural networks can be used to encode complex 3D environments that can be rendered photorealistically from novel viewpoints. Rendering these images is very computationally demanding, and recent improvements are still a long way from enabling interactive rates, even on high-end hardware. Motivated by scenarios on mobile and mixed reality devices, we propose FastNeRF, the first NeRF-based system capable of rendering high-fidelity photorealistic images at 200Hz on a high-end consumer GPU. The core of our method is a graphics-inspired factorization that allows for (i) compactly caching a deep radiance map at each position in space, and (ii) efficiently querying that map using ray directions to estimate the pixel values in the rendered image.

(Figure: FastNeRF architecture diagram)
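The factorization described above can be sketched as follows. This is a minimal illustrative example, not the repo's actual code: FastNeRF splits the radiance function into a position-dependent network producing a density and D components (u_i, v_i, w_i), and a direction-dependent network producing weights beta_i; the RGB colour is their inner product, so both halves can be cached independently and combined cheaply per ray. The function name, numpy usage, and the toy values below are all assumptions for illustration (the repository itself uses PyTorch).

```python
import numpy as np

def factorized_color(sigma, uvw, beta):
    """Combine cached position- and direction-dependent outputs.

    sigma: density at position p (scalar), from the position network
    uvw:   (D, 3) array of colour components cached at position p
    beta:  (D,) array of direction weights cached for ray direction d

    The colour is the inner product sum_i beta_i * (u_i, v_i, w_i),
    which is what lets both caches be built independently.
    """
    rgb = beta @ uvw  # (D,) x (D, 3) -> (3,)
    return sigma, rgb

# Toy example with D = 2 cached components (illustrative values only)
uvw = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]])
beta = np.array([0.5, 0.5])
sigma, rgb = factorized_color(1.0, uvw, beta)
# rgb -> [0.5, 0.5, 0.0]
```

Because the expensive networks are evaluated once per cached grid cell rather than once per sample along every ray, rendering reduces to two cache lookups and this cheap inner product.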

Getting started

First, install the required dependencies using conda:

```shell
conda env create --file environment.yml
```

Then activate the environment:

```shell
conda activate fastnerf
```

Training and test

Run the following command to train and test the model:

```shell
python main.py
```
