
👀 The extension paper has been accepted to IEEE T-PAMI! (Paper)

👀 We are now working to make this method available as a more generic, easy-to-use function (flow = useful_function(events)). Stay tuned!

Secrets of Event-Based Optical Flow (T-PAMI 2024, ECCV 2022)

This is the official repository for Secrets of Event-Based Optical Flow, ECCV 2022 Oral by
Shintaro Shiba, Yoshimitsu Aoki and Guillermo Gallego.

We have extended this paper to a journal version: Secrets of Event-based Optical Flow, Depth and Ego-motion Estimation by Contrast Maximization, IEEE T-PAMI 2024.

Secrets of Event-Based Optical Flow

If you use this work in your research, please cite it (see also here):

@Article{Shiba24pami,
  author        = {Shintaro Shiba and Yannick Klose and Yoshimitsu Aoki and Guillermo Gallego},
  title         = {Secrets of Event-based Optical Flow, Depth, and Ego-Motion Estimation by Contrast Maximization},
  journal       = {IEEE Trans. Pattern Anal. Mach. Intell. (T-PAMI)},
  year          = 2024,
  pages         = {1--18},
  doi           = {10.1109/TPAMI.2024.3396116}
}

@InProceedings{Shiba22eccv,
  author        = {Shintaro Shiba and Yoshimitsu Aoki and Guillermo Gallego},
  title         = {Secrets of Event-based Optical Flow},
  booktitle     = {European Conference on Computer Vision (ECCV)},
  pages         = {628--645},
  doi           = {10.1007/978-3-031-19797-0_36},
  year          = 2022
}

List of datasets on which the flow estimation has been tested

Although this codebase includes only MVSEC examples, I have confirmed that the flow estimation works reasonably well on the datasets below. The list is being updated; if you test new datasets, please let us know.

All of the above are public datasets; in our paper (T-PAMI 2024) we also used some non-public datasets from previous works.


Setup

Requirements

Although not all versions have been strictly tested, the following should work.

  • python: 3.8.x, 3.9.x, 3.10.x

A GPU is entirely optional. If torch.cuda.is_available() returns True, the code automatically switches to the GPU. I'd recommend a GPU for the time-aware methods, but as far as I have tested, the CPU is fine for the non-time-aware method.
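For reference, here is a minimal sketch of that device-selection pattern (illustrative only, not the repository's actual code; the dummy event tensor is a placeholder):

import torch

# Use the GPU automatically when available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Tensors built for the optimization can then be moved to that device,
# e.g. a dummy batch of events with (x, y, t, polarity) columns.
events = torch.rand(10000, 4).to(device)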

Tested environments

  • Mac OS Monterey (both M1 and non-M1)
  • Ubuntu (CUDA 11.1, 11.3, 11.8)
  • PyTorch 1.9-1.12.1, or PyTorch 2.0 (1.13 raises an error during the Burgers' equation computation).

Installation

I strongly recommend using venv: python3 -m venv <new_venv_path>. Alternatively, you can use poetry.

  • Install PyTorch (<1.13 or >=2.0) and torchvision for your environment. Make sure you install the build for the correct CUDA version if you want to use a GPU (see the example after this list).

  • If you use poetry, run poetry install. If you use only venv, check the dependency libraries and install them from here.

  • If you are having trouble installing PyTorch with CUDA using poetry, refer to this link.
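For example, the venv route might look like the following (the path .venv and the version pin are illustrative; pick the wheel that matches your CUDA setup):

python3 -m venv .venv
source .venv/bin/activate
pip install "torch>=2.0" torchvision    # or "torch<1.13" for the older supported range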

Download dataset

Download each dataset into the ./datasets directory. Optionally, you can specify another root directory; please check the dataset readme for details.

Execution

python3 main.py --config_file ./configs/mvsec_indoor_no_timeaware.yaml

If you use poetry, simply add poetry run at the beginning of the command. Run with the -h option to learn about the other options.
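For example, with poetry:

poetry run python3 main.py --config_file ./configs/mvsec_indoor_no_timeaware.yaml

python3 main.py -h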

Config file

The config (.yaml) file specifies various experimental settings. Please check it and change the parameters as you like.
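If you want to inspect a config programmatically before a run, here is a minimal sketch using PyYAML (the printed keys depend on the file; nothing below is specific to this repository's schema):

import yaml

# Load the experiment configuration shipped with the repository.
with open("./configs/mvsec_indoor_no_timeaware.yaml") as f:
    config = yaml.safe_load(f)

# Print the top-level settings to see what can be tuned.
for key, value in config.items():
    print(f"{key}: {value}")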

Optional tasks (for me)

The code here is already runnable and explains the ideas of the paper sufficiently. (Please report bugs if you find any.)

Rather than releasing all of my (sometimes too experimental) code, I have published just a minimal set of the codebase needed to reproduce the results. So the following tasks are optional for me, but if they would help you, I can publish other parts as well. For example:

  • Other data loader

  • Some other cost functions

  • Pretrained model checkpoint file ✔️ released for MVSEC

  • Other solver (especially DNN)

  • The implementation of the Sensors paper

Your feedback helps me prioritize these tasks, so please contact me or raise issues. The code is well modularized, so if you want to contribute, that should be easy too.

Citation

If you use this work in your research, please cite it as stated above.

This code also includes part of the implementation of the following paper about event collapse. Please check it out :)

@Article{Shiba22sensors,
  author        = {Shintaro Shiba and Yoshimitsu Aoki and Guillermo Gallego},
  title         = {Event Collapse in Contrast Maximization Frameworks},
  journal       = {Sensors},
  year          = 2022,
  volume        = 22,
  number        = 14,
  pages         = {1--20},
  article-number= 5190,
  doi           = {10.3390/s22145190}
}

Author

Shintaro Shiba @shiba24

LICENSE

Please check the License file.

Acknowledgement

I appreciate the following repositories for the inspiration:

