
Neural Garment Dynamics via Manifold-Aware Transformers


This repository provides the implementation of manifold-aware transformers, a novel neural network architecture for predicting the dynamics of garments. It is based on our paper Neural Garment Dynamics via Manifold-Aware Transformers, published at EUROGRAPHICS 2024.

Prerequisites

This code has been tested on Ubuntu 20.04. Before starting, please configure your Anaconda environment by running:

conda env create -f environment.yml
conda activate manifold-aware-transformers

Alternatively, you may install the following packages (and their dependencies) manually:

  • numpy == 1.23.1 (note: numpy.bool is deprecated in newer versions, which causes an error when loading the SMPL model)
  • pytorch == 2.0.1
  • scipy >= 1.10.1
  • cholespy == 1.0.0
  • scikit-sparse == 0.4.4
  • libigl == 2.4.1
  • tensorboard >= 2.12.1
  • tqdm >= 4.65.0
  • chumpy == 0.70

Quick Start

We provide several pre-trained models trained on different datasets. Download the pre-trained models and the example sequences from Google Drive, extract them, and place them in the pre-trained and data directories, respectively, directly under the project root.
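After extraction, the project root should look as follows (the two directory names are as prescribed above; the annotations are illustrative):

manifold-aware-transformers/
├── pre-trained/   (extracted pre-trained models)
└── data/          (extracted example sequences)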

Garment Prediction

Run demo.sh.

The network's predictions are stored in [path to pre-trained model]/sequence/prediction.pkl. The corresponding ground truth and body motion are stored in gt.pkl and body.pkl, respectively. Please refer to the section below for the specification and visualization of the predicted mesh sequences.

Mesh Sequence Format and Visualization

We use a custom format to store a sequence of meshes. The specific format can be found in the function write_vert_pos_pickle() in mesh_utils.py.
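If you want to peek at one of these .pkl files before setting up the Blender visualization below, a generic inspection is enough. The exact layout is defined by write_vert_pos_pickle(), so treat this as an exploratory sketch rather than a parser; the file path is an example:

import pickle

# Load a stored mesh sequence and report its top-level structure.
with open('prediction.pkl', 'rb') as f:
    data = pickle.load(f)

print(type(data))
if isinstance(data, dict):
    for key, value in data.items():
        # Arrays report their shape; other values report their type.
        print(key, getattr(value, 'shape', type(value)))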

We provide a plugin for visualizing the mesh sequences directly in Blender here. It is based on the STOP-motion-OBJ plugin by @neverhood311.

Evaluation

We provide a small sample of the pre-processed VTO and CLOTH3D datasets for reproducing our quantitative evaluations. Please download and extract the sample data to the data directory directly under the project root.

Use the following command to calculate the mean vertex error of the pre-trained models on the VTO dataset:

python evaluate.py --dataset=vto

and for the CLOTH3D dataset:

python evaluate.py --dataset=cloth3d

Due to the nondeterministic algorithms used in PyTorch, the results may differ between runs, and may also differ slightly from the numbers reported in the paper.
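For reference, a mean vertex error of this kind is conventionally the per-vertex Euclidean distance between predicted and ground-truth positions, averaged over all vertices and frames. Below is a minimal sketch of that metric; it is not the repository's evaluate.py, and the tensor layout is an assumption:

import torch

def mean_vertex_error(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    # pred, gt: (num_frames, num_vertices, 3) vertex positions.
    # Euclidean distance per vertex per frame, averaged over everything.
    return (pred - gt).norm(dim=-1).mean()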

Data Preprocessing

The input garment geometry is decimated to improve runtime efficiency. A separate module implemented in C++ is required for this step.

Our model requires the signed distance from the garment geometry to the body geometry as input. At inference time, it is calculated on the fly using a highly optimized GPU implementation.

The current inference implementation requires the same data format, so you may use the following steps to preprocess data for inference as well.
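For intuition, here is a CPU sketch of such a signed-distance input using libigl (already listed in the prerequisites). The repository itself uses a GPU implementation, and the argument names here are illustrative placeholders:

import igl

def garment_to_body_sdf(garment_verts, body_verts, body_faces):
    # garment_verts: (N, 3) float array of query points on the garment.
    # body_verts: (M, 3) float array; body_faces: (F, 3) int array.
    # igl.signed_distance returns (distances, closest-face ids, closest points).
    sdf, _, _ = igl.signed_distance(garment_verts, body_verts, body_faces)
    return sdf  # negative inside the body, positive outside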

VTO dataset

Please download the VTO dataset from here, and run the following command to preprocess the data:

python parse_data_vto.py --data_path_prefix=[path to downloaded vto dataset] --save_path=[path to save the preprocessed data]

CLOTH3D dataset

Please download the CLOTH3D dataset from here, put the DataReader directory from the official starter kit under data/cloth3d, and run the following command to preprocess the data:

python parse_data_cloth3d.py --data_path_prefix=[path to downloaded cloth3d dataset] --save_path=[path to save the preprocessed data]

Train from scratch

Training the model requires the preprocessed data. If you are using a different dataset, please modify the preprocessing script to adapt it to your dataset.

Although the dataset is preprocessed, additional precomputation is carried out before training starts. Since the datasets are large, it is generally not possible to keep all precomputed results in memory; instead, they are stored in the directory pointed to by the environment variable $TMPDIR. We recommend setting this path to a high-speed SSD.
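For example, assuming a fast SSD mounted at /path/to/fast/ssd (an illustrative path), the cache location can be set when launching training:

TMPDIR=/path/to/fast/ssd python train_frame_based.py --save_path=[path to save] --multiple_dataset=[path to sequence list]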

Please list the sequences used for training in a text file, as demonstrated by the example files referenced in the commands below.
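The files dataset/sequence_lists/vto-training-example.txt and cloth3d-training-example.txt shipped with the repository are the authoritative examples of this format. Purely as a hypothetical illustration, such a list enumerates preprocessed sequences, e.g.:

data/vto/sequence_001
data/vto/sequence_002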

VTO dataset

Use the following command to train the model on the VTO dataset:

python train_frame_based.py --save_path=[path to save] --multiple_dataset=dataset/sequence_lists/vto-training-example.txt

CLOTH3D dataset

Use the following command to train the model on the CLOTH3D dataset:

python train_frame_based.py --save_path=[path to save] --multiple_dataset=dataset/sequence_lists/cloth3d-training-example.txt --gaussian_filter_sigma=1 --slowdown_ratio=2

The CLOTH3D dataset is known to contain noisy simulation results. We mitigate this issue by applying a temporal smoothing filter to the ground-truth data, enabled with the --gaussian_filter_sigma=1 option. In addition, the dataset is generated at 60 FPS and is downsampled to 30 FPS with the --slowdown_ratio=2 option.
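For intuition, the two mitigations amount to the following, sketched here with scipy (a prerequisite). The training script applies them internally via the flags above, so this code is illustrative rather than part of the pipeline:

from scipy.ndimage import gaussian_filter1d

def smooth_and_downsample(verts, sigma=1.0, slowdown_ratio=2):
    # verts: (num_frames, num_vertices, 3) ground-truth vertex positions.
    # Temporal Gaussian smoothing along the frame axis (--gaussian_filter_sigma).
    smoothed = gaussian_filter1d(verts, sigma=sigma, axis=0)
    # Keep every slowdown_ratio-th frame: 60 FPS -> 30 FPS (--slowdown_ratio=2).
    return smoothed[::slowdown_ratio]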

Acknowledgments

The code in dataset/smpl.py is adapted from SMPL by @CalciferZh.

Citation

If you use this code for your research, please cite our paper:

@article{Li2024NeuralGarmentDynamics,
  author  = {Li, Peizhuo and Wang, Tuanfeng Y. and Kesdogan, Timur Levent and Ceylan, Duygu and Sorkine-Hornung, Olga},
  title   = {Neural Garment Dynamics via Manifold-Aware Transformers},
  journal = {Computer Graphics Forum (Proceedings of EUROGRAPHICS 2024)},
  volume  = {43},
  number  = {2},
  year    = {2024},
}
