CEM4DGS: Clustered Error Correction with Grouped 4D Gaussian Splatting

SIGGRAPH Asia 2025

Taeho Kang1, Jaeyeon Park1, Kyungjin Lee1, Youngki Lee1

1Seoul National University

(Results figure)

Abstract

Existing 4D Gaussian Splatting (4DGS) methods struggle to accurately reconstruct dynamic scenes, often failing to resolve ambiguous pixel correspondences and to densify dynamic regions adequately. We address these issues with a novel method composed of two key components: (1) Elliptical Error Clustering and Error-Correcting Splat Addition, which pinpoint dynamic areas to improve and initialize fitting splats, and (2) Grouped 4D Gaussian Splatting, which improves the consistency of the mapping between splats and the dynamic objects they represent. Specifically, we classify rendering errors into missing-color and occlusion types, then apply targeted corrections via backprojection or foreground splitting guided by cross-view color consistency. Evaluations on the Neural 3D Video and Technicolor datasets demonstrate that our approach significantly improves temporal consistency and achieves state-of-the-art perceptual rendering quality, improving PSNR by 0.39 dB on the Technicolor Light Field dataset. Our visualizations show improved alignment between splats and dynamic objects, as well as the error correction method's ability to identify errors and properly initialize new splats. Implementation details and source code are available at https://github.com/tho-kn/cem-4dgs.

Method Overview

CEM4DGS extends 4D Gaussian Splatting with the following key contributions:

  • Clustered Error Correction: A novel error correction mechanism that groups erroneous pixels into elliptical clusters and applies targeted corrections via backprojection or foreground splitting guided by cross-view color consistency.
  • Grouped 4D Gaussians: A group-based dynamic representation that gradually splits groups based on each splat's deviation from its group's motion, enabling efficient motion modeling.
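As a rough illustration of these two ideas, here is a hypothetical sketch (not the paper's implementation; the function names, the plain covariance-based ellipse fit, and the deviation threshold criterion are all assumptions): a cluster of erroneous pixels can be summarized by an ellipse derived from the 2×2 covariance of its pixel coordinates, and splats can be flagged for splitting off from a group when their motion deviates from the group mean.

```python
import math

def fit_error_ellipse(pixels):
    """Summarize a cluster of erroneous pixel coordinates as an ellipse:
    returns (center, (major, minor) std-dev axes, orientation in radians).
    Hypothetical sketch of an elliptical cluster fit via the 2x2
    covariance of the pixel coordinates."""
    n = len(pixels)
    mx = sum(x for x, _ in pixels) / n
    my = sum(y for _, y in pixels) / n
    sxx = sum((x - mx) ** 2 for x, _ in pixels) / n
    syy = sum((y - my) ** 2 for _, y in pixels) / n
    sxy = sum((x - mx) * (y - my) for x, y in pixels) / n
    # Closed-form eigenvalues of the symmetric 2x2 covariance matrix
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    l1, l2 = tr / 2.0 + disc, tr / 2.0 - disc
    angle = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    return (mx, my), (math.sqrt(l1), math.sqrt(max(l2, 0.0))), angle

def splats_to_split(motions, threshold):
    """Indices of splats whose motion vector deviates from the group's
    mean motion by more than `threshold` (assumed split criterion)."""
    dim = len(motions[0])
    mean = [sum(m[k] for m in motions) / len(motions) for k in range(dim)]
    deviating = []
    for i, m in enumerate(motions):
        dev = math.sqrt(sum((m[k] - mean[k]) ** 2 for k in range(dim)))
        if dev > threshold:
            deviating.append(i)
    return deviating
```

For instance, a group whose splats mostly translate along one axis but contains one splat moving much faster would flag that splat, which could then seed a new group with its own motion.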

Installation

Prerequisites

  • Python 3.9+
  • CUDA-compatible GPU (tested on RTX A6000)
  • PyTorch 1.12+ with CUDA support (tested with CUDA 11.8)

Environment Setup

  1. Clone the repository:

     git clone https://github.com/tho-kn/CEM4DGS.git
     cd CEM4DGS

  2. Create and activate a conda environment:

     conda create -n CEM4DGS python=3.9
     conda activate CEM4DGS

  3. Install PyTorch (adjust the CUDA version as needed):

     pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

  4. Install PyTorch3D (required): Follow the official installation guide. For example (ensure system dependencies like build-essential are installed):

     pip install "git+https://github.com/facebookresearch/pytorch3d.git"

  5. Install the remaining dependencies:

     pip install --upgrade setuptools cython wheel
     pip install -r requirements.txt

Docker Usage (Recommended)

We provide a Docker implementation to ensure reproducibility and compatibility, especially for newer GPUs (e.g., NVIDIA Blackwell).

Prerequisites

  • Docker
  • NVIDIA Container Toolkit

Running the Docker Container

We provide a helper script scripts/run_docker.sh that handles user permissions, volume mounting, and GPU selection.

  1. Build and Start: To launch an interactive container:

    chmod +x scripts/run_docker.sh
    ./scripts/run_docker.sh
    # This will build the image (if not present) and drop you into a shell
  2. Environment Variables: You can configure the session using environment variables:

    • GPU_ID: Specific GPU index to use (default: all).
    • DATASET_PATH: Host path to your dataset (default: /data/projects/nerf_data).

    Example:

    GPU_ID=0 DATASET_PATH=/path/to/my/data ./scripts/run_docker.sh

Training inside Docker

Once inside the container, you can run training commands as usual. The dataset is mounted at /data.

# Example training command inside Docker
python train.py --config configs/techni_s1/train.json \
                --model_path output/test_experiment \
                --source_path /data/Technicolor/Trains

Dataset Preparation

We support two main datasets following the preprocessing pipeline from STG:

Neural 3D Video Dataset

  1. Download the dataset from the official repository.

  2. Set up the preprocessing environment:

     ./scripts/env_setup.sh

  3. Preprocess all sequences:

     ./scripts/preprocess_all_n3v.sh <path_to_dataset>

Technicolor Dataset

  1. Download from the official source. You need to request permission to access the dataset.

  2. Preprocess sequences:

./scripts/preprocess_all_techni.sh <path_to_dataset>

Training

Script for Training a Single Scene

To train on a single scene, run step 1 first, then run step 2 from the step-1 checkpoint:

python train.py --config configs/<dataset>_s1/<scene_config>.json \
                --model_path <output_directory> \
                --source_path <path_to_preprocessed_scene>
python train.py --config configs/<dataset>_s2/<scene_config>.json \
                --model_path <output_directory> \
                --source_path <path_to_preprocessed_scene> \
                --start_checkpoint <path_to_step1_checkpoint>

Configuration Files

We provide pre-configured training settings for different datasets:

  • configs/N3V_s1/: Neural 3D Video dataset (Step 1)
  • configs/N3V_s2/: Neural 3D Video dataset (Step 2)
  • configs/techni_s1/: Technicolor dataset (Step 1)
  • configs/techni_s2/: Technicolor dataset (Step 2)

Evaluation

Rendering and Metrics

Evaluate trained models:

python render.py --model_path <path_to_trained_model> \
                 --source_path <path_to_dataset> \
                 --skip_train \
                 --iteration <checkpoint_iteration>
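Among the metrics computed during evaluation, PSNR is the one quoted in the abstract (+0.39 dB on Technicolor). For reference, a minimal PSNR computation over images normalized to [0, 1] is simply 10·log10(max_val²/MSE):

```python
import math

def psnr(img_a, img_b, max_val=1.0):
    """Peak signal-to-noise ratio between two images given as nested lists
    (rows of pixel intensities in [0, max_val]). Higher is better."""
    se, n = 0.0, 0
    for row_a, row_b in zip(img_a, img_b):
        for a, b in zip(row_a, row_b):
            se += (a - b) ** 2
            n += 1
    mse = se / n
    return float("inf") if mse == 0.0 else 10.0 * math.log10(max_val ** 2 / mse)
```

Since MSE scales as 10^(−PSNR/10), a 0.39 dB gain corresponds to roughly a 9% reduction in mean squared error.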

Pre-trained Models

Pre-trained models for various scenes are available in our Releases.

Download and extract to use with the evaluation scripts above.

Acknowledgments

This work builds upon several great open-source projects; we thank their authors for making their code publicly available.

Citation

If you find this work useful for your research, please cite:

@inproceedings{kang2025cem4dgs,
  title={Clustered Error Correction with Grouped 4D Gaussian Splatting},
  author={Kang, Taeho and Park, Jaeyeon and Lee, Kyungjin and Lee, Youngki},
  booktitle={SIGGRAPH Asia 2025 Conference Papers},
  year={2025}
}
