DeepFracture: A Generative Approach for Predicting Brittle Fractures

Yuhang Huang and Takashi Kanai

The University of Tokyo

We introduce a novel learning-based approach for generating brittle fracture animations integrated with rigid-body simulations.

Abstract

In the realm of brittle fracture animation, generating realistic destruction animations with physics-based simulation techniques can be computationally expensive. Although methods based on Voronoi diagrams or pre-fractured patterns are fast enough for real-time applications, they often lack realism in portraying brittle fractures.

This paper introduces a novel learning-based approach for seamlessly merging realistic brittle fracture animations with rigid-body simulations. Our method utilizes boundary element method (BEM) brittle fracture simulations to generate fractured patterns and collision conditions for a given shape, which serve as training data for the learning process. To effectively integrate collision conditions and fractured shapes into a deep learning framework, we introduce the concepts of a latent impulse representation and a geometrically segmented signed distance function (GS-SDF). The latent impulse representation serves as the input, capturing information about impact forces on the shape's surface, while the GS-SDF serves as the output representation of the fractured shape.
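To make this interface concrete, below is a minimal sketch (not the authors' implementation) of a network that maps impact features and 3D query points to per-segment signed distances; the class name, feature dimensions, and layer sizes are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class FracturePredictor(nn.Module):
    """Illustrative sketch: impulse features -> latent code -> GS-SDF values."""

    def __init__(self, impulse_dim=16, latent_dim=8, hidden=256, num_segments=8):
        super().__init__()
        # Encode raw impact features (e.g., contact position, direction,
        # magnitude) into a compact latent impulse representation.
        self.impulse_encoder = nn.Sequential(
            nn.Linear(impulse_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim),
        )
        # Decode (query point, latent code) pairs into one signed distance
        # per segment; each channel describes one fragment of the shape.
        self.sdf_decoder = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_segments),
        )

    def forward(self, impulse_feats, query_points):
        z = self.impulse_encoder(impulse_feats)              # (B, latent_dim)
        z = z.unsqueeze(1).expand(-1, query_points.shape[1], -1)
        x = torch.cat([query_points, z], dim=-1)             # (B, N, 3+latent)
        return self.sdf_decoder(x)                           # (B, N, segments)
```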

To address the challenge of optimizing multiple fractured-pattern targets with a single latent code, we incorporate an eight-dimensional latent space based on a normal distribution code into our latent impulse representation design. This adaptation effectively transforms our neural network into a generative model. Our experimental results demonstrate that our approach generates significantly more detailed brittle fractures than existing techniques, while maintaining low computational cost at run-time.
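The following sketch shows how such a generative sampling step could look, assuming the illustrative `FracturePredictor` above: perturbing the latent impulse code with an eight-dimensional sample from a standard normal distribution lets one impact condition yield multiple plausible fracture patterns.

```python
import torch

torch.manual_seed(0)
model = FracturePredictor(impulse_dim=16, latent_dim=8)  # sketch from above
impulse = torch.randn(1, 16)                   # one impact condition
points = torch.rand(1, 4096, 3) * 2.0 - 1.0    # query points in [-1, 1]^3

z_imp = model.impulse_encoder(impulse)         # latent impulse representation
for k in range(3):
    eps = torch.randn(1, 8)                    # 8-D code drawn from N(0, I)
    z = z_imp + eps                            # stochastic latent code
    zq = z.unsqueeze(1).expand(-1, points.shape[1], -1)
    sdf = model.sdf_decoder(torch.cat([points, zq], dim=-1))
    labels = sdf.argmin(dim=-1)                # nearest fragment per point
    print(f"sample {k}: {labels.unique().numel()} fragments touched")
```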

Video

Fracture results are shown for the Bunny, Squirrel, Base, and Pot models.

Animation

Here are animation examples for the Bunny model.


BibTeX

@misc{huang2023deepfracture,
      title={DeepFracture: A Generative Approach for Predicting Brittle Fractures}, 
      author={Yuhang Huang and Takashi Kanai},
      year={2023},
      eprint={2310.13344},
      archivePrefix={arXiv},
      primaryClass={cs.GR}
}