Transformer with Implicit Edges for Particle-based Physics Simulation

ECCV 2022

Paper

Abstract

Particle-based systems provide a flexible and unified way to simulate physics systems with complex dynamics. Most existing data-driven simulators for particle-based systems adopt graph neural networks (GNNs) as their network backbones, as particles and their interactions can be naturally represented by graph nodes and graph edges. However, while particle-based systems usually contain hundreds or even thousands of particles, explicitly modeling particle interactions as graph edges inevitably incurs significant computational overhead due to the large number of interactions. Consequently, in this paper we propose a novel Transformer-based method, dubbed Transformer with Implicit Edges (TIE), to capture the rich semantics of particle interactions in an edge-free manner. The core idea of TIE is to decentralize the computation of pair-wise particle interactions into per-particle updates. This is achieved by adjusting the self-attention module to resemble the update formula of graph edges in GNNs. To improve the generalization ability of TIE, we further equip it with learnable material-specific abstract particles to disentangle global material-wise semantics from local particle-wise semantics. We evaluate our model on diverse domains of varying complexity and materials. Compared with existing GNN-based methods, without bells and whistles, TIE achieves superior performance and generalization across all these domains.

We present rollouts on the BoxBath domain. TIE with abstract particles is marked by +. Our models predict more faithful rollouts, not only in the posture and position of the rigid cube but also in the vivid and smooth fluid dynamics.

The TIE Framework

The proposed framework includes three components:

  1. Implicit edge modeling, which decomposes the edge update formula in GNNs into a combination of receiver and sender tokens.
  2. Decentralization of the receiver and sender tokens in the attention module to recover richer interaction semantics.
  3. Trainable abstract particles that disentangle global material-specific semantics from local particle-wise semantics (see the sketch after this list).
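
Component 3 can be pictured concretely: each material contributes one learnable token that joins the particle tokens in the attention sequence, carrying global material-wise semantics separately from per-particle states. Below is a minimal sketch under our own naming, not the released implementation:

import torch
import torch.nn as nn

class AbstractParticles(nn.Module):
    # Sketch: one learnable token per material, prepended to the particle
    # tokens. Names and shapes are illustrative, not the authors' code.
    def __init__(self, num_materials, dim):
        super().__init__()
        self.tokens = nn.Parameter(torch.randn(num_materials, dim))

    def forward(self, particle_tokens, material_ids):
        # particle_tokens: (N, dim); material_ids: indices of the materials
        # present in the current scene, e.g. torch.tensor([0, 1]).
        abstract = self.tokens[material_ids]                   # (M, dim)
        return torch.cat([abstract, particle_tokens], dim=0)   # (M + N, dim)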

We illustrate the propagation of edges in GNNs and in TIE, where explicit or implicit edges are shown in red boxes. The MLP in each layer is split into square blocks followed by summations; different MLP blocks are drawn in different colors. The key idea of TIE is to replace explicit edges with receiver tokens and sender tokens. Considering only the trainable weights of each MLP, the sum of the receiver and sender tokens within a red box equals the edge at the same depth.
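
This equivalence is easy to verify for a single linear edge layer: splitting the weight matrix into receiver and sender halves turns one O(E) edge computation into two O(N) per-particle tokens whose sum reproduces the explicit edge exactly. A minimal sketch, with illustrative names and dimensions:

import torch
import torch.nn as nn

dim = 64                                     # hypothetical feature size
edge_mlp = nn.Linear(2 * dim, dim)           # GNN edge layer: e_ij = W [h_i; h_j] + b

h_i = torch.randn(dim)                       # receiver particle feature
h_j = torch.randn(dim)                       # sender particle feature

# Explicit edge: one vector per interaction, O(E) memory overall.
e_explicit = edge_mlp(torch.cat([h_i, h_j]))

# Implicit edge: split W into receiver and sender halves, W = [W_r | W_s].
W_r = edge_mlp.weight[:, :dim]
W_s = edge_mlp.weight[:, dim:]
recv_token = h_i @ W_r.t()                    # per-particle receiver token, O(N)
send_token = h_j @ W_s.t() + edge_mlp.bias    # per-particle sender token, O(N)

# Their sum reproduces the explicit edge exactly.
assert torch.allclose(e_explicit, recv_token + send_token, atol=1e-6)

Inside TIE's attention, these per-particle tokens are operated on directly, so the (N, N, dim) tensor of explicit edges is never materialized.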

Experimental Results

Table 1: We report M3SE (x1e-2) results on four base domains, keeping the number of parameters similar across models. TIE achieves superior performance on all domains without incurring significant computational overhead. Adding trainable abstract particles (TIE+, marked by +) further improves performance on RiceGrip and BoxBath, which involve complex deformations and multi-material interactions, respectively.

Methods       FluidFall           FluidShake          RiceGrip            BoxBath
              M3SE       #Para    M3SE       #Para    M3SE       #Para    M3SE       #Para
DPI-Net       0.08±0.05  0.61M    1.38±0.45  0.62M    0.13±0.09  1.98M    1.33±0.29  1.98M
CConv         0.08±0.02  0.84M    1.41±0.46  0.84M    N/A        N/A      N/A        N/A
GNS           0.09±0.02  0.70M    1.66±0.37  0.70M    0.40±0.16  0.71M    1.56±0.23  0.70M
GraphTrans    0.04±0.01  0.77M    1.36±0.37  0.77M    0.12±0.11  0.78M    1.27±0.25  0.77M
TIE (Ours)    0.04±0.01  0.77M    1.22±0.37  0.77M    0.13±0.12  0.78M    1.35±0.35  0.77M
TIE+ (Ours)   0.04±0.00  0.77M    1.30±0.41  0.77M    0.08±0.08  0.78M    0.92±0.16  0.77M

Comparisons of Forward Time

We report each model's average training time per iteration. The batch size in (a) and (b) is set to 1, while in (c) and (d) it is set to 4. As the number of interactions increases, the time cost of TIE+ remains stable, while the other models take longer to train due to the computational overhead introduced by the extra interactions.
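
For reference, per-iteration time of this kind is typically measured along the following lines; the model and batch here are placeholders, not the paper's benchmark code:

import time
import torch

def avg_iter_time(model, batch, iters=100):
    # Rough per-iteration forward timing. `model` and `batch` are
    # placeholders assumed to live on the same device; CUDA synchronization
    # ensures asynchronous kernels are fully counted.
    model.eval()
    with torch.no_grad():
        for _ in range(10):                  # warm-up iterations
            model(batch)
        if torch.cuda.is_available():
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            model(batch)
        if torch.cuda.is_available():
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters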


Citation

@InProceedings{shao2022transformer,
  author    = {Shao, Yidi and Loy, Chen Change and Dai, Bo},
  title     = {Transformer with Implicit Edges for Particle-based Physics Simulation},
  booktitle = {Computer Vision - {ECCV} 2022 - 17th European Conference},
  year      = {2022}
}

Contact


Shao Yidi
Email: yidi001 at e.ntu.edu.sg