diff --git a/README.md b/README.md
index faed8e7..21f27e5 100644
--- a/README.md
+++ b/README.md
@@ -310,6 +310,7 @@ A curated list of awesome AIGC 3D papers, inspired by [awesome-NeRF](https://git
 - [4DGen: Grounded 4D Content Generation with Spatial-temporal Consistency](https://arxiv.org/abs/2312.17225), Yin et al., arxiv 2023 | [github](https://github.com/VITA-Group/4DGen) | [bibtext](./citations/4dgen.txt)
 - [DreamGaussian4D: Generative 4D Gaussian Splatting](https://arxiv.org/abs/2312.17142), Ren et al., arxiv 2023 | [github](https://github.com/jiawei-ren/dreamgaussian4d) | [bibtext](./citations/dreamgaussian4d.txt)
 - [Fast Dynamic 3D Object Generation from a Single-view Video](https://arxiv.org/abs/2401.08742), Pan et al., arxiv 2024 | [github](https://github.com/fudan-zvg/Efficient4D) | [bibtext](./citations/efficient4d.txt)
+- [ComboVerse: Compositional 3D Assets Creation Using Spatially-Aware Diffusion Guidance](https://arxiv.org/abs/2403.12409), Chen et al., arxiv 2024 | [bibtext](./citations/comboVerse.txt)
 - [STAG4D: Spatial-Temporal Anchored Generative 4D Gaussians](https://arxiv.org/abs/2403.14939), Zeng et al., arxiv 2024 | [bibtext](./citations/stag4d.txt)
 - [TC4D: Trajectory-Conditioned Text-to-4D Generation](https://arxiv.org/abs/2403.17920), Bahmani et al., arxiv 2024 | [bibtext](./citations/tc4d.txt)
 - [Diffusion^2: Dynamic 3D Content Generation via Score Composition of Orthogonal Diffusion Models](https://arxiv.org/abs/2404.02148), Yang et al., arxiv 2024 | [bibtext](./citations/diffusion^2.txt)
diff --git a/citations/comboVerse.txt b/citations/comboVerse.txt
new file mode 100644
index 0000000..0c8642c
--- /dev/null
+++ b/citations/comboVerse.txt
@@ -0,0 +1,6 @@
+@article{chen2024comboverse,
+  title={Comboverse: Compositional 3d assets creation using spatially-aware diffusion guidance},
+  author={Chen, Yongwei and Wang, Tengfei and Wu, Tong and Pan, Xingang and Jia, Kui and Liu, Ziwei},
+  journal={arXiv preprint arXiv:2403.12409},
+  year={2024}
+}
\ No newline at end of file