Zero-1-to-3: Zero-shot One Image to 3D Object (ICCV 2023)
[ICLR 2024 Spotlight] SyncDreamer: Generating Multiview-consistent Images from a Single-view Image
[arXiv'24] VistaDream: Sampling multiview consistent images for single-view scene reconstruction
Metrical Monocular Photometric Tracker [ECCV 2022]
We evaluate our method on several datasets (ShapeNet, CUB-200-2011, and Pascal3D+) and achieve state-of-the-art results, outperforming other supervised and unsupervised methods and 3D representations in both accuracy and training time.
(ECCV 2022) Code for Share With Thy Neighbors: Single-View Reconstruction by Cross-Instance Consistency
A diffuser implementation of Zero123. Zero-1-to-3: Zero-shot One Image to 3D Object (ICCV23)
[CVPR 2024] EpiDiff: Enhancing Multi-View Synthesis via Localized Epipolar-Constrained Diffusion
Topologically-Aware Deformation Fields for Single-View 3D Reconstruction (CVPR 2022)
[CVPR 2024] Official implementation of Morphable Diffusion: 3D-Consistent Diffusion for Single-image Avatar Creation
Implementation of CVPR'23: Learning 3D Scene Priors with 2D Supervision
Code and datasets for TPAMI 2021 "SkeletonNet: A Topology-Preserving Solution for Learning Mesh Reconstruction of Object Surfaces from RGB Images"
Neural Reflectance Field from Shading and Shadow under a Fixed Viewpoint
NU-MCC: Multiview Compressive Coding with Neighborhood Decoder and Repulsive UDF
[AAAI 2023] Official implementation of "Occupancy Planes for Single-view RGB-D Human Reconstruction"
BuilDiff: 3D Building Shape Generation using Single-Image Conditional Point Cloud Diffusion Models
GAMesh: Guided and Augmented Meshing for Deep Point Networks
[ECCV 2024] SUP-NeRF: A Streamlined Unification of Pose Estimation and NeRF for Monocular 3D Object Reconstruction
Exploring the types of losses and decoder functions for regressing to voxels, point clouds, and mesh representations from single view RGB input.
A new method to preprocess ShapeNet for minimal-shift 3D ground truth; a three-stage single-view 3D reconstruction method; point-cloud surface reconstruction without input normals.