Publication: Illumination-guided inverse rendering benchmark: Learning real objects with few cameras
dc.contributor.author | Yılmaz, Doğa | |
dc.contributor.author | Kıraç, Mustafa Furkan | |
dc.contributor.department | Computer Science | |
dc.contributor.ozuauthor | KIRAÇ, Mustafa Furkan | |
dc.contributor.ozugradstudent | Yılmaz, Doğa | |
dc.date.accessioned | 2023-08-14T08:38:27Z | |
dc.date.available | 2023-08-14T08:38:27Z | |
dc.date.issued | 2023-10 | |
dc.description.abstract | The realm of 3D computer vision and graphics has grown rapidly in recent years, enabling the creation of realistic virtual environments and digital representations of real-world objects. Central to this progress are 3D reconstruction methods that virtualize the shape, color, and surface details of real objects. Current methods predominantly employ neural scene representations, which, despite their efficacy, face limitations such as requiring a large number of captured images and the complexity of converting these representations into explicit geometric forms. An alternative strategy that has gained traction is the use of methods such as physically-based differentiable rendering (PBDR) and inverse rendering. These approaches require fewer viewpoints, yield results in explicit formats, and allow a smoother transition to other representations. To meaningfully assess the performance of different 3D reconstruction methods, it is imperative to compare them on benchmark scenes. Despite the existence of standard objects and scenes in the literature, there is a noticeable lack of real-world benchmark data that concurrently captures camera, illumination, and scene parameters, all of which are critical to high-fidelity 3D reconstruction with PBDR and inverse rendering-based methods. In this research, we introduce a methodology for capturing real-world scenes as virtual scenes, integrating illumination parameters alongside camera and scene parameters to enhance the fidelity of virtual representations. In addition, we introduce a set of ten real-world scenes, along with their virtual counterparts, designed as benchmarks. These benchmarks cover a fundamental variety of geometric constructs, including convex, concave, plain, and mixed surfaces. Additionally, we demonstrate the 3D reconstruction results of state-of-the-art methods employing PBDR on real-world scenes, using both established methodologies and our proposed one. | en_US |
dc.identifier.doi | 10.1016/j.cag.2023.07.002 | en_US |
dc.identifier.endpage | 121 | en_US |
dc.identifier.issn | 0097-8493 | en_US |
dc.identifier.scopus | 2-s2.0-85165366908 | |
dc.identifier.startpage | 107 | en_US |
dc.identifier.uri | http://hdl.handle.net/10679/8654 | |
dc.identifier.uri | https://doi.org/10.1016/j.cag.2023.07.002 | |
dc.identifier.volume | 115 | en_US |
dc.identifier.wos | 001045274700001 | |
dc.language.iso | eng | en_US |
dc.peerreviewed | yes | en_US |
dc.publicationstatus | Published | en_US |
dc.publisher | Elsevier | en_US |
dc.relation.ispartof | Computers and Graphics (Pergamon) | |
dc.relation.publicationcategory | International Refereed Journal | |
dc.rights | info:eu-repo/semantics/restrictedAccess | |
dc.subject.keywords | 3D reconstruction | en_US |
dc.subject.keywords | Benchmark | en_US |
dc.subject.keywords | Differentiable rendering | en_US |
dc.subject.keywords | Illumination modeling | en_US |
dc.subject.keywords | Inverse rendering | en_US |
dc.subject.keywords | Signed distance functions | en_US |
dc.title | Illumination-guided inverse rendering benchmark: Learning real objects with few cameras | en_US |
dc.type | Article | en_US |
dspace.entity.type | Publication | |
relation.isOrgUnitOfPublication | 85662e71-2a61-492a-b407-df4d38ab90d7 | |
relation.isOrgUnitOfPublication.latestForDiscovery | 85662e71-2a61-492a-b407-df4d38ab90d7 |
Files
License bundle (1 - 1 of 1)
- Name: license.txt
- Size: 1.45 KB
- Format: Item-specific license agreed upon submission