Simple item record

dc.contributor.author: Yılmaz, Doğa
dc.contributor.author: Kıraç, Mustafa Furkan
dc.date.accessioned: 2023-08-14T08:38:27Z
dc.date.available: 2023-08-14T08:38:27Z
dc.date.issued: 2023-10
dc.identifier.issn: 0097-8493 [en_US]
dc.identifier.uri: http://hdl.handle.net/10679/8654
dc.identifier.uri: https://www.sciencedirect.com/science/article/pii/S0097849323001310
dc.description.abstract: The realm of 3D computer vision and graphics has experienced exponential growth recently, enabling the creation of realistic virtual environments and digital representations of real-world objects. Central to this progression are 3D reconstruction methods that facilitate the virtualization of shape, color, and surface details of real objects. Current methods predominantly employ neural scene representations, which despite their efficacy, grapple with limitations such as necessitating a high number of captured images and the complexity of transforming these representations into explicit geometric forms. An alternative strategy that has gained traction is the deployment of methods such as physically-based differentiable rendering (PBDR) and inverse rendering. These approaches require fewer viewpoints, yield explicit format results, and ensure a smoother transition to other representation methods. To meaningfully assess the performance of different 3D reconstruction methods, it is imperative to utilize benchmark scenes for comparison. Despite the existence of standard objects and scenes within the literature, there is a noticeable deficiency in real-world benchmark data that concurrently captures camera, illumination, and scene parameters, all critical to high-fidelity 3D reconstructions using PBDR and inverse rendering-based methods. In this research, we introduce a methodology for capturing real-world scenes as virtual scenes, integrating illumination parameters alongside camera and scene parameters to enhance the veracity of virtual representations. In addition, we introduce a set of ten real-world scenes, along with their virtual counterparts, designed as benchmarks. These benchmarks encompass a fundamental variety of geometric constructs, including convex, concave, plain, and mixed surfaces. Additionally, we demonstrate the 3D reconstruction results of state-of-the-art 3D reconstruction methods employing PBDR in real-world scenes, using both established methodologies and our proposed one. [en_US]
dc.language.iso: eng [en_US]
dc.publisher: Elsevier [en_US]
dc.relation.ispartof: Computers and Graphics (Pergamon)
dc.rights: restrictedAccess
dc.title: Illumination-guided inverse rendering benchmark: Learning real objects with few cameras [en_US]
dc.type: Article [en_US]
dc.peerreviewed: yes [en_US]
dc.publicationstatus: Published [en_US]
dc.contributor.department: Özyeğin University
dc.contributor.authorID: (ORCID 0000-0001-9177-0489 & YÖK ID 124619) Kıraç, Furkan
dc.contributor.ozuauthor: Kıraç, Mustafa Furkan
dc.identifier.volume: 115 [en_US]
dc.identifier.startpage: 107 [en_US]
dc.identifier.endpage: 121 [en_US]
dc.identifier.wos: WOS:001045274700001
dc.identifier.doi: 10.1016/j.cag.2023.07.002 [en_US]
dc.subject.keywords: 3D reconstruction [en_US]
dc.subject.keywords: Benchmark [en_US]
dc.subject.keywords: Differentiable rendering [en_US]
dc.subject.keywords: Illumination modeling [en_US]
dc.subject.keywords: Inverse rendering [en_US]
dc.subject.keywords: Signed distance functions [en_US]
dc.identifier.scopus: SCOPUS:2-s2.0-85165366908
dc.contributor.ozugradstudent: Yılmaz, Doğa
dc.relation.publicationcategory: Article - International Refereed Journal - Institutional Academic Staff and Graduate Student


Files in this item


There are no files associated with this item.

