ORCID Profile
0000-0002-7703-5194
Current Organisations
NTNU, University of York
Publisher: Society for Imaging Science & Technology
Date: 29-01-2017
Publisher: Association for Computing Machinery (ACM)
Date: 06-10-2017
DOI: 10.1145/3132187
Abstract: We present a fast, novel image-based technique for reverse engineering woven fabrics at a yarn level. These models can be used in a wide range of interior design and visual special effects applications. To recover our pseudo-Bidirectional Texture Function (BTF), we estimate the three-dimensional (3D) structure and a set of yarn parameters (e.g., yarn width, yarn crossovers) from spatial and frequency domain cues. Drawing inspiration from previous work [Zhao et al. 2012], we solve for the woven fabric pattern and from this build a dataset. In contrast, however, we use a combination of image space analysis and frequency domain analysis, and, in challenging cases, match image statistics with those from previously captured known patterns. Our method determines, from a single digital image, captured with a digital single-lens reflex (DSLR) camera under controlled uniform lighting, the woven cloth structure, depth, and albedo, thus removing the need for separately measured depth data. The focus of this work is on the rapid acquisition of woven cloth structure and therefore we use standard approaches to render the results. Our pipeline first estimates the weave pattern, yarn characteristics, and noise statistics using a novel combination of low-level image processing and Fourier analysis. Next, we estimate a 3D structure for the fabric sample using a first-order Markov chain and our estimated noise model as input, also deriving a depth map and an albedo. Our volumetric textile model includes information about the 3D path of the center of the yarns, their variable width, and hence the volume occupied by the yarns, and colors. We demonstrate the efficacy of our approach through comparison images of test scenes rendered using (a) the original photograph, (b) the segmented image, (c) the estimated weave pattern, and (d) the rendered result.
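The abstract's first pipeline stage relies on Fourier analysis of the fabric photograph to recover yarn periodicity. The following is a minimal sketch, not the paper's implementation, of how the dominant warp/weft yarn periods could be read off peaks in a 2D Fourier spectrum; the function name estimate_yarn_periods and the synthetic test pattern are illustrative assumptions.

```python
import numpy as np

def estimate_yarn_periods(gray_image: np.ndarray) -> tuple:
    """Return approximate (horizontal, vertical) yarn periods in pixels.

    gray_image: 2D float array, a grayscale fabric photo captured under
    roughly uniform lighting (assumption for this sketch).
    """
    h, w = gray_image.shape
    # Subtract the mean so the zero-frequency (DC) peak does not dominate.
    centered = gray_image - gray_image.mean()
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(centered)))

    cy, cx = h // 2, w // 2
    # Magnitude along the horizontal frequency axis -> spacing of vertical yarns.
    horiz = spectrum[cy, cx + 1:]
    # Magnitude along the vertical frequency axis -> spacing of horizontal yarns.
    vert = spectrum[cy + 1:, cx]

    fx = np.argmax(horiz) + 1   # dominant frequency in cycles per image width
    fy = np.argmax(vert) + 1    # dominant frequency in cycles per image height
    return w / fx, h / fy       # periods in pixels

if __name__ == "__main__":
    # Synthetic plain-weave-like pattern with known periods of 16 and 12 pixels.
    y, x = np.mgrid[0:256, 0:256]
    fabric = 0.5 + 0.25 * np.sin(2 * np.pi * x / 16) + 0.25 * np.sin(2 * np.pi * y / 12)
    print(estimate_yarn_periods(fabric))   # approximately (16.0, 12.0)
```

In practice the paper combines this kind of frequency-domain cue with low-level image-space analysis and, for difficult samples, matching against statistics of known patterns; the sketch above only illustrates the periodicity-from-spectrum idea.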
Publisher: Wiley
Date: 05-2016
DOI: 10.1111/CGF.12867
Publisher: ACM
Date: 17-11-2019
Publisher: ACM
Date: 28-11-2016
Publisher: ACM
Date: 30-07-2017
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 06-2020
Location: United Kingdom of Great Britain and Northern Ireland
No related grants have been discovered for Giuseppe Claudio Guarnera.