Web-based Multi-layered Exploration of Annotated Image-based Shape and Material Models

Alberto Jaspe-Villanueva, Ruggero Pintus, Andrea Giachetti, and Enrico Gobbetti

Eurographics Workshop on Graphics and Cultural Heritage (GCH 2019), Best Paper Award


@inproceedings{Jaspe:2019:WME,
 author = {Alberto Jaspe-Villanueva and Ruggero Pintus and Andrea Giachetti and Enrico Gobbetti},
 title = {Web-based Multi-layered Exploration of Annotated Image-based Shape and Material Models},
 booktitle = {The 16th Eurographics Workshop on Graphics and Cultural Heritage},
 pages = {33--42},
 month = {November},
 year = {2019},
 doi = {10.2312/gch.20191346},
 note = {Best paper award},
 url = {http://vic.crs4.it/vic/cgi-bin/bib-page.cgi?id='Jaspe:2019:WME'},
}

We introduce a novel, versatile approach for letting users explore detailed image-based shape and material models integrated with structured, spatially associated descriptive information. We represent the objects of interest as a series of registered layers of image-based shape and material information. These layers are represented at multiple scales and can be produced by a variety of pipelines; they include both RTI representations and spatially varying normal and BRDF fields, possibly resulting from the fusion of multi-spectral data. An overlay image pyramid associates visual annotations with the various scales. The overlay pyramid of each layer can be easily authored at data-preparation time using widely available image-editing tools. At run time, an annotated multi-layered dataset is made available to clients by a standard web server. Users can explore these datasets on a variety of devices, from mobile phones to large-scale displays in museum installations, using JavaScript/WebGL2 clients capable of performing layer selection, interactive relighting and enhanced visualization, annotation display, and focus-and-context exploration of multiple layers through a lens metaphor. The capabilities of our approach are demonstrated on a variety of cultural heritage use cases involving different kinds of annotated surface and material models.
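To make the architecture concrete, the following is a minimal, hypothetical JavaScript sketch of two ideas the abstract describes: addressing tiles of a multi-scale layer pyramid served by a plain web server, and the focus-and-context lens that shows one layer inside a circular region and another outside it. The names (`tileURL`, `pickLayer`), the deep-zoom-style URL layout, and the circular lens shape are illustrative assumptions, not the paper's actual API.

```javascript
// Hypothetical tile addressing for a multi-scale layer pyramid.
// Assumes a deep-zoom-style static layout on the server:
//   <base>/<layer>/<level>/<x>_<y>.jpg
// so any standard web server can host an annotated dataset.
function tileURL(base, layer, level, x, y) {
  return `${base}/${layer}/${level}/${x}_${y}.jpg`;
}

// Focus-and-context lens selection (illustrative): pixels inside the
// circular lens sample the focus layer (e.g. an annotation overlay or an
// alternate material representation), the rest show the context layer.
// In a WebGL2 client this test would run per-fragment in a shader.
function pickLayer(px, py, lens, focusLayer, contextLayer) {
  const dx = px - lens.cx;
  const dy = py - lens.cy;
  return (dx * dx + dy * dy <= lens.r * lens.r) ? focusLayer : contextLayer;
}
```

For example, with a lens of radius 50 centered at (100, 100), `pickLayer(100, 100, lens, "annotations", "rti")` selects the annotation overlay, while a point far outside the lens falls back to the context layer.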