Web-based Exploration of Annotated Multi-Layered Relightable Image Models

Alberto Jaspe-Villanueva, Moonisa Ahsan, Ruggero Pintus, Andrea Giachetti, Fabio Marton, and Enrico Gobbetti

ACM Journal on Computing and Cultural Heritage 14(2): 24:1-24:31, May 2021


@article{Jaspe:2021:WEA,
 author = {Alberto {Jaspe Villanueva} and Moonisa Ahsan and Ruggero Pintus and Andrea Giachetti and Fabio Marton and Enrico Gobbetti},
 title = {Web-based Exploration of Annotated Multi-Layered Relightable Image Models},
 journal = {ACM Journal on Computing and Cultural Heritage},
 volume = {14},
 number = {2},
 pages = {24:1--24:31},
 month = {May},
 year = {2021},
 doi = {10.1145/3430846},
 url = {http://vic.crs4.it/vic/cgi-bin/bib-page.cgi?id='Jaspe:2021:WEA'},
}


We introduce a novel approach for exploring image-based shape and material models registered with structured descriptive information fused in multi-scale overlays. We represent the objects of interest as a series of registered layers of image-based shape and material data, represented at different scales and potentially produced by a variety of pipelines. These layers can include both Reflectance Transformation Imaging representations and spatially varying normal and Bidirectional Reflectance Distribution Function fields, possibly resulting from the fusion of multi-spectral data. An overlay image pyramid associates visual annotations to the various scales. The overlay pyramid of each layer is created at data preparation time by one of three methods: importing it from other pipelines, creating it with the simple annotation drawing toolkit available within the viewer, or using external image editing tools. This makes it easy for the user to seamlessly draw annotations over the region of interest. At runtime, clients access annotated multi-layered datasets through a standard web server. Users can explore these datasets on a variety of devices, ranging from small mobile devices to large-scale displays used in museum installations. On all these platforms, JavaScript/WebGL2 clients running in browsers perform layer selection, interactive relighting, enhanced visualization, and annotation display. We address the problem of clutter by embedding interactive lenses: this focus-and-context-aware multiple-layer exploration tool supports the exploration of more than one representation in a single view, allowing the mixing and matching of presentation modes and annotation display. The capabilities of our approach are demonstrated on a variety of cultural-heritage use cases involving different kinds of annotated surface and material models.
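To illustrate the kind of per-pixel computation such relightable layers enable, the sketch below evaluates a Polynomial Texture Map (PTM), one classic Reflectance Transformation Imaging representation, for a given light direction. This is a minimal illustration of the standard six-coefficient PTM model, not the paper's own layer format; the function name and coefficient ordering are assumptions for the example.

```javascript
// Hedged sketch: standard six-coefficient PTM evaluation.
// coeffs = [a0..a5] are per-pixel values fitted offline from photographs
// taken under known lights; (lu, lv) are the x/y components of the
// normalized light direction. In a WebGL2 client this same polynomial
// would typically be evaluated in a fragment shader over coefficient
// textures; here it is plain JavaScript for clarity.
function ptmLuminance(coeffs, lu, lv) {
  const [a0, a1, a2, a3, a4, a5] = coeffs;
  return a0 * lu * lu + a1 * lv * lv + a2 * lu * lv + a3 * lu + a4 * lv + a5;
}

// Example: with only the constant term set, luminance is independent of
// the light direction.
const L = ptmLuminance([0, 0, 0, 0, 0, 0.5], 0, 0); // 0.5
```

Interactive relighting then amounts to re-evaluating this polynomial (per pixel, per frame) as the user moves a virtual light.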