Data Interpretation: Methods and Tools
Intricacies and Potentials of Gathering Paradata in the 3D Modelling Workflow
by Sven Havemann
The term ‘3D modelling’ denotes the whole process of creating a 3D artefact, where ‘3D artefact’ stands for any kind of digital 3D representation of a real object. Usually this real-world object is an item of cultural or historical value, such as the objects typically shown in museum exhibitions.
A 3D artefact is supposed to be a faithful recording and documentation of reality. Unlike taking a photograph, however, creating a 3D artefact is not merely a matter of activating a sensor that takes samples of reality, such as pixel colours arranged in a regular grid. Rather than being a simple measurement, the 3D modelling workflow takes the measured raw data (3D acquisition) and transforms and combines them through a number of processing steps. These typically involve sophisticated geometric algorithms, some of which are discussed in this chapter. This is what justifies the name ‘3D modelling’: creating a 3D artefact is much more than 3D acquisition. In many cases, in particular when high-quality results are required, a human is also in the loop: highly skilled and trained 3D operators fill holes in the models, remove scanning deficiencies, and use interactive tools to optimize the surface quality.
The great problem with this approach is the loss of authenticity: in the finished 3D artefact it is no longer possible to distinguish clearly between measured data and data ‘invented’ by 3D modelling algorithms. Furthermore, current 3D technology has inherent limitations that make it impossible in principle to gather, along the modelling process, the paradata that would allow the authenticity of a dataset to be assessed at the level of individual triangles. It is argued here that a new type of 3D technology is required: a set of algorithms, data structures, and policies respected and implemented by all software tools in the 3D modelling tool chain. Some essential requirements are formulated, and in some cases promising new ways of implementing them are indicated, pointing towards practical solutions to the problem of gathering paradata during the creation of 3D artefacts.
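To make the requirement concrete, the following is a minimal sketch of what per-triangle paradata could look like: every triangle carries a provenance record distinguishing measured geometry from geometry ‘invented’ by processing algorithms, with links back to the raw data. All names and records here are invented for illustration; no existing file format or tool is implied.

```python
from dataclasses import dataclass, field

@dataclass
class Provenance:
    origin: str            # 'measured' or 'synthesized'
    tool: str              # processing step that produced the triangle
    source_ids: list = field(default_factory=list)  # links back to raw data

@dataclass
class Triangle:
    vertices: tuple        # indices into a shared vertex array
    provenance: Provenance

# A toy two-triangle mesh: one triangle backed by dense matching of a
# photo, one filled in by a hole-filling step.
mesh = [
    Triangle((0, 1, 2), Provenance('measured', 'dense_matching', ['photo_07'])),
    Triangle((1, 2, 3), Provenance('synthesized', 'poisson_hole_fill')),
]

# Authenticity query: which triangles are backed by actual measurements?
measured = [t for t in mesh if t.provenance.origin == 'measured']
print(len(measured))  # 1
```

Such a record would have to survive every step of the tool chain, which is precisely why the text argues for policies shared by all tools rather than a feature of any single one.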
The order of illustrations below follows the book (please note the distinction between figures and plates).
Figure 13.1 Generative surface reconstruction. A parametric shape template (right) is adapted to given raw data (left). The shape template is fully informed: each point on the generated surface contains a link back to the parametric function that created it. By exploiting geometric proximity, every measured data point can be assigned a meaning: arch, sub-arch, rosette, profile, and so on.
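The proximity-based assignment of meaning described in the caption can be sketched as follows. This is a hypothetical brute-force illustration, not the chapter's actual implementation: each measured raw-data point simply inherits the label of its nearest point on the generated template surface.

```python
import numpy as np

def label_by_proximity(raw_points, template_points, template_labels):
    """Assign each raw point the label of its nearest template point."""
    # Pairwise squared distances (brute force; a k-d tree would scale better)
    d2 = ((raw_points[:, None, :] - template_points[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)
    return [template_labels[i] for i in nearest]

# Toy example: two labelled template regions along the x-axis
template = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
labels = ['arch', 'rosette']
raw = np.array([[0.3, 0.1, 0.0], [9.8, -0.2, 0.1]])
print(label_by_proximity(raw, template, labels))  # ['arch', 'rosette']
```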
Figure 13.2 Shape acquisition in a plaster museum. A digital projector casts a coloured random pattern onto the plaster statues. Photos are always shot from the same distance, keeping nearly the same surface point in the centre. Typically 12-17 photos are taken by moving the tripod along a circular arc of ~90°. The result is a photo sequence.
Figure 13.3 Shape classes in a plaster museum. From top to bottom: busts, heads, full-length figures, active poses and miscellaneous statues. A scanning campaign requires thorough planning. Each class has a typical scanning order. It is a good idea to define the typical scanning patterns beforehand in order to obtain reasonable, repeatable results. © Gipsmuseum, Institut für Archäologie Universität Graz
Figure 13.4 Plaster casts of the statue of Athena and their metadata. Both casts are believed to derive from the same original. Automatic methods for determining such dependencies through, for instance, similarities between shapes recorded in one or several databases, could benefit cultural heritage research. © Gipsmuseum, Institut für Archäologie Universität Graz
Figure 13.5 A statue such as the Athena Lemnia typically requires eight photographic sequences. They are taken for both the upper and lower parts, from four directions. Due to space constraints in a museum environment it is not always possible to acquire the sequences in the optimal interval of 90 degrees.
Plate 13.1 Photogrammetric photo sequences of the Athena Lemnia statue demonstrating the effectiveness of the random pattern. The sequence consists of 21 photos, of which the first eight are shown. The second row shows the reconstruction quality from the dense matching: blue indicates that a reliable depth value is available for the respective pixel; yellow indicates very low quality.
Figure 13.6 The processing pipeline for a single sequence, one column per step: (1) four to six images of the sequence are selected, and each image is converted into a mesh, resulting in a multi-layered shape where the image meshes overlap; (2) the meshes are cleaned, removing unused points and outliers; (3) a Poisson reconstruction converts the multi-layer mesh into a single-layer mesh; (4) extraneous triangles are cut away manually; (5) the mesh is simplified; (6) the mesh is provided with vertex colours.
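As an illustration of the cleaning step (2), outlier removal can be done statistically: points whose mean distance to their nearest neighbours is unusually large compared with the rest of the cloud are dropped. The sketch below shows this generic approach under invented parameters; it is not necessarily the tool used in the figure.

```python
import numpy as np

def remove_outliers(points, k=3, n_std=2.0):
    """Drop points whose mean distance to their k nearest neighbours is
    more than n_std standard deviations above the cloud-wide average."""
    # Full pairwise distance matrix (fine for toy sizes; use a spatial
    # index for real scans)
    d = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))
    d.sort(axis=1)
    knn_mean = d[:, 1:k + 1].mean(axis=1)   # column 0 is the self-distance
    keep = knn_mean <= knn_mean.mean() + n_std * knn_mean.std()
    return points[keep]

# Toy cloud: five clustered points plus one far-away outlier
cluster = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]], float)
cloud = np.vstack([cluster, [[100.0, 100.0, 100.0]]])
cleaned = remove_outliers(cloud)
print(cleaned.shape)  # (5, 3)
```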
Figure 13.7 Detailed surface inspection. The multi-layer mesh (left) is regular but contains much noise (columns 2 and 3). The Poisson reconstruction (right) does a great job of removing this noise, creating a closed, single-layer surface. In doing so it both eliminates (multi-layer) information and 'invents' information (in holes).
Plate 13.2 The final step: aligning the resulting meshes from the eight image sequences, then fusing them together. This involves a rough manual alignment, followed by numerical refinement. The ICP algorithm has only a small radius of convergence, so considerable experience is required to perform the alignment and select good feature points.
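The refinement mentioned in the caption uses ICP (iterative closest point). A minimal point-to-point variant, using the SVD-based (Kabsch) solution for the rigid transform, can be sketched as follows; this toy version omits the feature-point selection and robustness measures that practical tools add.

```python
import numpy as np

def icp_step(source, target):
    """One ICP iteration: match each source point to its nearest target
    point, then solve for the best rigid transform (Kabsch algorithm)."""
    d2 = ((source[:, None, :] - target[None, :, :]) ** 2).sum(-1)
    matched = target[d2.argmin(axis=1)]
    mu_s, mu_t = source.mean(0), matched.mean(0)
    H = (source - mu_s).T @ (matched - mu_t)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    return source @ R.T + t

def icp(source, target, iters=20):
    for _ in range(iters):
        source = icp_step(source, target)
    return source

# Toy example: the source is the target shifted by a small translation,
# well within ICP's small radius of convergence.
target = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
source = target + np.array([0.1, 0.1, 0.1])
aligned = icp(source, target, iters=5)
print(np.allclose(aligned, target, atol=1e-6))  # True
```

The small radius of convergence the caption mentions shows up directly here: if the initial offset is larger than the spacing between points, the nearest-neighbour matching pairs up the wrong points, which is why a rough manual alignment must come first.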
Figure 13.8 Comparison of input data and resulting shapes. Left: parts of the model of particular interest that invite further investigation. Right: the user queries the input data available for these areas (images and depth maps) and compares them with the mesh.