Parameter Analysis and Synthesis of Cellular Textures


Master's Thesis, 2018

100 Pages, Grade: 1,0


Excerpt

Contents

1 Introduction

2 Related Work
2.1 Texture Perception
2.2 Stochastic Texture Synthesis
2.3 Non-parametric Texture Synthesis
2.4 Pixel-based Non-parametric Texture Synthesis
2.5 Patch-based Synthesis
2.6 Multi-Exemplar Synthesis
2.7 Synthesis Output Manipulation
2.8 Implications for this Thesis

3 Fundamentals
3.1 Textures
3.1.1 Texture Types
3.1.2 Texture Mapping
3.2 Image Filters
3.3 Image Pyramids
3.3.1 Gaussian Pyramids
3.3.2 Laplacian Pyramids
3.3.3 Steerable Pyramids
3.4 Worley Noise
3.5 Voronoi Diagrams
3.6 Principal Component Analysis
3.7 Gradient Descent

4 Methods and Implementation
4.1 Cellular Textures
4.2 Parameters
4.3 Analysis
4.3.1 Cell Segmentation
4.3.2 Initial Parameter Generation
4.3.3 Parameter Optimization
4.4 Generation
4.4.1 Cell Structure Generation
4.4.2 Statistical Seed Placement
4.4.3 Placement Based on Binary Input
4.4.4 Storing Individual Cells
4.5 Synthesis
4.5.1 Filling Cells with Color
4.5.2 Texture Refinement
4.5.3 Multi-Exemplar Synthesis

5 Evaluation
5.1 Parameter Analysis
5.2 Generated Results
5.2.1 Tileable Input
5.2.2 Parameter Preservation
5.2.3 Parameter Modification
5.2.4 Multi-Input Synthesis
5.3 Limitations

6 Conclusion and Future Work
6.1 Conclusion
6.2 Future Work

Bibliography

A Gradient Descent Derivations
A.1 Norm Derivations
A.2 Seed Point Optimization
A.3 Simplified Optimization
A.4 Including Vector Transformations

B Texture Comparison

Abstract

In the field of computer graphics, two-dimensional textures are an efficient tool to make a virtual scene richer in detail and therefore visually more appealing. Consequently, the perceived quality of a rendered image highly depends on the quality of the textures used. Unfortunately, creating textures by hand is a time-consuming task, which in extreme cases can only be performed by professional artists. On the other hand, the use of real photographs often requires post-processing steps, e.g. to make them tileable so that there are no visible transitions between multiple texture copies covering a large surface. Approaches exist that automatically create new texture variants of different sizes based on a given input sample. However, most of these methods focus on generating output textures that are as similar to the original input as possible while not allowing for further modifications. The goal of this thesis is to provide a texture generation approach that works on cellular textures and enables structural modifications when generating a new output exemplar. To this end, a given input sample is first analyzed for its underlying cellular structure, including the distribution, orientation and expansion of the single cells. In a next step, the analyzed parameters can be used and modified by a user to create a desired output structure, which is further refined on a per-pixel basis. All in all, a given texture can be modified to meet the requirements of a specific use case while preserving the overall visual characteristics of the input. In contrast to traditional pixel-based and patch-based methods, the technique presented in this thesis captures and reproduces the underlying structure of cellular images, leading to a higher-quality synthesis result for that class of images.

Keywords: texture analysis, texture synthesis, image processing, cellular textures, Voronoi structure, optimization, user-driven

Chapter 1

Introduction

During the last decades, computer-generated images have largely become part of our everyday lives. Film, advertisement, video games and even the production and medical industries make extensive use of computer-generated scenes and artificial images, and the demand for these will certainly continue to grow.

One important aspect of all of these applications is that the shown images and virtual scenes have to look plausible. Back in 1985, Jim Blinn, a pioneer in the field of computer graphics, is supposed to have said:

“All it takes is for the rendered image to look right.”

Textures are crucial to achieve this “right look”. According to various common definitions, a texture describes the look, feel and continuity of an object, fabric or substance. Obviously, in the field of computer graphics, appearance is by far the most important aspect of these definitions.

In most virtual scenes, objects are represented by polygonal meshes. If such an object is meant to show very detailed surface properties, there are basically two ways to achieve this. The first one is to model these details by actually designing the mesh in a way that its geometric primitives match the desired surface details. However, this is a challenging task that also leads to complex models and consequently increases the computational complexity of the virtual scene. Instead, digital or synthesized images can be mapped onto the object's surface to enhance its visual appearance. These images are called textures and provide the desired surface details. As a result, the object itself can remain only a rough approximation of its real-world counterpart in terms of polygon count. In addition, textures can be used to efficiently achieve certain effects such as reflections or surface displacements. Because of that, texture mapping is one of the most important methods to enhance the surface details of virtual objects and thus the visual appearance of a rendered 3D scene in general. While actual photographs are a reasonable attempt to provide realistic textures, it is not easy to generate photographs of a desired texture that are equally illuminated and show no noticeable distortions. Moreover, in most cases they are not tileable, which means that they cannot cover large surface areas without visible seams or obviously repeating patterns. On the other hand, manually creating textures is a time-consuming and tedious task that also requires a certain amount of artistic skill, especially when aiming for photo-realism. For this reason, a lot of research has been done in the area of texture synthesis in recent decades.

In its simplest form, texture synthesis is the process of automatically generating an output image based on a certain input. Since textures are commonly categorized into groups ranging from regular to stochastic, the chosen input and synthesis approaches also differ for each type of texture. These approaches range from being purely based on statistics and probability distributions to copying and blending whole parts of a given input texture sample to synthesize a new texture exemplar. Nevertheless, the common goal is to produce an output image of arbitrary size that is tileable and shows the same characteristics as its corresponding input. A human observer should not be able to notice any visible artifacts, obviously repeating patterns or seams within the synthesized result. However, while most synthesis approaches focus on this similarity between input and synthesized output, it might also be of interest to modify a texture's underlying structure. For example, if a texture matches the desired look of an object but its coarse structure does not, there is currently no way to interactively change the texture's pattern while simultaneously preserving its general visual properties.

This thesis presents an approach to synthesize cellular textures based on one or multiple input exemplars. First, the input is analyzed for parameters that describe its underlying cell structure. Subsequently, when generating a new output texture, these parameters can be modified by a user in order to change this structure while keeping the visual characteristics of the original input exemplars.

The next chapter introduces existing research related to this thesis by presenting various approaches, including their advantages and disadvantages. Chapter 3 presents background knowledge of techniques and methods used or referenced during research and development. After that, the methods used and their implementation are described in Chapter 4, followed by an evaluation of the generated results in Chapter 5. Finally, Chapter 6 concludes the thesis and gives a short overview of possible future work and improvements.

Chapter 2

Related Work

In the last decades, a lot of research has been done in the area of texture analysis and synthesis. However, most of this research has a very similar origin. In this chapter, some of these approaches will be presented.

Traditionally, textures are categorized as either stochastic or regular [HB95; EL99]. Regular or deterministic textures show consistent, regular structures or repeating patterns of single primitives, like the tiles of a tiled floor or the bricks in a perfectly arranged brick wall. Stochastic textures, on the other hand, are characterized by fine, randomly placed details. Often, they do not show a lot of variation throughout the image. Typical examples of stochastic textures are sand, grass or granite. Nevertheless, almost all real-world textures show characteristics of both categories and are therefore located in a spectrum somewhere between these two classes. For more information on textures in general, see Chapter 3.

Similar to the variety of texture types, there exist several different synthesis approaches. In fact, most existing work focuses on one specific class of textures. Nevertheless, the overall goal of texture synthesis in almost every case is to generate a new texture that either meets the properties of an underlying mathematical model or shows similar visual characteristics to a given input example. Thereby, the generated result should usually be of arbitrary size and tileable.

2.1 Texture Perception

First attempts at describing and therefore modeling textures were made in the 1960s by Julesz [Jul62]. In his work, he tried to distinguish important from negligible characteristics in terms of perception. In other words, given two images, in which properties do they have to differ in order to be distinguishable?

His work was focused on the n-th order statistics of images. First-order statistics, for example, can be computed from the image's histogram and measure the likelihood of observing a certain color or gray value at a random pixel location. Statistics of higher order also take a pixel's neighborhood into account [Jul62; BK02].
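As a small illustration (a sketch not taken from the thesis; the function name and toy image are hypothetical), first-order statistics can be read directly off a normalized gray-level histogram:

```python
import numpy as np

def first_order_stats(image, levels=256):
    """Estimate first-order statistics from an image's gray-level histogram.

    The normalized histogram gives the probability of observing each gray
    value at a randomly chosen pixel; mean and variance follow directly.
    """
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()                   # empirical gray-value probabilities
    values = np.arange(levels)
    mean = float(np.sum(values * p))        # expected gray value
    var = float(np.sum((values - mean) ** 2 * p))
    return p, mean, var

# A toy 2x2 image: all probability mass sits on its two gray values.
img = np.array([[0, 0], [255, 255]], dtype=np.uint8)
p, mean, var = first_order_stats(img)       # mean is 127.5
```

Because only the histogram enters the computation, any spatial rearrangement of the pixels yields the same first-order statistics, which is exactly why higher-order statistics are needed to capture structure.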

Julesz proposed that, besides obvious structural differences and line discontinuities, textures are indistinguishable to a human being if they show the same second-order statistics regarding the luminance histograms. Although this strict assumption was disproven later [JGSF73; JGV78], it opened the field of modeling textures based on low-order statistics.

2.2 Stochastic Texture Synthesis

In statistical analysis and synthesis approaches, textures are described indirectly by non-deterministic, stochastic properties such as the distributions and relations between the gray levels of an image or the values of its single color channels.

One of the most popular texture modeling techniques is the utilization of Markov Random Fields (MRFs). Markov Random Fields are probabilistic models where textures are considered to be stochastic, two-dimensional random fields that fulfill certain properties [CJ83; MJ92; JW97]. The most important condition of an MRF is markovianity, which states that a random variable directly interacts only with the variables within a local neighborhood of arbitrary size and is otherwise independent of all other variables in the field. In the context of textures, each pixel of an image is considered to be a random variable. Unless the image only consists of random noise, this is a plausible assumption, as pixels tend to have color values similar to those of their adjacent neighbors. It also hints at the fact that stochastic textures such as sand or grass require smaller neighborhoods than textures that show more regular patterns such as brick walls.
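The markovianity condition described above can be stated formally. Writing $X_s$ for the random variable at pixel site $s$ and $N(s)$ for its chosen neighborhood, the following is a standard textbook formulation of the Markov property rather than a formula quoted from the cited works:

```latex
P\bigl(X_s = x_s \mid X_t = x_t,\; t \neq s\bigr)
  \;=\; P\bigl(X_s = x_s \mid X_t = x_t,\; t \in N(s)\bigr)
```

In words: once the neighborhood $N(s)$ is known, the rest of the image carries no additional information about the value of pixel $s$.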

Based on these premises, the goal is to estimate the parameters of a probability distribution that determines the value of each pixel in an input texture from its neighborhood [ZWM98; Pag04; Li09]. Subsequently, this allows for the synthesis of new textures by sampling from the estimated probability functions. However, the computational complexity of generating an appropriate stochastic model and estimating its parameters is the primary problem of these approaches. Furthermore, since pixels are re-sampled based on a statistical model, it cannot be guaranteed that regular structures will be preserved and re-created in the synthesized result.

2.3 Non-parametric Texture Synthesis

Non-parametric sampling methods avoid generating a parameter-based, explicit probability model. Instead, they focus strongly on local pixel neighborhoods and try to indirectly reproduce image-describing statistics. The goal is to analyze and re-create image statistics without knowing any underlying distribution models upfront. As a result, less complex calculations are required, which leads to faster synthesis algorithms than those proposed by the previously mentioned work.

One of the first non-parametric approaches was published by Heeger and Bergen in 1995 [HB95]. It is based on a combination of simple, fast image processing operations such as sub- and upsampling, histogramming, convolution and some non-linear transformations. Their algorithm starts with a sample texture and an image consisting of white noise. The goal is to transform the noise image in a way that it has a very similar appearance to the texture sample. In order to achieve this, the input and output (noise) images are represented by an input and output image pyramid, respectively. The underlying assumption is that some image characteristics are more recognizable at certain frequencies or resolutions than others. For more information regarding image pyramids, see Section 3.3.

Heeger and Bergen made use of a steerable image pyramid to synthesize new textures. Once the image pyramids are created, the algorithm iteratively refines the noise image by matching the histogram of each input pyramid level with the corresponding histogram of the output pyramid. This method produces reasonable results because local changes in the image's subbands, represented by the single levels of the pyramid, cause spatially correlated changes when reconstructing the final image afterwards.
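The per-level histogram matching at the heart of this algorithm can be sketched as rank-order matching (a simplified stand-in assuming equally sized arrays, not the authors' exact implementation; all names are hypothetical):

```python
import numpy as np

def match_histogram(source, target):
    """Rank-order histogram matching: the k-th smallest source value is
    replaced by the k-th smallest target value, so the output takes on
    exactly the target's value distribution while keeping the source's
    spatial arrangement of ranks. Assumes equal element counts."""
    src = np.asarray(source, dtype=float)
    tgt = np.sort(np.asarray(target, dtype=float).ravel())
    order = np.argsort(src.ravel())   # positions of source values by rank
    matched = np.empty(src.size)
    matched[order] = tgt              # k-th smallest source -> k-th smallest target
    return matched.reshape(src.shape)

noise = np.array([0.9, 0.1, 0.5, 0.3])       # stand-in for one pyramid level
sample = np.array([10.0, 20.0, 30.0, 40.0])  # histogram to impose
out = match_histogram(noise, sample)         # -> [40., 10., 30., 20.]
```

Applied per pyramid level and iterated, this is the operation that gradually pulls the noise image's subband statistics toward those of the texture sample.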

However, it turned out that the chosen statistics, that is, the histograms of the single pyramid levels, are not powerful enough to represent regular, long-range spatial characteristics. Because of that, this technique works well for highly stochastic textures but fails on more regular ones such as brick walls.

2.4 Pixel-based Non-parametric Texture Synthesis

Similar to stochastic synthesis techniques, pixel-based synthesis algorithms are also based on the idea that each pixel is related to its local neighborhood. However, instead of trying to estimate the whole image behavior with one complex model, pixel-based synthesis algorithms generate new images pixel by pixel, where each pixel value is determined through a corresponding pixel neighborhood.

The work proposed by De Bonet [De 97] can be seen as one of the first pixel-based approaches. His technique also utilizes image pyramids but offers some improvements over the method by Heeger and Bergen [HB95]. First, each pyramid level of the input texture sample is convolved with an array of band-pass filters, a so-called filter bank, which allows for more detailed feature detection. Additionally, the relation of each pixel to corresponding pixels at lower resolutions is stored in a parent vector. During synthesis, the levels of the output pyramid, starting at the lowest resolution, are created successively pixel by pixel by sampling from the input pyramid. Each pixel value is chosen from a set of pixels with similar parental structures. This approach produces results of higher quality than [HB95]; however, its main focus still lies on near-stochastic textures, while strongly regular structures cannot be reproduced in general.

Similar to methods based on MRFs, Efros and Leung [EL99] assumed that the color value of a pixel depends on its local neighborhood and is independent of the rest of the image. However, instead of considering a pixel's history regarding the different levels of an image pyramid, the local neighborhood of a pixel is modeled as all pixels inside a square window around that pixel. The final result is synthesized pixel by pixel, where the best match for a pixel in the output is found by comparing its neighborhood with all possible neighborhoods of the same size in the input texture. Since only already synthesized pixels can be considered in the comparison, the output image is initialized with a 3 × 3 random sample from the input image. After that, the algorithm grows pixels around that sample layer by layer. In order to generate highly regular images, the window size should be big enough to capture the biggest regular feature in the input sample. Otherwise, the algorithm might get stuck in a wrong part of the search space and produce random garbage or unrealistic copies of the same part over and over again. Apart from that, a large number of texture types can be synthesized with this method.
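The core neighborhood search can be sketched as follows (a simplified, exhaustive version without the candidate randomization Efros and Leung actually use; all names and the toy data are hypothetical):

```python
import numpy as np

def best_match(sample, neighborhood, mask):
    """Exhaustive best-match search for one output pixel.

    `neighborhood` is the square window around the pixel being synthesized
    and `mask` marks its already-filled entries; only those contribute to
    the sum of squared differences against every same-sized window in the
    input `sample`. Returns the center value of the best-matching window.
    """
    r = neighborhood.shape[0] // 2
    height, width = sample.shape
    best_val, best_err = None, np.inf
    for y in range(r, height - r):
        for x in range(r, width - r):
            window = sample[y - r:y + r + 1, x - r:x + r + 1]
            err = np.sum(((window - neighborhood) * mask) ** 2)
            if err < best_err:
                best_val, best_err = sample[y, x], err
    return best_val

# Vertical stripes; only the top row of the 3x3 neighborhood is known yet.
sample = np.array([[0., 1., 0., 1., 0.],
                   [0., 1., 0., 1., 0.],
                   [0., 1., 0., 1., 0.]])
neigh = np.array([[1., 0., 1.],
                  [0., 0., 0.],
                  [0., 0., 0.]])
mask = np.array([[1., 1., 1.],
                 [0., 0., 0.],
                 [0., 0., 0.]])
val = best_match(sample, neigh, mask)   # center below a matching top row -> 0.0
```

The mask is what lets the algorithm grow outward from the initial seed: partially synthesized neighborhoods are compared only over their known pixels.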

Wei and Levoy [WL00] presented a very similar approach. They also generate new images pixel by pixel based on spatial neighborhood comparison. During this process, the output image is considered toroidal, which means that the neighborhood is allowed to cross the boundaries of the output image in a modulo fashion. Because of that, pixels along the image boundaries influence pixels at the opposing border, which leads to a tileable output texture. As already stated by Efros and Leung [EL99], textures containing large-scale structures require large neighborhoods, which in itself leads to higher computation times. This was dealt with by constructing Gaussian image pyramids for the input and output, respectively. Thereby, the pixel values at each pyramid level are sampled similarly to the process described before. The only modification is that each neighborhood contains pixels from the current as well as lower resolutions. Since image pyramids are an efficient way to preserve large-scale structures at low resolutions, the algorithm is able to achieve high-quality results for regular textures using only small neighborhoods. Later, this approach was extended with some optimizations [Wei99; WL02].
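The toroidal neighborhood lookup amounts to modulo indexing (a minimal sketch, not code from the paper; the helper name is hypothetical):

```python
import numpy as np

def toroidal_neighborhood(img, y, x, radius):
    """Gather the square neighborhood around (y, x), wrapping indices at
    the image borders so opposing edges count as adjacent (the toroidal
    output topology that makes the result tileable)."""
    h, w = img.shape
    ys = [(y + dy) % h for dy in range(-radius, radius + 1)]
    xs = [(x + dx) % w for dx in range(-radius, radius + 1)]
    return img[np.ix_(ys, xs)]

img = np.arange(9).reshape(3, 3)
# At the top-left corner the neighborhood wraps to the bottom and right edges.
nb = toroidal_neighborhood(img, 0, 0, 1)
```

Because every border pixel's neighborhood reaches across to the opposite edge, matching those neighborhoods during synthesis enforces seamless wraparound.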

Due to their simplicity and efficiency while simultaneously achieving high-quality results, these techniques opened the field for further research and were adapted and extended multiple times.

2.5 Patch-based Synthesis

A reasonable extension to pixel-based techniques is patch-based sampling. Here, instead of successively sampling each individual pixel, whole blocks of multiple adjacent pixels get synthesized at once.

Liang et al. [LLX+01] use square patches of the input image as building blocks in the synthesis process. Each patch is surrounded by a boundary zone, and a new patch is chosen from the input image by comparing the difference between the boundary zones of each patch candidate and the already synthesized patches. Once a best-matching patch has been found and copied into the output image, blending is used to smooth the transitions between overlapping patches. Although this approach is faster than pixel-based sampling, there are some disadvantages. First, using rectangular patch shapes might not be the best choice depending on the regular structures of the input texture. Furthermore, similar to the neighborhood size in the previously mentioned approaches, the size of the patches strongly correlates with the size of structures in the input. Another aspect is that blending of overlapping areas might cause blurred visual artifacts.

Efros and Freeman [EF01] developed a more sophisticated approach to handle conflicting regions. By applying dynamic programming, they find a minimum-error boundary cut within overlapping areas to stitch two neighboring patches together. Since overlapping areas are still defined by rectangular shapes, the optimal cutting path is also restricted to lie inside these rectangular boundaries.
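The minimum-error boundary cut can be sketched as a small dynamic program over the overlap's error surface (a simplified version of the quilting cut for a vertical seam; the function name and toy error matrix are hypothetical):

```python
import numpy as np

def min_error_cut(err):
    """Minimum-cost vertical path through an overlap-error surface.

    `err` holds squared differences between two overlapping patches.
    Dynamic programming accumulates the cheapest top-to-bottom path,
    each step moving at most one column sideways; backtracking then
    returns one cut column per row.
    """
    h, w = err.shape
    cost = err.astype(float).copy()
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            cost[i, j] += cost[i - 1, lo:hi].min()
    # Backtrack the cheapest path from the bottom row upward.
    path = [int(np.argmin(cost[-1]))]
    for i in range(h - 2, -1, -1):
        j = path[-1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        path.append(lo + int(np.argmin(cost[i, lo:hi])))
    return path[::-1]

err = np.array([[5., 0., 5.],
                [5., 0., 5.],
                [5., 0., 5.]])
cut = min_error_cut(err)    # cheapest seam runs down the middle column
```

Pixels left of the cut are taken from one patch and pixels right of it from the other, which hides the seam along the path of smallest disagreement.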

In order to find better global solutions, Kwatra et al. [KSE+03] introduced a non-causal graph-cut algorithm. Their algorithm first finds a best-matching rectangular candidate that could be copied to the output image in a similar way as described before. However, in a second step, an optimal, irregularly shaped part within the entire patch is computed via graph-cut techniques, and only pixels inside this portion are copied to the output texture. Since optimal cuts are not restricted by small overlapping areas, patches can be separated more freely, resulting in better and more flexible solutions than the previously described approaches [LLX+01; EF01].

2.6 Multi-Exemplar Synthesis

The previously described methods mainly work on homogeneous textures, i.e. textures that show the same type of structure or color distribution throughout the whole image. However, many real-world surfaces show mixtures of several texture types, since they can consist of varying materials, densities or patterns. Especially for large surfaces, such as terrains, a combination of multiple textures can result in a visually more appealing and realistic appearance. Zhang et al. [ZZV+03] presented an approach to synthesize textures that can model local texture variations like differences in shape, color and orientation based on two given homogeneous textures. In order to achieve smooth blending between the input exemplars, a feature-based blending technique was applied that makes use of an image mask, provided by the user, which marks important features and transitions.

Park et al. [PBK13] extended this approach to allow for more than two input exemplars. In a pre-processing step, they generate a set of transition textures between each pair of input exemplars for a specific direction. Here, image pyramids are used to efficiently match dominant image structures at different resolutions. When synthesizing the final result, the user has to provide a weight map to determine the location of each input exemplar in the output texture. Based on this, the best-matching parts of the transition textures are found using a graph-cut approach and blended for the final result.

2.7 Synthesis Output Manipulation

The methods presented in the previous sections focus on the generation of output textures that have a very similar visual appearance to a particular input exemplar. For some applications, however, it might be useful to maintain visual similarity while simultaneously modifying a texture's underlying structure. Especially when considering textures from the more regular end of the spectrum, structural modifications can be required to adapt a texture to a certain environment or to visually reflect the properties of the object the texture is to be mapped to. Multi-exemplar synthesis is a first attempt to address these requirements; however, in non-blending zones, the generated textures still maintain the properties of one of the given input exemplars.

Liu et al. [LCT03] tried to get a better understanding of regular textures and proposed an algorithm to detect and model symmetry and periodic patterns in a 2D plane. Their work is based on mathematical crystallographic group theory, which states that periodic patterns in 2D space can be classified into 17 distinct symmetry groups [Pol24; GS86]. By analyzing a given image for these groups, it is possible to find a lattice that models the periodic and near-periodic patterns in regular textures like patterned wallpapers. A problem of their method is that the individual patterns are only allowed to show small variations in the form of noise to be detected correctly. Distortions, deformations, different illumination or geometric variations are not handled. Furthermore, a single pattern has to show a wide variation of color or gray-level values in order to produce correct results. For images of, for example, brick walls or tiled floors, the algorithm might be error-prone due to the small color variations throughout such images.

Later, Liu et al. [LLH04] proposed a simple parametric model to reliably synthesize near-regular textures while controlling the appearance and regularity of the synthesized result. They treat near-regular textures as statistical distortions of an underlying congruent structure whose structural tiling again can be classified by the 17 symmetry groups. Based on [KCD+02], they claim that for any near-regular texture there is an underlying lattice that has a minimum distance from a related well-defined regular lattice. The process of finding this regular lattice, a so-called deformation field, is assisted by the user. After that, the deformation field is used as input for an analysis step to capture deformations, irregularities and even lighting and color variations. As a result, the user is able to modify the regularity and coloring of the output texture during synthesis. Nevertheless, this method only works for textures that are regular enough for an underlying pattern of a specific type to be found.

To provide more degrees of freedom and cover a wider range of texture types, Kim et al. [KLF12] tried to measure approximate structures by representing texture regularities in a symmetry space. The algorithm optimizes a function representing structural information, the low frequencies of an image, until its difference from the target representation is minimal. At the same time, high frequencies are preserved as accurately as possible. As a result, a given texture can be modified in a way that it matches a target symmetry, for example that of another texture, while maintaining important original characteristics. However, this only works if the target symmetry can be represented by frequencies lower than those of the source texture. Furthermore, it is not guaranteed that a transformation of the low frequencies will always maintain the high frequencies at the same time. Nonetheless, this technique is applicable to many real-world texture examples.

A very different approach was recently presented by Dong et al. [DWLS17]. They criticize that texture synthesis and modification are most of the time based on mathematical models and parameters that are difficult to use for a user without special knowledge. In contrast, they provide a framework to generate textures based on semantic descriptions such as “cellular”, “rough”, “granular” or “random”. These semantic descriptions are used to identify a corresponding mathematical texture generation model and appropriate parameters. The mapping between description and model was realized by setting up a database where each texture and its corresponding generation model is assigned several describing labels. Now, if the user wants to generate a new texture whose appearance fits a specific description, the best-matching generation model and parameters are chosen by nearest-neighbor search in this database. Although this is a good way to make texture generation accessible to users without special knowledge, the outcome is somewhat limited, since it is based purely on mathematical models and real-world texture samples are not taken into account.

2.8 Implications for this Thesis

As described in this chapter, a lot of research has been done in the field of texture analysis and synthesis during the last few decades. Starting with considerations on general texture perception in the 1960s, texture modeling approaches were developed shortly after. These in turn paved the way for numerous texture synthesis techniques. Nevertheless, most of these techniques try to optimize for the basic goal of texture synthesis, that is, generating a new, larger and tileable texture with an appearance similar to a given input sample. Impressive results have been shown especially in recent years; however, a plausible next step is to give a user more control over the synthesis outcome, for example by changing a texture's underlying structure. In Section 2.7, some approaches were presented that focus on synthesis output modification. However, this domain is still relatively young compared to pure texture synthesis and therefore offers potential for further research.

Inspired by ideas and techniques presented in this chapter, this thesis proposes an approach to analyze cellular textures (a group of textures located in the near-regular spectrum) and to synthesize new exemplars while being able to modify the original underlying structure.

Chapter 3

Fundamentals

This chapter will give a basic overview of the technologies and topics relevant to this thesis. It is written as a rather loose collection of different methods and related domains and is therefore primarily intended as a reference chapter to provide more detailed information on the approaches and background knowledge referenced throughout the following chapters.

3.1 Textures

In the previous chapters of this thesis, textures were roughly introduced as images that get mapped onto an object's surface to enhance its visual appearance. In this section, textures will be explained in more detail by presenting different texture types and their application areas.

3.1.1 Texture Types

In the majority of rendered virtual three-dimensional scenes today, objects are represented as polygonal meshes. A polygonal mesh is a combination of vertices, edges and faces that form the surface shape of a virtual 3D object [AHH08]. Because of that, one way to add surface details to an object is to model them using geometric primitives. However, besides becoming less practical the more detailed the surface representation should be, this can lead to highly complex polygonal meshes whose processing and computation become more and more expensive. An alternative is to instead map textures that provide the desired surface details onto a low-polygon representation of the object.

[Figure not included in this excerpt]

Figure 3.1: A normal map can be used to make a surface look less flat and therefore more realistic by encoding normal information for each pixel which can be accessed during lighting calculations.

There are 1D, 2D and 3D textures; however, due to the topic of this thesis, this section focuses on two-dimensional textures. In this context, textures are either synthetic or digitized images showing the desired object details. As an example, an image of a brick wall can be mapped onto a virtual vertical plane to represent this brick wall in 3D space. Especially for objects that are rendered at a certain distance from the viewer, this already looks satisfactory. However, if the viewer is close to the virtual wall, it might lack some realism. The wall probably looks extremely flat, and the whole area appears equally illuminated, regardless of the viewing angle and the materials displayed.

In this case, additional textures can be used to achieve a more realistic look. For example, specular maps or gloss maps are images that encode, for each pixel, how glossy or shiny a certain part of the image should be [AHH08]. During rendering, such an additional texture is used to modify the color values of the original texture depending on the lighting conditions, e.g. by making them brighter if light hits the object at a certain angle. In the example of the brick wall, they could be used to make the mortar appear matte in contrast to glossy bricks. In addition, bump maps or normal maps can be applied to simulate three-dimensional details in order to make a surface look rough and irregular (see Figure 3.1). Similar to gloss maps, they encode normal information that, when rendered, influences the lighting calculation. There are several other special textures for achieving certain effects whose explanation would go beyond the scope of this thesis.

[Figure not included in this excerpt]

(a) Regular (b) Near-Reg. (c) Irregular (d) Near-Stoch. (e) Stochastic

Figure 3.2: Some examples for each class of the texture spectrum.

As described above, textures have a strong influence on the visual quality of a virtual scene. Because of that, the textures themselves have to be of a particular quality. One source of photo-realistic textures is actual photographs. However, they are often of inadequate size, and capturing images of a surface in neutral lighting conditions and without perspective distortions requires either great effort in preparation or post-processing. Drawing textures by hand can result in aesthetically pleasing textures but is a tedious and time-consuming task that also requires a certain amount of artistic skill.

As seen in Chapter 2, synthetically generating new textures, either based on mathematical models or on given input samples, has therefore become an active field of research. It was also mentioned that textures can be classified based on their structural properties. To provide some visual examples, Figure 3.2 gives a rough overview of the full texture spectrum.

3.1.2 Texture Mapping

The previous section described different classifications and types of textures and how they can be used to achieve interesting visual effects. This subsection explains the basics of mapping an image onto a polygonal mesh.

In most applications, 2D textures are square images defined in their own coordinate system that addresses their color values. The axes of this coordinate system are called the u,v-axes, and the values along each axis typically lie in the range [0, 1], where the origin (0, 0) is located at the lower left corner of the texture. The pixels of a texture addressed in this coordinate system are called texels.

[Figure not included in this excerpt]

Figure 3.3: When a texture is mapped onto a triangle mesh, u,v-coordinates are calculated for each point inside one of the mesh's triangles with the help of barycentric coordinates. If the resulting coordinates do not directly address a specific texel, the final color value is calculated via bilinear interpolation of the four surrounding texels.

Polygonal meshes used to model virtual objects normally consist of triangles as their smallest elements. In order to map a particular part of a texture onto one of these triangles, each vertex of the triangle is assigned a fixed u,v-coordinate. When a triangle is rendered to the screen, the correct texture values for all pixels covered by the triangle have to be determined. This can be done with the help of barycentric coordinates, which define points relative to the vertices A, B and C of a triangle [Cox69]. Barycentric coordinates are triplets of weights (α, β, γ) for a triangle's vertices. A point P in the plane embedding the triangle is defined as P = αA + βB + γC, where γ = 1 − α − β. Thus, the vertices of the triangle are given by (1, 0, 0), (0, 1, 0) and (0, 0, 1), and if a point P lies inside the triangle, the values of α, β and γ are in the range [0, 1].

In the rendering process, barycentric coordinates can be calculated for each pixel P inside the triangle. The u,v-coordinates of P are the weighted sum of the u,v-coordinates assigned to the vertices of the triangle, where the weights are the barycentric coordinates of P. The calculated u,v-coordinates can then be used to look up a color value for the corresponding pixel in the texture. If the u,v-coordinates do not directly address a specific texel but lie in between, the final color value is determined by interpolating the color values of the four surrounding texels, called bilinear interpolation. See Figure 3.3 for an illustration of this process.
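The two steps described above, computing barycentric weights and bilinearly interpolating the four surrounding texels, can be sketched in a few lines of Python (a minimal illustration for grayscale textures; the function names are chosen freely for this example):

```python
def barycentric(p, a, b, c):
    """Barycentric weights (alpha, beta, gamma) of point p in triangle a, b, c."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    alpha = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    beta = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return alpha, beta, 1.0 - alpha - beta

def bilinear_lookup(texture, u, v):
    """Sample a texture (list of rows of grayscale texels) at u,v in [0, 1]."""
    h, w = len(texture), len(texture[0])
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # interpolate horizontally on both rows, then vertically between them
    top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
    bot = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```

The u,v-coordinate of a pixel is obtained by calling `barycentric` with the pixel position and forming the weighted sum of the vertex u,v-coordinates; the result is then passed to `bilinear_lookup`.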

3.2 Image Filters

In image processing, image filters can be used to apply a wide range of effects to a given image [GW06]. In most cases, a certain effect is achieved by convolution between a kernel matrix and the corresponding pixels of the target image.

The kernel or filter matrix is a small matrix containing weights for the currently considered pixel and its neighbors. During filtering, each pixel value of the filtered output image is defined by the sum of products of a pixel's original color value with the corresponding weight of the filter matrix. The center of the filter matrix thereby contains the weight for the currently considered pixel. As an example, using the kernel K = [0 0 0; 0 1 0; 0 0 0] would have no effect on the appearance of the output image; it is therefore called the identity filter. Other filters that do have an effect are shown in Figure 3.4.
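The filtering step described above can be illustrated by a naive Python sketch (borders are handled by clamping coordinates; for the symmetric kernels shown here, correlation and true convolution, which flips the kernel, coincide):

```python
def convolve(image, kernel):
    """Filter an image: each output pixel is the weighted sum of its neighborhood."""
    h, w = len(image), len(image[0])
    k = len(kernel) // 2  # kernel radius, assuming an odd-sized square kernel
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(-k, k + 1):
                for dx in range(-k, k + 1):
                    # clamp coordinates at the image border
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    acc += image[yy][xx] * kernel[dy + k][dx + k]
            out[y][x] = acc
    return out

identity = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]   # leaves the image unchanged
box_blur = [[1 / 9] * 3 for _ in range(3)]     # simple 3x3 averaging filter
```

Applying `identity` reproduces the input exactly, while `box_blur` replaces each pixel by the average of its 3 × 3 neighborhood.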

Another type of widely used filters are morphological image filters, i.e. non-linear operations that modify the shape or morphology of features in an image. The basic operations are erosion and dilation. Within a given radius, erosion shrinks the elements of a binary image by removing a layer of pixels from their boundaries. Because of that, small details can be eliminated and separations between different elements become larger. Dilation works the other way around and adds a new layer of pixels to the boundary of a binary element. In many cases, a combination of both operations is applied to an image, called opening or closing depending on the order of operations. For example, one use case of opening is to remove small artifacts from an image while preserving the size of important elements: during erosion, small artifacts completely disappear and are therefore not considered in the subsequent dilation step.
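A minimal Python sketch of erosion, dilation and opening on binary images, as described above (a square structuring element is assumed for simplicity, and pixels outside the image are treated as background):

```python
def erode(image, radius=1):
    """Erosion: a pixel stays 1 only if every neighbor within the radius is 1."""
    h, w = len(image), len(image[0])
    return [[1 if all(0 <= y + dy < h and 0 <= x + dx < w and image[y + dy][x + dx]
                      for dy in range(-radius, radius + 1)
                      for dx in range(-radius, radius + 1)) else 0
             for x in range(w)] for y in range(h)]

def dilate(image, radius=1):
    """Dilation: a pixel becomes 1 if any neighbor within the radius is 1."""
    h, w = len(image), len(image[0])
    return [[1 if any(0 <= y + dy < h and 0 <= x + dx < w and image[y + dy][x + dx]
                      for dy in range(-radius, radius + 1)
                      for dx in range(-radius, radius + 1)) else 0
             for x in range(w)] for y in range(h)]

def opening(image, radius=1):
    """Opening removes small artifacts while keeping larger elements."""
    return dilate(erode(image, radius), radius)
```

An isolated single pixel vanishes under `opening`, whereas a 3 × 3 block survives the erosion at its center and is restored by the subsequent dilation.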

[Figure not included in this excerpt]

(a) Original (b) Gaussian Blur 3 x 3 (c) Edge Detection (Sobel)

Figure 3.4: Gaussian Blur and Edge Detection filters independently applied to the “Lena”-Image [HS73], which in this case has a resolution of 300 x 300 pixels.

3.3 Image Pyramids

Inspired by the field of signal processing, where a signal can be decomposed into its sub-bands, image pyramids are a hierarchical representation of an image at different resolutions. The most popular pyramid types are Gaussian [ABBO84], Laplacian [BA83] and steerable pyramids [SF95]. In general, all types make use of the same two basic operations, expand and reduce. The expand operation generates the next pyramid level with higher resolution by upsampling and interpolating the current one. In contrast, the reduce operation creates the next coarser resolution by filtering and downsampling the image representing the current pyramid level. Up- and downsampling usually happens with a factor of 2 or 0.5, respectively. What distinguishes the methods is the filtering step applied before downsampling when generating lower resolutions.

[Figure not included in this excerpt]

Figure 3.5: Example of a Gaussian Pyramid (top) and a Laplacian Pyramid (bottom) created for the “Lena”-Image.

3.3.1 Gaussian Pyramids

In Gaussian pyramids, each level is filtered with a Gaussian kernel before it is scaled down. In doing so, each pyramid level acts as a low-pass filtered version of the next higher resolution. That is, every image is a compressed representation of its predecessor in which high frequencies, small details and redundant information are reduced, while the low frequencies representing the major structures are kept.
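The reduce step can be sketched as follows (a minimal Python illustration using a 3 × 3 binomial approximation of the Gaussian kernel and clamped borders; production code would use a larger separable kernel):

```python
GAUSS_3x3 = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]  # binomial weights, sum to 16

def reduce_level(image):
    """One 'reduce' step: Gaussian low-pass filter, then drop every other pixel."""
    h, w = len(image), len(image[0])
    blurred = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)  # clamp at the border
                    xx = min(max(x + dx, 0), w - 1)
                    acc += image[yy][xx] * GAUSS_3x3[dy + 1][dx + 1]
            blurred[y][x] = acc / 16.0
    return [row[::2] for row in blurred[::2]]  # subsample by a factor of 2

def gaussian_pyramid(image, levels):
    """Repeatedly reduce the image to build the full pyramid."""
    pyramid = [image]
    for _ in range(levels - 1):
        pyramid.append(reduce_level(pyramid[-1]))
    return pyramid
```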

3.3.2 Laplacian Pyramids

Laplacian pyramids are based on Gaussian pyramids. However, instead of simply storing blurred images, each level stores the difference image between two consecutive low-pass images, except for the lowest resolution. Here, the subtraction cannot be applied because there is no corresponding lower level, so it is stored as is. The advantage is that each level acts as a bandpass filter for a particular frequency range instead of a low-pass filter. A bandpass filter retains frequencies of a specific range while attenuating higher and lower frequencies. Because of that, Laplacian pyramids can be used to enhance image features like edges while reducing noise at the same time. Furthermore, they allow for modifications in a certain frequency band, and the original image can even be reconstructed completely by summing up the layers of the Laplacian pyramid.
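The construction and exact reconstruction described above can be illustrated by the following Python sketch (for simplicity, 2 × 2 averaging and nearest-neighbor upsampling stand in for the Gaussian-based reduce and expand of Burt and Adelson, and image sizes are assumed to be powers of two; the reconstruction is still exact because each level stores the difference to the expanded coarser level):

```python
def reduce_2x2(image):
    """Simplified 'reduce': 2x2 averaging stands in for the Gaussian low pass."""
    return [[(image[y][x] + image[y][x + 1]
              + image[y + 1][x] + image[y + 1][x + 1]) / 4.0
             for x in range(0, len(image[0]) - 1, 2)]
            for y in range(0, len(image) - 1, 2)]

def expand_nn(image, target_h, target_w):
    """Simplified 'expand': nearest-neighbor upsampling to the target size."""
    return [[image[min(y // 2, len(image) - 1)][min(x // 2, len(image[0]) - 1)]
             for x in range(target_w)] for y in range(target_h)]

def laplacian_pyramid(image, levels):
    gauss = [image]
    for _ in range(levels - 1):
        gauss.append(reduce_2x2(gauss[-1]))
    lap = []
    for i in range(levels - 1):
        up = expand_nn(gauss[i + 1], len(gauss[i]), len(gauss[i][0]))
        # each level stores the difference to the expanded coarser level
        lap.append([[a - b for a, b in zip(ra, rb)]
                    for ra, rb in zip(gauss[i], up)])
    lap.append(gauss[-1])  # the coarsest level is stored as is
    return lap

def reconstruct(lap):
    """Summing the expanded levels recovers the original image."""
    image = lap[-1]
    for level in reversed(lap[:-1]):
        up = expand_nn(image, len(level), len(level[0]))
        image = [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(level, up)]
    return image
```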

3.3.3 Steerable Pyramids

Similar to Laplacian pyramids, steerable pyramids decompose an image into its frequency bands. Additionally, steerable filters allow for capturing differently oriented frequency bands at each level. This enables the detection of oriented image features that cannot be captured by Laplacian pyramids.

3.4 Worley-Noise

In the field of texture synthesis, textures can be generated on the basis of a mathematical description. These textures are called procedural textures and are in most cases based on some kind of noise function like Perlin noise [Per85] or simplex noise [Per01]. Especially for textures near the stochastic end of the texture spectrum, such as textures that represent marble, water or clouds in the sky, this is a useful alternative to manual texture creation.

Worley noise, also called cellular noise, is a noise function introduced by Steven Worley in 1996 [Wor96]. The idea is to first randomly distribute feature points in two- or three-dimensional space. After that, when evaluating the function value of an arbitrary point x located in the same space, the (Euclidean) distances from this point to the feature points are calculated. Here, F1(x) is defined as the distance from a point x to its closest feature point; analogously, Fi(x) denotes the distance to the i-th closest feature point. When generating a new texture, the values of this noise function can be mapped into a color space or used for normal displacement. In combination with different distance norms (e.g. the Manhattan or maximum norm), visually interesting effects can be achieved (see Figure 3.6).

[Figure not included in this excerpt]

(a) Distance function F1(x) (b) Distance function F3(x) − F2(x) (c) Maximum norm and alternating aspect ratios

Figure 3.6: Some textures generated based on (modified) Worley noise. Standard tileable Worley noise can be seen in (a). In (b), a different distance function was used whose values were mapped into a color space. The image in (c) is based on the maximum norm using alternating aspect ratios.
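A direct, brute-force evaluation of Fi(x) on a pixel raster can be sketched in a few lines of Python (real implementations, including Worley's original one, partition space into a grid of cells so that only nearby feature points have to be considered):

```python
import math
import random

def worley(width, height, n_points, i=1, seed=42):
    """Evaluate F_i(x), the Euclidean distance to the i-th closest feature
    point, for every pixel of a width x height raster."""
    rng = random.Random(seed)
    features = [(rng.uniform(0, width), rng.uniform(0, height))
                for _ in range(n_points)]
    # for each pixel, sort all feature distances and pick the i-th smallest
    return [[sorted(math.hypot(x - fx, y - fy)
                    for fx, fy in features)[i - 1]
             for x in range(width)]
            for y in range(height)]
```

Mapping the returned values to gray levels already yields the characteristic cellular pattern; different distance norms are obtained by swapping `math.hypot` for e.g. the Manhattan distance.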

3.5 Voronoi Diagrams

Traditionally, a Voronoi diagram is the partitioning of a two-dimensional space into multiple regions based on the distance of all points in that space to a set of n seed points Si [AK00]. Each region i belongs to one of the n seed points and is defined as the sub-space in which all points are closer to seed point i than to any other one. The polygons resulting from these regions are represented by the edges that separate each region from another and are sometimes called Voronoi cells (see Figure 3.7). Thus, an edge between two regions with seed points Si and Sj is defined by all points Pij where ||Si − Pij|| = ||Sj − Pij||. Although this definition is very similar to Worley noise, Voronoi diagrams were already examined by René Descartes in 1644 [Des44] and Dirichlet in 1850 [Lej50]. Later, they were further investigated by Georges Voronoi and extended to higher dimensions, hence the present naming [Vor08]. The fields of application of Voronoi diagrams are diverse. Besides computer science, they are used in crystallography, geophysics, meteorology and even epidemiology, e.g. to model the spread of certain pathogens.
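The partitioning described above can be illustrated by a brute-force Python sketch that labels every pixel of a raster with the index of its closest seed point (practical implementations compute the cell polygons directly, e.g. with Fortune's sweep-line algorithm):

```python
def voronoi_regions(width, height, seeds):
    """Label each pixel with the index of its closest seed point (Euclidean)."""
    def closest(x, y):
        # squared distances suffice for comparison, avoiding the square root
        return min(range(len(seeds)),
                   key=lambda i: (x - seeds[i][0]) ** 2 + (y - seeds[i][1]) ** 2)
    return [[closest(x, y) for x in range(width)] for y in range(height)]
```

The boundaries between differently labeled pixels approximate the Voronoi edges, i.e. the points equidistant to two seeds.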

When speaking of Voronoi diagrams, one also has to mention their dual relative. The Delaunay triangulation is the associated dual tessellation of a Voronoi diagram; it was also studied by Georges Voronoi and later extended by Boris Delone [Del34]. A Delaunay triangulation is a triangulation of a set of vertices in which the circumcircle of each triangle contains no other vertex of this set. This is called the empty circle property. With the help of Voronoi diagrams, such a triangulation can be

[Figure not included in this excerpt]

Figure 3.7: A Voronoi diagram (left) and its dual Delaunay Triangulation (right).

obtained by using the Voronoi seeds as vertices and connecting each vertex of neighboring cells with an edge. The triangle mesh created this way shows some useful properties.

On the one hand, for each triangle of the mesh the largest possible inner angles are generated. Furthermore, all triangles of the mesh have very similar shapes, resulting in high similarity and regularity of the mesh. These are important characteristics for meshes that are subject to further calculations. For example, if a mesh is used in scientific simulations such as finite element methods, research has shown that simulation errors occur if the interior angle between two edges of a mesh element is very sharp or reflex. Livesu et al. [LSVT15] even state that a single concave or inverted mesh element makes a mesh unusable for simulations. Because of that, interior angles are often restricted to lie between 30° and 120°. These criteria are also influenced by the overall symmetry and aspect ratio of a single element and hence also affect the simulation quality. Another aspect is that the convex hull of the set of Voronoi seeds is part of the dual structure, so that calculating the dual structure of a Voronoi diagram provides a method for determining the convex hull of a set of points.

3.6 Principal Component Analysis

Figure 3.8: Plot of measured data (a) and the Eigenvectors found by PCA (b).

When analyzing measured or gathered data on a specific topic, one of the key tasks is to detect and separate important from less important patterns within the data, e.g. to draw conclusions about correlations and implications. Often, the goal is to group and visualize the most important features afterwards. If a data set has only three dimensions, it can already be challenging to visualize the data in an efficient way, and it becomes impossible for several hundred dimensions.

[Figure not included in this excerpt]

Principal Component Analysis (PCA) is a multivariate statistical procedure to reduce the dimensionality of a data set [Jol02]. Although it was first introduced by Pearson in 1901 [FRS01], its popularity increased greatly with the advent of electronic computers. The idea is to transform a large number of interrelated variables into a smaller number of uncorrelated variables, called principal components, which still capture the essential information of the original data. Each principal component captures the largest remaining variation of the data along one of its dimensions. In general, there is a principal component for each dimension of the data set, where each principal component is orthogonal to all others. Moreover, they are typically ranked according to the variance of the data along them. For brevity, only the basic concepts of PCA are explained by the following example instead of giving an in-depth explanation of the underlying mathematical principles.

Imagine the chair of computer graphics at a university has lost some of its employees due to a tragic accident in virtual reality and is therefore looking for new assistants. There are already 15 student applicants for the vacancy. During the job interviews, each applicant's qualities were scored in terms of sympathy and expertise, both of which are considered equally important. After that, the scores of all students were plotted, resulting in Figure 3.8.

One way to find the best candidates for the job is to apply PCA to find the axes of the largest and second largest variation in the data. These axes are returned in terms of Eigenvectors, pointing into the direction of the found variance, and corresponding Eigenvalues, indicating how much variance there is along that direction. The found directions can be used to form a new coordinate system and re-frame the measured data (see Figure 3.9).
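For the two-dimensional example above, PCA reduces to an eigen-decomposition of the 2 × 2 covariance matrix, which has a closed-form solution. The following Python sketch illustrates this (variable names are chosen freely; higher-dimensional data would use a general eigen-solver instead):

```python
import math

def pca_2d(points):
    """PCA on 2D data: Eigenvalues and first Eigenvector of the covariance matrix."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # sample covariance matrix [[sxx, sxy], [sxy, syy]]
    sxx = sum((p[0] - mx) ** 2 for p in points) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in points) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1)
    # closed-form Eigenvalues of a symmetric 2x2 matrix
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    root = math.sqrt(max(tr * tr / 4 - det, 0.0))
    l1, l2 = tr / 2 + root, tr / 2 - root
    # Eigenvector for the largest Eigenvalue l1
    if abs(sxy) > 1e-12:
        v1 = (l1 - syy, sxy)
    else:
        v1 = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    norm = math.hypot(*v1)
    return (l1, l2), (v1[0] / norm, v1[1] / norm)
```

For data scattered along the diagonal, the first principal component points along the direction (1, 1)/√2 and the second Eigenvalue is (close to) zero, exactly the re-framing sketched in Figure 3.9.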

[...]

Excerpt out of 100 pages

Details

Title
Parameter Analysis and Synthesis of Cellular Textures
College
RWTH Aachen University  (Visual Computing Institute)
Grade
1,0
Author
Year
2018
Pages
100
Catalog Number
V1194367
ISBN (Book)
9783346638748
Language
English
Tags
Texture, Texture Analysis, Synthesis, Image Processing, Cellular Texture, Voronoi, Optimization, User Driven, Computer Graphics, Gradient Descent
Quote paper
Thomas Conraths (Author), 2018, Parameter Analysis and Synthesis of Cellular Textures, Munich, GRIN Verlag, https://www.grin.com/document/1194367
