Developing a Dental Surgery Simulator for Teeth Drilling and Mandible Cutting


Bachelor Thesis, 2015

50 Pages, Grade: 1.2


Excerpt


CONTENTS

1 Introduction

2 Preliminaries
2.1 Object Representation
2.1.1 Voxels
2.1.2 Marching Cubes Algorithm
2.1.3 Point Shells
2.1.4 Polygonal Meshes
2.1.5 Distance Fields
2.2 Haptic Rendering
2.2.1 Collision Detection
2.2.2 Collision Response
2.2.3 Direct Rendering vs. Virtual Coupling
2.3 Rendering Frequencies

3 Related Work

4 Methods and Implementation
4.1 Object Representation
4.1.1 Dental Tools
4.1.2 Volume Object
4.1.3 Implementation
4.1.3.1 Dental Tools
4.1.3.2 Volume Object
4.2 Collision Detection
4.2.1 Spatial Hashing
4.2.2 Dental Bur
4.2.3 Dental Saw
4.2.4 Collision Response
4.2.5 Handling Collisions
4.2.6 Erosion
4.2.7 Haptic Device

5 Evaluation

6 Conclusion and Future Work
6.1 Conclusion
6.2 Future Work

INTRODUCTION

Medical operations and surgical interventions require an enormous amount of skill and experience. Traditionally, medical students obtain their surgical skills by practicing on cadavers, on animals, or directly on patients. However, all of these training methods have several disadvantages. Cadavers are usually very expensive and cannot be reused. Furthermore, students have to deal with differing tissue properties and a lack of hemodynamic features. Animal anatomy commonly differs from human anatomy, and practicing on patients bears the risk of endangering the patient’s health.

With ongoing developments in the area of virtual reality (VR), virtual surgery simulators have turned out to be a safe and repeatable alternative to the traditional training methods. A medical simulator provides a virtual training environment in which the trainee can interact with virtual objects and receive physically plausible force feedback via a haptic device connected to the simulator. In this context, the calculation of haptic feedback resulting from virtual object manipulation is called haptic rendering. In order to provide realistic feedback on the performed user actions, real-time performance of collision detection and collision response is indispensable.

This thesis focuses on virtual solid body manipulation with haptic feedback at real-time frame rates, such as teeth drilling and bone sawing. When setting up the simulation, appropriate representations for the objects of study and the medical tools have to be found. To provide realistic feedback when using the simulator, collision detection and collision response have to run at a frequency of around 1000 Hz. These components therefore have to be separated from the visual component of the simulator, which usually runs at a frequency of around 30 Hz or higher. Efficient collision detection techniques and careful coordination of the simulator’s concurrent threads are thus essential to achieve a smooth interplay between all relevant components.

The next chapter presents background knowledge on the techniques and methods used or referenced while developing the simulator. Chapter 3 introduces existing research related to this thesis by presenting various approaches, including their advantages and disadvantages. After that, the methods used and their implementation are described (Chapter 4), followed by an evaluation of the simulated scenarios (Chapter 5). Finally, Chapter 6 concludes the development process and gives a short overview of work and improvements that can be done in the future.

PRELIMINARIES

This chapter will give a basic overview of the technologies and topics relevant to this thesis. It will introduce the most common 3D object representation methods, collision detection basics and aspects which are crucial in the context of interactive real-time applications, especially for haptic feedback.

2.1 Object Representation

Visual representation of a simulation environment is one of the main components of a virtual reality based surgery simulator. Typically, a virtual 3D world is defined by a Cartesian coordinate system that allows orientation and object positioning in this world (World Space). Because of their complexity, virtual 3D objects are normally defined in their own Cartesian coordinate system (Local Space). In order to position and move objects in the virtual world, an object’s local coordinates are mapped to world coordinates by translation, rotation and scaling. In a virtual scene, objects are often separated into static (non-moving) and dynamic (moving) objects. For object representation, two fundamental classes of object modeling techniques can be distinguished: volume representations and surface representations [2]. In this context, voxels, point shells, polygonal meshes and distance fields are important ways to represent objects or object surfaces, especially for collision detection purposes.
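To make the mapping concrete, the following minimal C++ sketch places a local-space vertex into world space by applying a scale, a rotation (about the z-axis only, for brevity) and a translation. The function and type names are illustrative assumptions; a real simulator would typically use full 4x4 transformation matrices from a math library.

```cpp
// Illustrative sketch: mapping a local-space vertex into world space via
// scale, rotation (about the z-axis only, for brevity) and translation.
// Hypothetical names; a real simulator would use full 4x4 matrices.
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

const double kPi = 3.14159265358979323846;

Vec3 localToWorld(const Vec3& p, double scale, double angleZ, const Vec3& t) {
    // Scale first, then rotate about the z-axis, then translate into world space.
    double sx = p.x * scale, sy = p.y * scale, sz = p.z * scale;
    double c = std::cos(angleZ), s = std::sin(angleZ);
    return { c * sx - s * sy + t.x,
             s * sx + c * sy + t.y,
             sz + t.z };
}

int main() {
    Vec3 vertex{1.0, 0.0, 0.0};   // defined in the object's local space
    Vec3 world = localToWorld(vertex, 2.0, kPi / 2.0, {5.0, 0.0, 0.0});
    std::printf("world position: (%.2f, %.2f, %.2f)\n", world.x, world.y, world.z);
    return 0;
}
```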

2.1.1 Voxels

Volumetric models are able to represent inner object structures. One option to build these models out of volumetric structuring elements is to use so-called voxels. Voxels are values on a three-dimensional regular grid which store specific information such as the density of the structure they represent. Viewed as a whole, all voxels of a certain value belong to one object, or at least to a specific part of an object. For medical purposes, volumetric models of, for example, bones or organs can be created from Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) scans [18].
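The following is a minimal, illustrative sketch of such a regular voxel grid in C++, storing one density value per voxel in a flat array; the names and the linear indexing scheme are assumptions for demonstration, not the data structure used in the thesis.

```cpp
// Minimal sketch of a regular voxel grid storing one density value per cell,
// as could be filled from CT/MRI slice data. Names and the linear indexing
// scheme are illustrative assumptions.
#include <cstddef>
#include <vector>

struct VoxelGrid {
    std::size_t nx, ny, nz;          // grid resolution
    double spacing;                  // edge length of one voxel (e.g. in mm)
    std::vector<float> density;      // nx * ny * nz density values

    VoxelGrid(std::size_t nx_, std::size_t ny_, std::size_t nz_, double spacing_)
        : nx(nx_), ny(ny_), nz(nz_), spacing(spacing_),
          density(nx_ * ny_ * nz_, 0.0f) {}

    // Map a 3D voxel coordinate to the flat storage index.
    std::size_t index(std::size_t i, std::size_t j, std::size_t k) const {
        return i + nx * (j + ny * k);
    }

    float& at(std::size_t i, std::size_t j, std::size_t k)       { return density[index(i, j, k)]; }
    float  at(std::size_t i, std::size_t j, std::size_t k) const { return density[index(i, j, k)]; }
};
```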

2.1.2 Marching Cubes Algorithm

Since volumetric models do not contain explicit surface representations which could be used for graphics rendering, such surfaces can be extracted from a voxel grid by means of the Marching Cubes algorithm [12]. The Marching Cubes algorithm divides a voxel grid into logical cubes consisting of eight adjacent voxels. The algorithm compares the values at the cube’s vertices with an object-specific threshold and determines whether each vertex lies inside or outside the represented object. Based on this comparison, a surface patch intersecting the cube is calculated (Figure 2.1). The calculation then proceeds with the next neighboring cube. Once the computation is finished, a surface representation of the object has been created. Additionally, some enhanced variants of the algorithm compute a normal vector for each vertex of the surface.


Figure 2.1: Example of surface calculation by Marching Cubes algorithm. The blue vertices represent voxels with a value below the specified threshold. Thus, they belong to the inner part of the object. Image based on: [12]
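As an illustration of the classification step described above, the following C++ sketch packs the comparison of a cube’s eight corner values against the iso-threshold into an 8-bit case index (0-255). A complete Marching Cubes implementation would then consult the standard edge and triangle lookup tables to emit the surface patch, which is omitted here; all identifiers are illustrative.

```cpp
// Sketch of the classification step of Marching Cubes: the eight corner
// values of one logical cube are compared against the iso-threshold and
// packed into an 8-bit case index (0..255). The lookup tables that map this
// index to triangles are omitted.
#include <array>
#include <cstdint>

std::uint8_t marchingCubesCaseIndex(const std::array<float, 8>& corner, float threshold) {
    std::uint8_t caseIndex = 0;
    for (int i = 0; i < 8; ++i) {
        if (corner[i] < threshold) {   // corner lies inside the object
            caseIndex |= static_cast<std::uint8_t>(1u << i);
        }
    }
    return caseIndex;   // 0 or 255: cube entirely outside/inside, no surface
}
```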

2.1.3 Point Shells

A point shell is a set of points lying on the surface of an object and is therefore another option for creating a surface representation of an object. One way of creating a point shell representation is to use the collection of all surface voxel centers of an object, which then define the positions of the point shell points [15]. Hence, the surface resolution depends on the resolution of the voxel grid. Furthermore, each point is commonly assigned a surface normal vector pointing inwards where possible. Among other uses, these normals can be used for force computation in the haptic rendering process (see section 2.2).
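A hedged sketch of this construction is shown below: every filled voxel with at least one empty 6-neighbor contributes its center as a shell point, together with an approximate inward-pointing normal obtained by averaging the directions towards its filled neighbors. Grid access is abstracted behind a callback, and all names are illustrative rather than taken from the thesis implementation.

```cpp
// Sketch: derive a point shell from a voxel grid. Surface voxels contribute
// their centres as shell points with an approximate inward-pointing normal.
#include <cmath>
#include <functional>
#include <vector>

struct ShellPoint { double x, y, z; double nx, ny, nz; };

std::vector<ShellPoint> buildPointShell(int nx, int ny, int nz, double spacing,
                                        const std::function<bool(int, int, int)>& filled) {
    static const int off[6][3] = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
    std::vector<ShellPoint> shell;
    for (int k = 0; k < nz; ++k)
        for (int j = 0; j < ny; ++j)
            for (int i = 0; i < nx; ++i) {
                if (!filled(i, j, k)) continue;
                double gx = 0, gy = 0, gz = 0;
                bool onSurface = false;
                for (const auto& o : off) {
                    int ni = i + o[0], nj = j + o[1], nk = k + o[2];
                    bool inside = ni >= 0 && nj >= 0 && nk >= 0 &&
                                  ni < nx && nj < ny && nk < nz && filled(ni, nj, nk);
                    if (inside) { gx += o[0]; gy += o[1]; gz += o[2]; }
                    else        { onSurface = true; }       // has an empty neighbour
                }
                if (!onSurface) continue;                   // interior voxel, skip
                double len = std::sqrt(gx * gx + gy * gy + gz * gz);
                if (len > 0) { gx /= len; gy /= len; gz /= len; }
                // Voxel centre as shell point; normal points towards filled
                // neighbours, i.e. inwards.
                shell.push_back({(i + 0.5) * spacing, (j + 0.5) * spacing,
                                 (k + 0.5) * spacing, gx, gy, gz});
            }
    return shell;
}
```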

2.1.4 Polygonal Meshes

When utilizing a surface representation directly, a 3D object can be approximated by a set of interconnected polygons, called a polygonal mesh. Most of the time triangles are used, each described by the coordinates of its vertices. Triangles always define a flat surface without curves or bends, which is a computational advantage. Depending on the mesh resolution, the level of detail of a virtual object can be quite high. Moreover, polygon rendering requires relatively little computation time because of hardware acceleration. However, surface-based models cannot represent inner object structures, which is why they are not suitable for cutting or drilling simulations.
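A minimal indexed triangle mesh, as sketched in the text, could look as follows in C++; vertices are shared and each triangle references three of them by index. This is purely illustrative.

```cpp
// Minimal indexed triangle mesh: shared vertices, triangles reference them
// by index. Illustrative only.
#include <array>
#include <cstdint>
#include <vector>

struct Vertex   { float x, y, z; };
struct Triangle { std::array<std::uint32_t, 3> v; };   // indices into the vertex list

struct TriangleMesh {
    std::vector<Vertex>   vertices;
    std::vector<Triangle> triangles;
};
```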

2.1.5 Distance Fields

The distance field representation of digital images and objects was first introduced by Rosenfeld et al. [17]. For each point within a distance field, a value is stored that gives the closest distance from this point to the surface of an object. When it is necessary to differentiate whether a point lies inside or outside of an object, a distance field can also be signed.
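As an illustration, the following C++ sketch samples the signed distance field of a sphere onto a regular grid, using the convention (referred to again in section 2.2.1) that values are negative inside the object and positive outside. The geometry and all names are assumptions for demonstration only; the actual tool geometry in the thesis is not a plain sphere.

```cpp
// Sketch of a signed distance field: the field of a sphere is sampled onto a
// regular grid, negative inside the object and positive outside.
#include <cmath>
#include <cstddef>
#include <vector>

// Analytic signed distance from point (x, y, z) to a sphere surface.
double sphereSignedDistance(double x, double y, double z,
                            double cx, double cy, double cz, double radius) {
    double dx = x - cx, dy = y - cy, dz = z - cz;
    return std::sqrt(dx * dx + dy * dy + dz * dz) - radius;   // < 0 inside
}

// Sample the field at the centre of every cell of an n^3 grid with given spacing.
std::vector<double> sampleSphereField(std::size_t n, double spacing,
                                      double cx, double cy, double cz, double radius) {
    std::vector<double> field(n * n * n);
    for (std::size_t k = 0; k < n; ++k)
        for (std::size_t j = 0; j < n; ++j)
            for (std::size_t i = 0; i < n; ++i)
                field[i + n * (j + n * k)] =
                    sphereSignedDistance((i + 0.5) * spacing, (j + 0.5) * spacing,
                                         (k + 0.5) * spacing, cx, cy, cz, radius);
    return field;
}
```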

2.2 Haptic Rendering

One concept of virtual reality based applications is immersion. Besides creating a realistic virtual environment, it is vital that the user receives realistic feedback when manipulating virtual objects. The process of generating feedback forces during interaction in a virtual environment, in order to create the illusion of touching virtual objects, is called haptic rendering [11]. The forces that arise while interacting in a virtual environment can either be perceived directly, between the skin and the environment, or through an object manipulated by the user which comes into contact with the environment. In the case of a dental surgery simulator, the perception of contact forces through a virtual tool is the relevant one. For the purpose of obtaining realistic haptic feedback, the haptic rendering process can be separated into two sub-processes: collision detection and collision response.

2.2.1 Collision Detection

Collision detection describes the problem of determining whether an object has intersected, contacted or approached another object, or overlapped relevant background scenery [6]. Especially for real-time applications such as virtual surgery simulators, efficient and fast collision detection methods are very important. In the context of haptic rendering, collision detection is the process of testing whether a virtual tool has violated or encountered environmental constraints [11]. Checking all polygon pairs of two objects for intersection typically has O(n²) complexity. Because of this, virtual objects are often encapsulated by simple geometrical shapes, so-called bounding volumes. The simplest bounding volumes are Axis-aligned Bounding Boxes (AABBs), Oriented Bounding Boxes (OBBs), which align their orientation with the orientation of the object they encapsulate, and spheres. Once all bounding volumes are calculated, the collision detection test starts by checking them for intersection. Only if the bounding volumes of two objects overlap are the objects themselves tested further. Furthermore, grouping bounding volumes into bounding volume hierarchies (BVHs) in a preprocessing step can further improve collision detection performance.
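The broad-phase idea can be illustrated with the following C++ sketch of overlap tests for axis-aligned bounding boxes and bounding spheres; only if such a test succeeds are the enclosed objects examined further. The structs and function names are illustrative.

```cpp
// Broad-phase overlap tests for two common bounding volumes.
struct AABB   { double minX, minY, minZ, maxX, maxY, maxZ; };
struct Sphere { double cx, cy, cz, radius; };

bool overlaps(const AABB& a, const AABB& b) {
    // Boxes overlap iff their intervals overlap on every axis.
    return a.minX <= b.maxX && a.maxX >= b.minX &&
           a.minY <= b.maxY && a.maxY >= b.minY &&
           a.minZ <= b.maxZ && a.maxZ >= b.minZ;
}

bool overlaps(const Sphere& a, const Sphere& b) {
    // Spheres overlap iff the centre distance is at most the sum of the radii.
    double dx = a.cx - b.cx, dy = a.cy - b.cy, dz = a.cz - b.cz;
    double r = a.radius + b.radius;
    return dx * dx + dy * dy + dz * dz <= r * r;
}
```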

Because of their structure, distance fields are also suitable for collision detection. Once the bounding volumes of two objects collide and one of the objects is represented by a distance field, the representation of the other object (e.g. its point shell points) can easily be checked against this distance field. For example, if the distance field is signed and its values are negative inside the represented object, the two objects have collided whenever the extracted distance value at a query point is negative. Furthermore, distance fields can be used to obtain collision-related penetration depths.
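A minimal sketch of this narrow-phase query, assuming a point shell on one side and a signed distance field (negative inside) on the other, might look as follows; the field is abstracted as a callable and all identifiers are illustrative.

```cpp
// Narrow phase: query every point-shell point against a signed distance field.
// A negative value means penetration; its magnitude is the penetration depth.
#include <array>
#include <functional>
#include <vector>

struct ContactPoint { double x, y, z; double penetration; };

std::vector<ContactPoint> findContacts(
        const std::vector<std::array<double, 3>>& shellPoints,
        const std::function<double(double, double, double)>& signedDistance) {
    std::vector<ContactPoint> contacts;
    for (const auto& p : shellPoints) {
        double d = signedDistance(p[0], p[1], p[2]);
        if (d < 0.0)                            // negative: inside the other object
            contacts.push_back({p[0], p[1], p[2], -d});
    }
    return contacts;
}
```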

2.2.2 Collision Response

In the context of haptic rendering, collision response is the process of computing the force feedback of colliding objects with respect to the environmental constraints given by the collision detection process. The commonly used approaches for collision response are constraint-, impulse- or penalty-based methods. In a constraint-based collision response approach, a moving object is subject to certain restrictions; to be more precise, penetrations between objects are not allowed. All objects are assigned object-specific constraints, and their behavior and movement are influenced by explicitly applying external forces. Thus, environmental constraints are enforced and penetrations can be prevented. In contrast, in impulse-based methods all contacts are handled via opposing impulses applied to the colliding objects to prevent penetrations [14]. Thereby, face or vertex normals can be considered to determine the impulse direction. Because no explicit constraints are considered, one object lying on top of another experiences many rapid, tiny collisions which are resolved by impulses occurring at a very high frequency. In order to avoid impulse calculations for infinitely many contact points, it is sufficient to consider only points that form the convex hull of the colliding objects [7]. Similarly, in a penalty-based collision response method penetrations are not prevented but penalized by using a spring-damper system which behaves like a harmonic oscillator [7]. At each contact point between two objects, a spring tries to remove the penetration. A repulsive penalty force is applied to the colliding object which increases with increasing penetration. Usually, a spring damping and a spring stiffness factor also influence the calculation. Thus, the contact force at each colliding point is calculated as a spring force.
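As an illustration of the penalty-based variant, the following C++ sketch turns a penetration depth into a repulsive spring force along the contact normal, reduced by a damping term on the normal velocity. The stiffness and damping constants and all names are illustrative placeholders, not values from the thesis.

```cpp
// Penalty-based contact force sketch: spring force along the contact normal,
// damped by the velocity component along that normal (spring-damper model).
#include <algorithm>

struct Vec3 { double x, y, z; };

Vec3 penaltyForce(double penetrationDepth,       // > 0 when objects overlap
                  const Vec3& contactNormal,     // unit normal pointing out of the obstacle
                  const Vec3& relativeVelocity,  // velocity of the penetrating point
                  double stiffness, double damping) {
    // Velocity along the outward normal; negative when moving further in.
    double vn = relativeVelocity.x * contactNormal.x +
                relativeVelocity.y * contactNormal.y +
                relativeVelocity.z * contactNormal.z;
    // Spring term grows with penetration; damping opposes inward motion.
    double magnitude = std::max(0.0, stiffness * penetrationDepth - damping * vn);
    return { magnitude * contactNormal.x,
             magnitude * contactNormal.y,
             magnitude * contactNormal.z };
}
```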

2.2.3 Direct Rendering vs. Virtual Coupling

As already indicated above, object manipulation and perception of the related feedback forces can happen in an indirect way, namely through a haptic device. A haptic device is specialized hardware equipped with sensors and motors that is designed to translate user movements into a virtual scene. Furthermore, it also displays to the user the haptic feedback that results from interactions with virtual objects in this scene. Haptic devices are used in a wide range of applications in areas such as entertainment, industry, science and medicine [4]. In the context of medical applications, haptic devices usually provide a stylus which is held by the user and used for interaction (Figure 2.2).


Figure 2.2: A PHANTOM Omni Haptic Device that makes it possible for users to touch and manipulate virtual objects.

Commonly, there are two main approaches for connecting the haptic device to a virtual tool: direct rendering and virtual coupling [11]. When applying direct rendering, the configuration of the haptic device is assigned directly to the virtual tool. Once collision detection is done and the resulting contact forces are calculated, the feedback is sent directly to the haptic device. The simplicity of this method is accompanied by several drawbacks. Especially when using penalty-based approaches, penetration values may exhibit strong variations and therefore lead to instability and vibration of the haptic device. In contrast, virtual coupling means separating the haptic device and virtual tool configurations to ensure system stability. After separation, device and tool are commonly connected by a viscoelastic spring. By evaluating the translational and rotational misalignment between the virtual tool and the haptic device configuration, the contact forces are calculated and transmitted to the user. This makes it possible to work with a so-called proxy object (the virtual tool). The proxy object then respects the environmental constraints while the configuration of the haptic device penetrates the virtual environment. This approach allows filtering of the calculated contact forces so that stable feedback is transmitted to the user.
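The translational part of such a coupling can be sketched as follows: the proxy and the device configuration are connected by a viscoelastic spring, and the force rendered on the device is derived from their misalignment rather than from raw penetration values. Rotational coupling and the constraint handling of the proxy are omitted; all names are assumptions.

```cpp
// Translational virtual coupling sketch: a spring-damper between the haptic
// device position and the (constrained) proxy position.
struct Vec3 { double x, y, z; };

Vec3 couplingForce(const Vec3& devicePos, const Vec3& proxyPos,
                   const Vec3& deviceVel, double stiffness, double damping) {
    // Spring pulls the device configuration towards the proxy; damping on the
    // device velocity keeps the feedback stable.
    return { stiffness * (proxyPos.x - devicePos.x) - damping * deviceVel.x,
             stiffness * (proxyPos.y - devicePos.y) - damping * deviceVel.y,
             stiffness * (proxyPos.z - devicePos.z) - damping * deviceVel.z };
    // The returned force is rendered on the device; its negative can be used
    // to drive the proxy within the simulation.
}
```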

2.3 Rendering Frequencies

Considering the update rates of the different threads running a simulator is a crucial aspect of quality assurance for the simulation. Once a virtual environment is prepared, it is important for the creation of a credible simulation that the 3D scene and movements within the scene are presented to the user smoothly. Research [3, 8] on this topic showed that users will not perceive a virtual simulation as smooth and continuous if the virtual scene and its visualization are updated less than about 30 times per second. Because of that, most simulators implement a visual thread updating at 30 Hz. Furthermore, physically correct interactions between objects can only be approximated in a virtual world. Because human tactile perception is extremely sensitive to vibrations in a range of 100 Hz to 1 kHz, it is necessary to choose an appropriate update rate for the haptic rendering thread [11]. Therefore, the haptic thread typically runs at a fast update rate of 1 kHz. Otherwise, virtual interactions can lead to an unstable and vibrating haptic device.
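The frequency split can be illustrated by the following C++ sketch, which runs a haptic loop at roughly 1 kHz and a visual loop at roughly 30 Hz on separate threads sharing state through a mutex. Real simulators typically use lock-free or double-buffered data exchange; this is only a minimal illustration of the timing structure, and the loop bodies are placeholders.

```cpp
// Two fixed-rate loops on separate threads: ~1 kHz haptics, ~30 Hz visuals.
#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>

struct SharedState { double toolX = 0, toolY = 0, toolZ = 0; };

int main() {
    SharedState state;
    std::mutex stateMutex;
    std::atomic<bool> running{true};

    std::thread hapticThread([&] {                      // ~1000 Hz
        auto next = std::chrono::steady_clock::now();
        while (running) {
            { std::lock_guard<std::mutex> lock(stateMutex);
              /* read device, collision detection + response, send force */ }
            next += std::chrono::microseconds(1000);
            std::this_thread::sleep_until(next);
        }
    });

    std::thread visualThread([&] {                      // ~30 Hz
        auto next = std::chrono::steady_clock::now();
        while (running) {
            { std::lock_guard<std::mutex> lock(stateMutex);
              /* copy state and redraw the scene */ }
            next += std::chrono::milliseconds(33);
            std::this_thread::sleep_until(next);
        }
    });

    std::this_thread::sleep_for(std::chrono::seconds(1));   // run briefly for the demo
    running = false;
    hapticThread.join();
    visualThread.join();
    return 0;
}
```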

RELATED WORK

In the last decades, a lot of research has been done in the area of virtually simulating medical operations. Especially in the field of dental treatment, such as teeth drilling and jawbone correction, there exist quite a number of prototype simulators and approaches, as well as studies on how to implement such a simulator. In this chapter, some of these approaches will be presented.

The approach of Kechagias et al. [9] uses three types of volumes to represent the drilling tool (dental bur). A spherical, a cylindrical and a cylinder-conical volume represent a wide tool variety and therefore a more realistic scenario than using only a sphere volume representation. In order to represent the teeth, they followed the volumetric voxel approach to simulate a solid object. For input they used a simple computer mouse, so no haptic feedback can be obtained. When pointing at the tooth, the drilling operation is simulated and the 3D model is modified. This is done by analyzing the drilling direction of the bur and removing or adding 3D structuring elements (voxels) from or to the volume in the calculated direction. While using the simulation, the user is limited to moving the dental bur, as only the virtual teeth can be rotated. This means that changing the viewpoint of the scenery can only be achieved by rotating the teeth object with the computer mouse or by changing the perspective angle via an edit dialog box. The drilling effect is then applied at the cursor’s location.

Marras et al. [13] developed a virtual teeth drilling system for training purposes. In order to achieve a more realistic scenario than [9], they created a 3D head and oral cavity surface model based on anatomical and CT data of a male cadaver. The volumetric (voxel-based) representation of a tooth is calculated by analyzing a series of slice images in which the structures of interest (external tooth surface and root canals) have been segmented [13]. They needed both the surface and the volumetric object representation to handle collision detection. For the purpose of achieving a correct 3D erosion, they used the collision point of the tip of the drilling tool with the surface representation of the tooth to find the corresponding position in the volumetric representation. Once the erosion process is completed and the voxels are cut out, the Marching Cubes algorithm is applied locally to update the visual surface representation. To provide force feedback during the operation, a Phantom haptic device can be used. In their paper, Marras et al. only dealt with the visual part of the simulation; haptic feedback and force computation, and therefore visual and haptic thread synchronization, were only mentioned as future work.

Since spherical shapes are not suitable for every dental tool, Kim et al. [10] presented a simulation method that can handle arbitrarily shaped tools. To this end, signed distance fields are used to represent the dental tools. The bone is represented by volume data (voxels). For each of these voxels, the remaining bone is described by a density property which is affected by the drilling process. For collision detection and haptic rendering purposes, the bone volume is replaced by a point shell representation which covers all voxels on the bone surface and contains a normal vector for every point shell point. Furthermore, a visual representation is obtained by applying the Marching Cubes algorithm to create a triangle mesh for the bone volume. In order to handle collision detection, a bounding sphere for the bur is generated. After that, each point shell point of the bone inside the drill bit’s bounding sphere is queried against the signed distance field of the drill, which describes the distance to the drill’s surface. If the extracted distance value is negative, a collision has occurred. Once a voxel is removed, a boundary-fill algorithm [10] closes the resulting hole in the point shell. Because all point shell points are updated every haptic cycle (1000 Hz), there is no need for synchronization between a visual and a haptic loop. Based on the work of [3], they used a penalty-based point-contact model to resolve collisions and calculate the haptic force feedback. A penalty force is computed for every point in contact and is composed of the contact stiffness value, the signed distance field value and the negated inward normal of the point. To mitigate the problem of an unstable haptic device caused by limited device stiffness, the calculated forces are not sent directly to the haptic device. Instead, the virtual drill configuration (rotation and position) and the haptic manipulandum position are separated and interconnected with virtual springs (virtual coupling). These springs try to align the virtual haptic object with the manipulandum and are used to filter the feedback force before it is sent to the haptic device.

