My sincere thanks to Prof. M.J.M. Rao, Head of the Department of Electronics and Instrumentation Engineering, Gandhi Institute of Engineering and Technology, Gunupur, for his encouragement and valuable suggestions during the period of my thesis work.
I would like to express my gratitude to my thesis guide Prof. Subodh Kumar Panda for his guidance, advice and constant support throughout my thesis work. I would like to thank him for being my advisor here at Gandhi Institute of Engineering and Technology, Gunupur.
Next, I would like to express my gratitude to my co-guide Prof. (Dr.) Dulu Patnaik, Dean Academics & Vice Principal, GITA, Bhubaneswar, for his guidance, advice and constant support throughout my thesis work.
I express my heartfelt gratitude to the Principal, GIET, Gunupur and the Principal, GITA, Bhubaneswar for permitting me to carry out this thesis work.
I would like to thank all faculty members and staff of the Department of Electronics and Instrumentation Engineering, G.I.E.T. Gunupur for their generous help in various ways for the completion of this thesis.
I would like to thank all my friends and my classmates for all the thoughtful and mind stimulating discussions, which prompted me to think beyond the obvious. I have enjoyed their companionship so much during my stay at GIET, Gunupur.
I am highly indebted to my parents for their love, sacrifice, support and inspiration.
SANTOSH KUMAR SAHOO
With a growing population and a rising pace of development, the problem of road accidents is also increasing rapidly. The basic concept, therefore, is to develop a model that can serve as a security system for society and can monitor vehicle speed.
A License Plate Recognition (LPR) system is one kind of intelligent transport monitoring system and is of considerable interest because of its potential applications in highway electronic toll collection and traffic monitoring systems. These applications put high demands on the reliability of an LPR system. A lot of work has been done on LPR systems for Korean, Chinese, European and US license plates, which has generated many commercial products. However, little work has been done on Indian license plate recognition systems.
The purpose of this thesis was to develop a real time application which recognizes license plates from cars at a gate, for example at the entrance of a parking area or a border crossing. The system, based on regular PC with video camera, catches video frames which include a visible car license plate and processes them. Once a license plate is detected, its digits are recognized, displayed on the User Interface or checked against a database. The focus is on the design of algorithms used for extracting the license plate from a single image, isolating the characters of the plate and identifying the individual characters.
The proposed system has been implemented using Vision Assistant 7.1 and LabVIEW 7.1. The performance of the system has been investigated on real images of about 100 vehicles. A recognition rate of about 98% shows that the system is quite efficient.
Organization of Thesis
Chapter 1:- The first chapter briefly reviews the literature and the previous work done.
Chapter 2:- The second chapter gives a brief introduction to the system elements, its applications, and the working and structure of the proposed system.
Chapter 3:- The third chapter gives a detailed description of the analysis and processing tools available in the application software Vision Assistant 7.1, on which our work is focused.
Chapter 4:- The fourth chapter gives the problem definition and the proposed solution.
Chapter 5:- The fifth chapter discusses the implementation of various tools of the application software for the simulation and testing part of the thesis.
Chapter 6:- The sixth chapter concludes the thesis and outlines its future scope.
Chapter 1 : Literature Review
License plate recognition systems have received a lot of attention from the research community. Much research has been done on Korean, Chinese, Dutch and English license plates. A distinctive feature of research work in this area is that it is restricted to a specific region, city, or country, owing to the lack of standardization among different license plates (i.e., the dimensions and layout of the license plates). This section gives an overview of the research carried out so far in this area and the techniques employed in developing an LPR system, in terms of its four stages: the image acquisition, license plate extraction, license plate segmentation and license plate recognition phases. In the next section, various existing and novel methods for the image acquisition phase are presented.
1.2 Image Acquisition
Image acquisition is the first step in an LPR system, and there are a number of ways to acquire images; the current literature discusses the different image acquisition methods used by various authors. Yan et al.  used an image acquisition card that converts video signals to digital images based on some hardware-based image preprocessing. Naito et al. [13,14,16] developed a sensing system which uses two CCDs (Charge Coupled Devices) and a prism to split an incident ray into two lights with different intensities. The main feature of this sensing system is that it covers wide illumination conditions, from twilight to noon under sunshine, and is capable of capturing images of fast-moving vehicles without blurring. Salgado et al.  used a sensor subsystem having a high-resolution CCD camera supplemented with a number of new digital operation capabilities. Kim et al.  used a video camera to acquire the image. Comelli et al.  used a TV camera and a frame grabber card to acquire the image for their vehicle LPR system.
1.3 License Plate Extraction
License plate extraction is the most important phase in an LPR system. This section discusses some of the previous work done during the extraction phase. Hontani et al.  proposed a method for extracting characters without prior knowledge of their position and size in the image. The technique is based on scale shape analysis, which in turn is based on the assumption that characters have line-type shapes locally and blob-type shapes globally. In scale shape analysis, Gaussian filters at various scales blur the given image, and larger shapes appear at larger scales. To detect these scales, the idea of the principal curvature plane is introduced. By means of normalized principal curvatures, characteristic points are extracted from the scale space x-y-t. The position (x, y) indicates the position of the figure, and the scale t indicates the inherent characteristic size of the corresponding figure. All these characteristic points enable the extraction from the given image of figures that have line-type shapes locally and blob-type shapes globally. Kim et al.  used two neural network-based filters and a post-processor that combines the two filtered images in order to locate the license plates. The two neural networks used are vertical and horizontal filters, which examine small windows of vertical and horizontal cross sections of an image and decide whether each window contains a license plate. Cross sections have sufficient information for distinguishing a plate from the background. Lee et al.  and Park et al.  devised a method to extract Korean license plates depending on the color of the plate. A Korean license plate is composed of two different colors, one for the characters and the other for the background, and depending on this the plates are divided into three categories. In this method a neural network is used for extracting the color of a pixel from the HLS (Hue, Lightness and Saturation) values of the eight neighboring pixels, and the node of maximum value is chosen as the representative color.
After every pixel of the input image is converted into one of the four groups, horizontal and vertical histograms of white, red and green (Korean plates contain white, red and green colors) are calculated to extract a plate region. To select a probable plate region, the horizontal-to-vertical ratio of the plate is used. Dong et al.  presented a histogram-based approach for the extraction phase. Kim G. M.  used the Hough transform for the extraction of the license plate. The algorithm behind the method consists of five steps. The first step is to threshold the gray scale source image, which leads to a binary image. In the second stage the resulting image is passed through two parallel sequences, in order to extract horizontal and vertical line segments respectively; the result is an image with edges highlighted. In the third step the resultant image is used as input to the Hough transform, which produces a list of lines in the form of accumulator cells. In the fourth step, the above cells are analyzed and line segments are computed. Finally, the lists of horizontal and vertical line segments are combined, and any rectangular regions matching the dimensions of a license plate are kept as candidate regions. The disadvantage is that this method requires a huge amount of memory and is computationally expensive.
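As an illustration of the Hough-transform stage described above, the following minimal Python sketch builds the rho-theta accumulator directly. The toy edge image, the 5-degree angular step and the rounding of rho are assumptions made for the example, not details of the cited work:

```python
import math

# Toy 15x15 binary edge image containing one vertical line at x = 7.
W = H = 15
edges = [(7, y) for y in range(H)]

# Hough accumulator: each edge pixel votes for every (rho, theta) line
# passing through it, where rho = x*cos(theta) + y*sin(theta).
acc = {}
for x, y in edges:
    for t in range(0, 180, 5):          # theta in degrees
        rad = math.radians(t)
        rho = round(x * math.cos(rad) + y * math.sin(rad))
        acc[(rho, t)] = acc.get((rho, t), 0) + 1

# The strongest cell should describe the vertical line: theta = 0, rho = 7,
# with one vote per edge pixel (15 votes).
(best_rho, best_theta), votes = max(acc.items(), key=lambda kv: kv[1])
print(best_rho, best_theta, votes)  # 7 0 15
```

A real extractor would then keep only peak cells whose combined horizontal and vertical segments form a plate-shaped rectangle, which is exactly where the memory and computation cost mentioned above comes from.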
1.4 License Plate Segmentation
This section discusses previous work done for the segmentation of characters. Many different approaches have been proposed in the literature; some of them are as follows. Nieuwoudt et al.  used region growing for the segmentation of characters. The basic idea behind region growing is to identify one or more criteria that are characteristic of the desired region. After establishing the criteria, the image is searched for any pixels that fulfill the requirements. Whenever such a pixel is encountered, its neighbors are checked, and if any of the neighbors also match the criteria, both pixels are considered as belonging to the same region. Morel et al.  used a partial differential equation (PDE) based technique, and neural networks and fuzzy logic were also adopted for segmentation into individual characters.
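The region-growing idea described above can be sketched in a few lines of Python. The 4-connectivity, the intensity threshold and the toy image are assumptions made for the example:

```python
from collections import deque

def region_grow(img, seed, predicate):
    """Grow a region from `seed`, absorbing 4-connected neighbours
    whose pixel values satisfy `predicate` (the region criterion)."""
    h, w = len(img), len(img[0])
    region, frontier = set(), deque([seed])
    while frontier:
        x, y = frontier.popleft()
        if (x, y) in region or not (0 <= x < w and 0 <= y < h):
            continue
        if not predicate(img[y][x]):
            continue
        region.add((x, y))
        frontier.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
    return region

# A 5x5 image: a bright 2x2 "character" blob on a dark background.
img = [[0] * 5 for _ in range(5)]
for x, y in [(1, 1), (2, 1), (1, 2), (2, 2)]:
    img[y][x] = 255

blob = region_grow(img, (1, 1), lambda v: v > 128)
print(sorted(blob))  # the four bright pixels
```

Each connected region found this way is a candidate character; regions whose size or aspect ratio is implausible for a character would be discarded.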
1.5 License Plate Recognition
This section presents the methods that were used to classify and then recognize the individual characters. The classification is based on the extracted features, which are classified using statistical, syntactic or neural approaches. Some of the previous work in the classification and recognition of characters is as follows. Hasen et al.  discuss a statistical pattern recognition approach based on a probabilistic model, but their technique was found to be inefficient. Cowell et al.  discussed the recognition of individual Arabic and Latin characters. Their approach identifies the characters based on the number of black pixel rows and columns of the character and a comparison of those values against a set of templates or signatures in the database. Cowell et al.  also discuss the thinning of Arabic characters to extract the essential structural information of each character, which may later be used for the classification stage. Mei Yu et al.  and Naito et al.  used template matching. Template matching involves the use of a database of characters or templates, with a separate template for each possible input character. Recognition is achieved by comparing the current input character to each template in order to find the one which matches best. If I(x, y) is the input character and Tn(x, y) is template n, then the matching function s(I, Tn) returns a value indicating how well template n matches the input. Hamami et al.  adopted a structural or syntactic approach to recognize characters in a text document; this technique can yield better results when applied to the recognition of individual characters. The approach is based on the detection of holes and concavities in the four directions (up, down, left and right), which permits the classification of characters into different classes.
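A minimal sketch of such a matching function s(I, Tn) in Python is given below. The 3x3 glyphs and the pixel-agreement score are hypothetical stand-ins for a real template database:

```python
def match_score(char_img, template):
    """Normalised pixel-agreement score between a binarised input
    character and one template: 1.0 means a perfect match."""
    agree = sum(a == b for ra, rb in zip(char_img, template)
                       for a, b in zip(ra, rb))
    total = len(char_img) * len(char_img[0])
    return agree / total

# Toy 3x3 binary glyphs standing in for templates of '1' and '7'.
templates = {
    "1": [(0, 1, 0), (0, 1, 0), (0, 1, 0)],
    "7": [(1, 1, 1), (0, 1, 0), (0, 1, 0)],
}

# An unknown input character: compare against every template and keep
# the best-scoring one, exactly as the matching scheme above describes.
unknown = [(0, 1, 0), (0, 1, 0), (0, 1, 0)]
best = max(templates, key=lambda k: match_score(unknown, templates[k]))
print(best)  # "1"
```

Production systems use larger templates and more robust scores (e.g. normalised cross-correlation), but the comparison-against-every-template structure is the same.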
In addition, secondary characteristics are used in order to differentiate between the characters of each class. The approaches discussed in this paragraph are based on the structural information of the characters and use a syntactic pattern recognition approach. Hu  proposed seven moments that can be used as features to classify the characters. These moments are invariant to scaling, rotation and translation. The obtained moments act as the features, which are passed to a neural network for the classification or recognition of characters. Zernike moments have also been used by several authors [4,2,3] for the recognition of characters. Using Zernike moments, both rotation-variant and rotation-invariant features can be extracted; these features are then fed to a neural network for the recognition phase. A neural network accepts any set of distinguishable features of a pattern as input. It is then trained, using the input data and a training algorithm, to recognize the input pattern (in this case, characters).
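To make the invariance claim concrete, the sketch below computes the first Hu moment, phi1 = eta20 + eta02, from normalized central moments and checks that it is unchanged when the shape is translated. The L-shaped blob and the grid size are arbitrary choices for the example:

```python
def raw_moment(img, p, q):
    # m_pq = sum over pixels of value * x^p * y^q
    return sum(v * (x ** p) * (y ** q)
               for y, row in enumerate(img) for x, v in enumerate(row))

def hu_phi1(img):
    """First Hu moment phi1 = eta20 + eta02 (translation/scale invariant)."""
    m00 = raw_moment(img, 0, 0)
    xc, yc = raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00
    # Central moments are taken about the centroid, giving translation
    # invariance; dividing by m00^(1+(p+q)/2) adds scale invariance.
    mu = lambda p, q: sum(v * (x - xc) ** p * (y - yc) ** q
                          for y, row in enumerate(img)
                          for x, v in enumerate(row))
    eta = lambda p, q: mu(p, q) / m00 ** (1 + (p + q) / 2)
    return eta(2, 0) + eta(0, 2)

def render(pixels, size=8):
    return [[1 if (x, y) in pixels else 0 for x in range(size)]
            for y in range(size)]

# An L-shaped blob and a translated copy yield the same phi1.
shape = {(1, 1), (1, 2), (1, 3), (2, 3)}
shifted = {(x + 3, y + 2) for x, y in shape}
print(abs(hu_phi1(render(shape)) - hu_phi1(render(shifted))) < 1e-12)  # True
```

The remaining six Hu moments are built from higher-order eta terms in the same way; all seven together form the feature vector passed to the classifier.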
1.6 Commercial Products
The various products in the market today are described briefly below.
1.6.1 IMPS (Integrated Multi-Pass System)
IMPS  is a commercially developed Singaporean license plate recognition system. It is a high-performing, robust system that gives consistent results under all weather conditions. Using advanced image processing and artificial intelligence techniques, such as an AI best-first breadth-wise search algorithm, combined template and neural network recognizers, fuzzy logic and an arsenal of image processing tools, it automatically locates vehicle license plates and reads the numbers accurately every time.
1.6.2 Perceptics
Perceptics  is the world leader in license plate reader technology. The current LPR system reads Latin (A-Z) and Korean (Hangul) letters and Arabic numerals (0-9); however, the LPR can be programmed to read any language or symbol in any alphanumeric combination or context, on both retro-reflective and non-retro-reflective plates. Within milliseconds the LPR system locates, captures and identifies a vehicle's license plate data and makes a read decision. The system's reliability and flexibility allow it to accommodate some of the most stringent needs in some of the worst conditions. Features of this LPR technology include:
- Automatic and within milliseconds.
- Reads accurately in most weather conditions.
- Reads accurately at highway speeds.
- Works 24 hours a day, 7 days a week.
1.6.3 Vehicle Identification System for Parking Areas (VISPA)
PPI's Vehicle Identification System for Parking Areas (VISPA) , uses video imaging for better recognition, identification and improved security. VISPA provides for state-of-the-art video technology, easy installation and has accessories and features for most parking security surveillance needs.
- Open architecture to most common video-systems.
- Compatible with standard hardware and software.
- Can be customized according to specific user needs.
VISPA is available in two forms:
Basic Version :- An image of the car and/or the driver (depending on the location of the camera) is taken as soon as the car approaches the triggering device. The image is linked to the ticket. The basic system version connects to 4 cameras and can be upgraded to 8 cameras.
Enhanced Version :- License Plate Identification. The VISPA controller, with an integrated frame grabber card for 4, 8, or 16 cameras, automatically identifies the license plate from the video image and stores it in a database. The license plate can then be encoded on the ticket.
1.6.4 Hi-Tech Solutions
Hi-Tech Solutions  is a system and software company that develops cutting-edge optical character recognition (OCR) solutions by implementing the company's unique image processing software and hardware in a wide range of security and transportation applications. Their technology is based on computer vision: the system reads the camera images and extracts the identification data from them. The recognition result is then logged together with the images. This is the main advantage of vision-based recognition: the records include both the image and the extracted result. Their products include:
See Car License Plate Recognition :- Detects and reads vehicle license plates for parking, access control, traffic surveillance, law enforcement and security applications. Available as a complete system based on a background Windows application, as a Windows DLL or Linux library, as a stand-alone turn-key version, or in the form of different special-task systems.
See Container Identification System :- Tracks and reads shipping container identification markings and transmits the ID string to the port or gate computer, or to a client process. Available as complete systems, such as See Gate, a recognition system for trucks and containers, or See Crane, a crane-mounted container recognition system.
This chapter reviewed material relevant to the license plate recognition system. The relevant techniques used in the four phases of an LPR system were discussed, and several commercially available LPR systems were also presented. In the case of image acquisition, a sensing system using two Charge Coupled Devices along with a prism gives better input to the system, because it covers wide illumination conditions, from twilight to noon under sunshine, and is capable of capturing images of fast-moving vehicles without blurring. In the case of license plate extraction, the Hough transform was used to extract the license plate by storing the horizontal and vertical edge information; the disadvantage is that this method requires a huge amount of memory and is computationally expensive. Various techniques were presented for the segmentation stage, and the literature on the recognition of characters using various approaches was also discussed. Lastly, some of the number plate recognition systems which have been developed commercially were presented.
License plate recognition (LPR) is an image-processing technology used to identify vehicles by their license plates. This technology is gaining popularity in security and traffic installations. Much research has already been done on the recognition of Korean, Chinese, European, American and other license plates. This thesis presents a license plate recognition system as an application of computer vision, the process of using a computer to extract high-level information from a digital image. This chapter sets the scene by first presenting some applications of a license plate recognition system. Next, we discuss the elements that are commonly used in a license plate recognition system. Following this, the working of a typical LPR system is described. Next, we present the structure of the proposed license plate recognition system. Finally, the objectives of the work are stated. The chapter ends with a brief overview of the rest of this thesis.
2.2 Applications of LPR Systems
Vehicle license plate recognition is one form of automatic vehicle identification system. LPR systems are of considerable interest, because of their potential applications to areas such as highway electronic toll collection, automatic parking attendant, petrol station forecourt surveillance, speed limit enforcement, security, customer identification enabling personalized services, etc. Real time LPR plays a major role in automatic monitoring of traffic rules and maintaining law enforcement on public roads. This area is challenging because it requires an integration of many computer vision problem solvers, which include Object Detection and Character Recognition. The automatic identification of vehicles by the contents of their license plates is important in private transport applications. There are many applications of such recognition systems, some of them are discussed below.
Law Enforcement :- The plate number is used to issue violation fines for speeding, illegal use of bus lanes, and the detection of stolen or wanted vehicles. License plate recognition technology has gained popularity in security and traffic applications because all vehicles have a license plate, so there is no need to install any additional tracking apparatus. The main advantage is that the system can store the image record for future reference. The rear part of the vehicle is extracted from the filmed image and given to the system for processing, and the processed result is fed into the database. Violators can pay the fine online and can be presented with an image of the car as proof, along with the speeding information.
Parking :- The LPR system is used to automatically admit pre-paid members and to calculate the parking fee for non-members (by comparing the exit and entry times). The car plate is recognized and stored on entry; upon exit the plate is read again and the driver is charged for the duration of parking.
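The entry/exit charging just described amounts to a small calculation once the two plate reads are matched. A sketch follows, in which the hourly tariff and the charge-per-started-hour policy are assumptions made for the example:

```python
import math
from datetime import datetime

RATE_PER_HOUR = 2.0  # hypothetical tariff per started hour

def parking_fee(t_in, t_out, rate=RATE_PER_HOUR):
    """Fee between the plate's entry read and exit read,
    charged per started hour, with a one-hour minimum."""
    hours = math.ceil((t_out - t_in).total_seconds() / 3600)
    return max(hours, 1) * rate

# The same plate string links the two reads in the database.
entered = datetime(2024, 5, 1, 9, 15)
left = datetime(2024, 5, 1, 11, 40)   # 2 h 25 min -> 3 started hours
print(parking_fee(entered, left))     # 6.0
```

In a deployed system the entry timestamp would be looked up by plate number from the database record created when the car entered.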
Automatic Toll Gates :- Manual toll gates require the vehicle to stop and the driver to pay an appropriate tariff. In an automatic system the vehicle would no longer need to stop. As it passes the toll gate, it would be automatically classified in order to calculate the correct tariff.
Border Crossing :- This application assists in registering entries to or exits from a country and can be used to monitor border crossings. Each vehicle's information is registered in a central database and can be linked to additional information.
Homeland Security :- The LPR system's ability to read strings of alphanumeric characters and compare them instantaneously to hot lists allows a command center to organize and strategize efforts in reaction to the information captured. Fixed LPR systems, which can be mounted on bridges, gates and other high-traffic areas, can help keep a tight watch on entire cities, ports, borders and other vulnerable areas. Every LPR camera captures critical data, such as color photos, date and time stamps, and GPS coordinates, on every vehicle that passes or is passed. This database provides a wealth of clues and proof, which can greatly aid law enforcement with:
- Pattern recognition
- Placing a suspect at a scene
- Watch list development
- Identifying witnesses
- Possible visual clues revealed within the image of a car's immediate environment
2.3 Elements of Typical LPR System
LPR systems normally consist of the following units:
Camera :- Takes image of a vehicle from either front or rear end.
Illumination :- A controlled light that can light up the plate and allow day and night operation. In most cases the illumination is Infra-Red (IR), which is invisible to the driver.
Frame Grabber :- An interface board between the camera and the PC that allows the software to read the image information.
Computer :- Normally a PC running Windows or Linux. It runs the LPR application that controls the system, reads the images, analyzes and identifies the plate, and interfaces with other applications and systems.
Software :- The application and the recognition package.
Hardware :- Various input/output boards used to interface the external world (such as control boards and networking boards).
Database :- The events are recorded on a local database or transmitted over the network. The data includes the recognition results and (optionally) the vehicle or driver face image file.
2.4 Working of Typical LPR System
When the vehicle approaches the secured area, the LPR unit senses the car and activates the illumination (invisible infra-red in most cases) as shown in the figure below. The LPR unit takes pictures of either the front or rear plate with the LPR camera; the image of the vehicle contains the license plate. The LPR unit feeds the input image to the system, which then enhances the image, detects the plate position, extracts the plate, segments the characters on the plate and recognizes the segmented characters. The system then checks whether the vehicle appears on a predefined list of authorized vehicles; if found, it signals the gate to open by activating its relay. The unit can also switch on a green "go-ahead" light or a red "stop" light, and can display a welcome message or a message with personalized data. The authorized vehicle enters the secured area, and after it passes the gate a detector closes the gate. The system then waits for the next vehicle to approach the secured area.
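The gate-control logic described above can be sketched as follows. The plate strings, the authorized list and the stubbed-out recognition step are all hypothetical; in a real system the stub would be replaced by the full extract-segment-recognize pipeline:

```python
# Hypothetical whitelist of authorized plate numbers.
AUTHORIZED = {"OD02AB1234", "KA05CJ0007"}

def recognize_plate(frame):
    """Stand-in for the vision pipeline (enhance -> extract plate ->
    segment characters -> recognize); here the frame already carries
    the plate string for illustration."""
    return frame["plate"]

def handle_vehicle(frame):
    plate = recognize_plate(frame)
    if plate in AUTHORIZED:
        return f"OPEN GATE for {plate}"   # activate relay, green light
    return f"DENY {plate}"                # red light / alert operator

print(handle_vehicle({"plate": "OD02AB1234"}))  # OPEN GATE for OD02AB1234
print(handle_vehicle({"plate": "XX00XX0000"}))  # DENY XX00XX0000
```

The decision step is deliberately separate from the recognition step, mirroring the description above in which the vision pipeline and the gate/relay control are distinct stages.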
Illustration not visible in this excerpt
Figure 2.1 A car approaching a license plate recognition system
2.5 Structure of the Proposed System
The system presented is designed to recognize license plates from the front and rear of the vehicle. The input to the system is an image sequence, acquired by a digital camera, that contains a license plate; its output is the recognition of the characters on the license plate. The system consists of the standard four main modules of an LPR system, viz. image acquisition, license plate extraction, license plate segmentation and license plate recognition. The first task acquires the image. The second task extracts the region that contains the license plate. The third task isolates the characters, letters and numerals (a total of 10 characters in the case of Indian license plates). The last task identifies or recognizes the segmented characters.
2.5.1 Image Acquisition
This is the first phase in an LPR system. This phase deals with acquiring an image by an acquisition method. In our proposed system, we used a high resolution digital camera to acquire the input image. The input image is 1200 x 1600 pixels.
2.5.2 License Plate Extraction
License plate extraction is a key step in an LPR system, which influences the accuracy of the system significantly. This phase extracts the region of interest, i.e., the license plate, from the acquired image. The proposed approach involves "masking a region with a high probability of containing the license plate and then scanning the whole masked region for the license plate".
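One simple way to scan a masked region for a plate-like band, in the spirit of the approach above, is to score horizontal bands by their density of intensity transitions, since character strokes produce many dark-to-light changes. The band height and the toy image below are assumptions made for the sketch:

```python
def plate_candidate_row(img, band=3):
    """Return the top row of the `band`-row horizontal strip with the
    most horizontal intensity transitions (a plate-like signature)."""
    h = len(img)
    scores = []
    for top in range(0, h - band + 1):
        s = sum(1 for y in range(top, top + band)
                  for x in range(1, len(img[y]))
                  if img[y][x] != img[y][x - 1])
        scores.append((s, top))
    return max(scores)[1]   # row with the highest transition count

# 6-row toy image: rows 2-4 alternate like character strokes on a plate.
img = [[0] * 12 for _ in range(6)]
for y in (2, 3, 4):
    img[y] = [x % 2 for x in range(12)]

print(plate_candidate_row(img))  # 2
```

A full extractor would confirm each candidate band by checking its width-to-height ratio against the known plate geometry before passing it on to segmentation.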
2.5.3 License Plate Segmentation
License plate segmentation, which is sometimes referred to as character isolation, takes the region of interest and attempts to divide it into individual characters. In the proposed system, segmentation is done in the OCR section, which is described in the next chapters.
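Character isolation by vertical projection is one common way to realize this step (the proposed system delegates it to the OCR tool). A minimal sketch, assuming a binarised plate image with blank columns between characters:

```python
def segment_columns(plate):
    """Split a binarised plate into (start, end) column spans,
    cutting at columns that contain no ink."""
    w = len(plate[0])
    col_has_ink = [any(row[x] for row in plate) for x in range(w)]
    spans, start = [], None
    # The trailing False closes a span that runs to the right edge.
    for x, ink in enumerate(col_has_ink + [False]):
        if ink and start is None:
            start = x
        elif not ink and start is not None:
            spans.append((start, x))
            start = None
    return spans

# Two 2-column "characters" separated by a blank column.
plate = [
    [1, 1, 0, 1, 1],
    [1, 1, 0, 1, 1],
]
print(segment_columns(plate))  # [(0, 2), (3, 5)]
```

Each returned span is then cropped out as one character image and handed to the recognition stage.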
2.5.4 License Plate Recognition
The last phase in an LPR system is to recognize the isolated characters. After splitting the extracted license plate into individual character images, the character in each image can be identified. There are many methods used to recognize isolated characters; in the proposed system we use Optical Character Recognition, an inbuilt feature of Vision Assistant 7.1 . Optical Character Recognition is described in detail in the next chapters.
The work presented here aims at the following aspects.
- Study the existing license plate recognition systems,
- Develop a new technique or enhance existing techniques for each phase in a license plate recognition system
- Compare the various techniques at hand with the proposed system, and
- Build a system that delivers optimal performance both in terms of speed and accuracy.
Chapter-3 : Software Development
3.1 Digital Images
This section contains information about the properties of digital images, image types, file formats, the internal representation of images in IMAQ Vision, image borders, and image masks.
3.1.1 Definition of a Digital Image
An image is a 2D array of values representing light intensity. For the purposes of image processing, the term image refers to a digital image. An image is a function of the light intensity:
f(x, y)
where f is the brightness of the point (x, y), and x and y represent the spatial coordinates of a picture element, or pixel. By convention, the spatial reference of the pixel with the coordinates (0, 0) is located at the top left corner of the image. Notice in Figure 3.1 that the value of x increases moving from left to right, and the value of y increases from top to bottom.
illustration not visible in this excerpt
Fig 3.1 Spatial reference of the (0, 0) pixel.
3.2 Vision Assistant: An overview
A detailed overview of Vision Assistant 7.1 is given below.
3.2.1 Acquiring Images
Vision Assistant offers three types of image acquisition: snap, grab, and sequence. A snap acquires and displays a single image. A grab acquires and displays a continuous set of images, which is useful while focusing the camera. A sequence acquires images according to specified settings and sends the images to the Image Browser. Using Vision Assistant, images can be acquired with various National Instruments digital and analog IMAQ devices. Vision Assistant provides specific support for several Sony, JAI, and IEEE 1394 cameras. IMAQ devices can be configured in National Instruments Measurement & Automation Explorer (MAX). The sequence can be stopped at any frame; the image can then be captured and sent to the Image Browser for processing.
(A) Opening the Acquisition window
Complete the following steps to acquire images:
1. Click Start » Programs » National Instruments » Vision Assistant 7.1.
2. Click Acquire Image in the Welcome screen to view the Acquisition functions.
If Vision Assistant is already running, click the Acquire Image button in the toolbar. One of the following device and driver software combinations is required to acquire live images in Vision Assistant:
- National Instruments IMAQ device and NI-IMAQ 3.0 or later
- IEEE 1394 industrial camera and NI-IMAQ for IEEE 1394 Cameras 1.5 or later
3. Click Acquire Image. The Parameter window displays the IMAQ devices and channels installed on the computer.
(B) Snapping an image
1. Click File » Acquire Image.
2. Click Acquire Image in the Acquisition function list.
3. Select the appropriate device and channel.
4. Click the Acquire Single Image button to acquire a single image with the IMAQ device and display it.
illustration not visible in this excerpt
5. Click the Store Acquired Image in Browser button to send the image to the Image Browser.
6. Click Close to exit the Parameter window.
7. Process the image in Vision Assistant.
3.2.2 Managing Images
1. Select Start » Programs » National Instruments » Vision Assistant 7.1
2. To load images, click Open Image in the Welcome screen.
Illustration not visible in this excerpt
Fig: 3.2 Image Browser
3. Navigate to and select the image to process. If analysis is to be done on more than one image, Vision Assistant also provides a Select All Files option. It previews the images in the Preview Image window and displays information about the file type.
4. Click OK. Vision Assistant loads the image files into the Image Browser, as shown in Figure 3.2. The Image Browser provides information about the selected image, such as image size, location, and type. New images can be viewed either in thumbnail view, as shown in Figure 3.2, or in full-size view, which shows a single full-size view of the selected image.