eISSN: 2576-4543

Physics & Astronomy International Journal

Review Article Volume 7 Issue 1

Automatic vehicle number plate recognition system

Manyika Kabuswa Davy,1 Peter Jairous Banda,1 Agripa Hamweendo2

1Mulungushi University, School of Natural and Applied Sciences, Department of Physics, Zambia
2Mulungushi University, School of Engineering and Technology, Department of Engineering, Zambia

Correspondence: Manyika Kabuswa Davy, Mulungushi University, School of Natural and Applied Sciences, Department of Physics, Zambia

Received: March 21, 2023 | Published: March 28, 2023

Citation: Davy MK, Banda PJ, Hamweendo A. Automatic vehicle number plate recognition system. Phys Astron Int J. 2023;7(1):69-72. DOI: 10.15406/paij.2023.07.00286


Abstract

In simple terms, an Automatic Vehicle Number Plate Recognition System (AVNPRS) can be described as a computer vision and image processing technology that captures images of vehicles and recognizes their number plates. As the world grows more sophisticated each day, security and the management of transportation systems have become vital in controlled places such as modern campuses, shopping malls and other institutions. With the number of motor vehicles increasing at today's rate, there is a need for an effective, affordable and efficient AVNPRS. This project therefore proposes, outlines and discusses an efficient AVNPRS. The system under consideration can be installed at the main entrances of modern institutions, since the AVNPRS ensures that only authorized vehicles gain automatic access.

The AVNPRS captures an image of the number plate once the input sensor detects the vehicle. Sobel and Laplacian edge detection techniques are used at this critical stage. Thereafter, the bounding box technique is used to locate the number plate, leading to character segmentation. The captured image then undergoes extraction and character analysis via Optical Character Recognition (OCR); character recognition is achieved by matching the segmented image against the stored computer templates using the OCR method. It is worth noting that the system is sustainable in that it successfully detects, recognizes and processes vehicle number plates in real images. The AVNPRS can be used for both traffic control and security. The main aim of this paper is therefore to develop, from a theoretical perspective, an AVNPR system that can detect and capture vehicle number plate images.

Keywords: Automatic Vehicle Number Plate Recognition System, Optical Character Recognition, Image Detection and Processing, Motor Vehicle Number Plate

Introduction

An Automatic Vehicle Number Plate Recognition System (AVNPRS) is a widely recognized technology that applies Optical Character Recognition (OCR) to images. It reads vehicle registration number plates and thereby builds a vehicle location database, which stores the images captured by the cameras together with the text read from the number plates. In addition, some such systems are configured to store photographs of motor vehicle drivers. AVNPRS technology is now used by several state agencies, especially security wings such as the police, for purposes that include law enforcement, motor vehicle registration and licence verification. The system uses infrared lighting that allows the cameras to take pictures at any time while accommodating the number plate variations of different motor vehicles.1

An AVNPRS has been, and remains, an active area of research within image monitoring and processing systems.2 With its sophisticated high-tech cameras, an AVNPRS has a variety of applications ranging from parking lots to traffic management and border-crossing security.3-5 Other applications include automated parking attendants, stolen vehicle identification and many more.6,7 Despite differences in the appearance of these systems today, most of their basic algorithms remain similar.8 In general, an AVNPRS comprises three major components: vehicle number plate detection, a number plate character segmentation section and an OCR.9-11 Several processes take place within these components, including localization of the vehicle number plate, orientation calibration, character segmentation and OCR.12,13

Detection and image capturing

Detection and image capturing are crucial steps that prepare images before extraction takes place.13 Detection is a critical aspect of an AVNPRS, since it is at this point that all subsequent processes are set in motion, and it is one of the most difficult tasks in computer vision and image processing.14 Prior to further processing, the input image is converted to grayscale and binarized with adaptive binarization methods such as Niblack's method.
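As a concrete illustration of this preprocessing step, the sketch below converts a captured frame to grayscale and applies a Niblack-style local threshold in MATLAB; the window size, the weighting factor k and the file name are illustrative assumptions rather than values taken from the paper.

```matlab
% Niblack-style adaptive binarization of a captured frame (illustrative sketch).
% Assumes the Image Processing Toolbox; window size and k are example values.
I    = imread('plate.jpg');                 % captured frame (placeholder file name)
gray = im2double(rgb2gray(I));              % convert RGB to grayscale in [0,1]

w = 25;                                     % odd window size for the local statistics
k = -0.2;                                   % Niblack weighting factor (typical value)

localMean = imfilter(gray, fspecial('average', w), 'replicate');
localStd  = stdfilt(gray, true(w));         % local standard deviation in the same window

T  = localMean + k * localStd;              % Niblack threshold: T = m + k*s
BW = gray > T;                              % binarized image; complement it if the
                                            % characters should be the foreground
```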

On one hand, during detection and image capture it is vital to use acquisition techniques in which the retrieved images receive sufficient lighting at the cameras, so that clear images of the vehicle number plates are obtained. On the other hand, environmental conditions in the open-air environment can significantly degrade the quality of the captured images.15 During detection and image capture it is also important to consider factors such as the edges of the number plate, the pixel intensity and texture distribution, and the colour of the characters; this is particularly helpful for number plates that are tilted or deformed.

Hardware configuration

An AVNPRS generally consists of two segments: the hardware configuration and the software configuration. The hardware component is built around a single-board computer (SBC); in this paper, the hardware configuration is done on a Raspberry Pi 2 B+ board. The camera captures an image only when an infrared (IR) sensor detects the presence of a motor vehicle within range, i.e., the IR sensor switches on the camera. At this stage, the inputs to the system are the IR sensor and the camera, while the output is the extraction and recognition of the motor vehicle number plate characters, as shown in Figure 1.

Figure 1 Illustration of the AVNPRS.

As part of the inputs, cameras play a vital role in an AVNPRS, so their installation is a crucial step that comes with various environmental challenges. Unavoidable factors such as the position of the sun during the day, or nights that are too dark, must be taken into consideration. Cameras can be positioned in several ways in an AVNPRS; this paper suggests the arrangement illustrated in Figure 2 as one of the best fits, since it minimizes environmental challenges, and performance can be improved further by placing the infrastructure under a shelter of good standard. As with any other system, the hardware configuration follows the flowchart of processes shown in Figure 3: the IR sensor detects the motor vehicle, the camera switches on automatically and captures an image, and the image is then saved and processed for recognition. Figure 4 illustrates the layout of the hardware circuit diagram; as is usual practice, green and red lights are connected to the layout to indicate authorized and denied entry respectively.

Figure 2 Camera positioning in an AVNPRS.

Figure 3 Flowchart of processes.

Figure 4 Layout of hardware circuit diagram.
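For illustration, the IR-triggered capture loop of Figure 3 could be sketched with the MATLAB Support Package for Raspberry Pi Hardware roughly as follows; the GPIO pin numbers and the recognizePlate/isAuthorizedPlate helpers are hypothetical placeholders for the wiring and the later processing stages, not details taken from the paper.

```matlab
% Illustrative sketch of the capture loop in Figure 3 (MATLAB Support Package
% for Raspberry Pi Hardware). Pin numbers and helper functions are assumptions.
rpi = raspi();                                % connect to the Raspberry Pi board
cam = cameraboard(rpi);                       % Raspberry Pi camera module

irPin    = 17;                                % GPIO pin wired to the IR sensor (assumed)
greenPin = 22;                                % "authorized" indicator LED (assumed)
redPin   = 27;                                % "denied" indicator LED (assumed)

while true
    if readDigitalPin(rpi, irPin)             % vehicle detected by the IR sensor
        img     = snapshot(cam);              % camera switches on and captures an image
        plate   = recognizePlate(img);        % hypothetical wrapper for the detection,
                                              % segmentation and OCR stages
        allowed = isAuthorizedPlate(plate);   % hypothetical database lookup
        writeDigitalPin(rpi, greenPin, allowed);
        writeDigitalPin(rpi, redPin, ~allowed);
    end
    pause(0.1);                               % simple polling interval
end
```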

Software configuration

The processes involved under software configuration include motor vehicle identification and number plate detection. To achieve these vital processes of the AVNPRS, a simulation is carried out in MATLAB in order to verify each step of the configuration. Several processes take place prior to the actual processing of images: image capture kick-starts image processing, which leads to the edge detection phase. At this point the RGB image is converted to grayscale before contrast and intensity adjustments are applied, in order to reduce the volume of data and avoid image saturation. Equation 1 shows how to convert RGB to grayscale, where the grayscale value is weighted as approximately 30% red, 59% green and 11% blue. Furthermore, intensity and contrast adjustments are carried out primarily to reduce image noise; this vital task is achieved via the histogram equalization technique.

Grayscale = 0.2989R + 0.5870G + 0.1140B   (1)
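A minimal MATLAB sketch of this step, assuming the Image Processing Toolbox, is shown below; the file name is a placeholder.

```matlab
% Grayscale conversion (Equation 1) followed by histogram equalization.
I = imread('vehicle.jpg');                            % placeholder file name
R = double(I(:,:,1)); G = double(I(:,:,2)); B = double(I(:,:,3));

grayManual = uint8(0.2989*R + 0.5870*G + 0.1140*B);   % Equation 1 written out explicitly
grayTool   = rgb2gray(I);                             % built-in conversion, same weighting

grayEq = histeq(grayTool);                            % contrast/intensity adjustment to
                                                      % reduce noise via histogram equalization
```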

The process of detecting the vehicle number plate consists of two components. The first is the detection of rectangular regions consistent with the size of a number plate; the second is to compare the size of the candidate region with the number plate dimensions stored in the system. The dimensions and objects in the image are identified by the sharp changes in colour and texture at the edges of the image. In this work, we use the Sobel edge operator, a first-order differential operator within the framework of edge extraction techniques. The Sobel operator is used to distinguish object boundaries because of its speed; it produces thick edge responses in the horizontal, vertical and diagonal directions. The Sobel operator also offers an advantage with respect to brightness, yielding good object boundaries against the image background. Figure 5 shows the convolution masks for the Sobel operator, where Gy is the mask Gx rotated by 90°.

Figure 5 Sobel operator convolution masks.

From Figure 5, we can combine both Gx and Gy to obtain the magnitude of each point by using the relation in Equation 2.

|G| = \sqrt{G_x^2 + G_y^2}   (2)

In order to approximate the magnitude, we sum the individual magnitudes of Gx and Gy as shown in Equation 3.

|G| = |G_x| + |G_y|   (3)

Because the camera cannot be placed exactly perpendicular to the number plate of every motor vehicle, there is always an angle of orientation, which can be determined using Equation 4.

\theta = \arctan\left( G_x / G_y \right)   (4)
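The Sobel masks of Figure 5 and the quantities of Equations 2-4 can be computed in MATLAB roughly as follows; the edge threshold at the end is an illustrative value, not one specified in the paper.

```matlab
% Sobel convolution masks (Figure 5) and the gradient quantities of Equations 2-4.
Gx_mask = [-1 0 1; -2 0 2; -1 0 1];       % horizontal-gradient mask
Gy_mask = Gx_mask';                       % Gx rotated by 90 degrees

gray = im2double(rgb2gray(imread('vehicle.jpg')));   % placeholder file name

Gx = conv2(gray, Gx_mask, 'same');        % horizontal gradient component
Gy = conv2(gray, Gy_mask, 'same');        % vertical gradient component

Gmag    = sqrt(Gx.^2 + Gy.^2);            % Equation 2: gradient magnitude
GmagApx = abs(Gx) + abs(Gy);              % Equation 3: cheaper approximation
theta   = atan2(Gx, Gy);                  % Equation 4: orientation angle in radians
                                          % (four-quadrant arctangent of Gx/Gy)

edges = Gmag > 0.2;                       % example threshold to obtain an edge map
```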

Besides the Sobel operator, this paper also explores the Canny edge detector, which is a multi-stage process. Under this operator, the image is first smoothed using Gaussian convolution; critical regions of the image are then highlighted, and the edge gradient and direction are determined using the same Sobel operator of Equation 2. Thereafter, non-maximum suppression is applied to remove false responses, and finally hysteresis thresholding is used to track the edges of the image and of the characters. In addition, to address image noise as well as edge detection, the second-order derivative in the form of the Laplacian of Gaussian (LoG) is used. In this process, the two-dimensional operator produces the scalar function given in Equation 5.

\Delta f(x,y) = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}   (5)

To determine the zero crossings, pixels with a locally maximum gradient are examined. Mathematically, this is done using the LoG function with Gaussian standard deviation σ, as given in Equation 6.

\mathrm{LoG}(x,y) = -\frac{1}{\pi\sigma^4}\left[ 1 - \frac{x^2 + y^2}{2\sigma^2} \right] e^{-(x^2 + y^2)/(2\sigma^2)}   (6)

Unlike the Sobel operator, the Laplacian edge detector computes the second-order derivative with a single mask, the two-dimensional convolution kernel. After the image has been converted to grayscale, a combination step is introduced in which two edge detection techniques are merged to create the final detected image; the combinations considered here are the Canny and Laplacian edge detectors, and the Sobel operator and Laplacian edge detector.
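A sketch of the three detectors and the combination step using MATLAB's edge function is given below; combining the maps with a logical OR is an illustrative choice for the fusion operation, which the paper does not spell out.

```matlab
% Edge maps from the three detectors and a simple combination (sketch).
gray = rgb2gray(imread('vehicle.jpg'));   % placeholder file name

bwSobel = edge(gray, 'sobel');            % first-order Sobel operator
bwCanny = edge(gray, 'canny');            % multi-stage Canny detector
bwLoG   = edge(gray, 'log');              % Laplacian of Gaussian (Equation 6)

combined1 = bwCanny | bwLoG;              % Canny combined with the Laplacian detector
combined2 = bwSobel | bwLoG;              % Sobel combined with the Laplacian detector
```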

Character segmentation and recognition

In an AVNPRS, the process of isolating each character on the motor vehicle number plate is called segmentation. To achieve this, the image is divided into sub-parts using the bounding box technique illustrated in Figure 6. The bounding box technique labels each enclosed region with a rectangular box and then determines its corner coordinates, height and width. Once each character has been identified in this way, it is cropped and passed on for recognition. The main goal of the recognition stage is to convert the image text into characters; this is achieved with the OCR algorithm, applied to the inverted cropped image as illustrated in Figure 7. Here, the OCR compares each character with an alphanumeric database and the results are stored in string format using a correlation method.

Figure 6 Bounding box technique.

Figure 7 OCR using eigenvectors.
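A sketch of the bounding box segmentation and OCR stage, assuming the Image Processing and Computer Vision Toolboxes, is given below; plateImg stands for the already-localized plate region, and the area filter and character set are illustrative choices.

```matlab
% Bounding-box character segmentation (Figure 6) followed by OCR (Figure 7).
% plateImg is assumed to be the already-localized plate region.
BW = imcomplement(imbinarize(rgb2gray(plateImg)));   % dark characters become foreground
BW = bwareaopen(BW, 50);                             % remove small noise blobs

stats = regionprops(BW, 'BoundingBox');              % label enclosed regions with boxes
boxes = vertcat(stats.BoundingBox);                  % [x y width height] per region
[~, order] = sort(boxes(:,1));                       % read characters left to right

plateText = '';
for k = order'                                       % crop and recognize each character
    charImg = imcrop(BW, stats(k).BoundingBox);
    result  = ocr(charImg, 'CharacterSet', 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789');
    plateText = [plateText, strtrim(result.Text)];   %#ok<AGROW> build the plate string
end
```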

System flowchart

The flowchart for the entire AVNPRS is illustrated in Figure 8. Once a vehicle is in the vicinity, it is detected and the camera switches on automatically to capture an image. The pre-processing stage is carried out once the image is captured, followed by detection of the motor vehicle number plate. After detection, character segmentation takes place, followed by character recognition. At this stage, the recognized number plate is stored and compared with the templates in the database to trigger either entry authorization or denied access.

Figure 8 Flowchart for the entire system.
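The final comparison step could look roughly as follows in MATLAB; the whitelist is an illustrative stand-in for the institution's database of authorized plates.

```matlab
% Final stage of Figure 8: compare the recognized plate with stored entries
% and drive the authorization decision. The whitelist is an example only.
authorizedPlates = {'ABC1234', 'BAD5678', 'CAA9012'};   % illustrative entries

plate      = upper(regexprep(plateText, '\s', ''));     % normalize the OCR output
authorized = any(strcmp(plate, authorizedPlates));      % template comparison

if authorized
    disp('Access granted (green light).');
else
    disp('Access denied (red light).');
end
```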

Conclusion

On one hand, an AVNPRS plays a vital role in security as a deterrent: the knowledge that they are being recorded discourages criminals in advance. The system is also very useful for the police, who can search its databases to check suspicious motor vehicles. On the other hand, as an automated system it plays a vital role in parking solutions in very congested areas. Finally, the system provides a number of real-time benefits; for example, would-be violators of traffic laws can be identified and charged within a couple of days because the cameras' information is processed so quickly.

Acknowledgments

This project was financially supported by Mulungushi University under the Directorate of Research and Postgraduate Studies – Mulungushi University Research Grant.

Conflicts of interest

None.

References

  1. Du S, Ibrahim M, Shehata M, Badawy W. Automatic license plate recognition (ALPR): A state-of-the-art review. IEEE Transactions on Circuits and Systems for Video Technology. 2017.
  2. Qadri MT, Asif M. Automatic number plate recognition system for vehicle identification using optical character recognition. International Conference on Education Technology and Computer. 2009:335-338.
  3. Ondrej M, Zboril F, Martin D. Algorithmic and mathematical principles of automatic number plate recognition systems. BRNO University of Technology. 2007.
  4. Chang SL, Chung YC, Chen SW. Automatic license plate recognition. IEEE Transactions on Intelligent Transportation Systems. 2004.
  5. Wen Y, Lu Y, Yan J, et al. An algorithm for license plate recognition applied to intelligent transportation system. IEEE Transactions on Intelligent Transportation Systems. 2011.
  6. Rahman CA, Badawy W, Radmanesh A. A real time vehicle's license plate recognition system. Proceedings of the IEEE Conference on Advanced Video and Signal Based Surveillance. 2003.
  7. Sirithinaphong T, Chamnongthai K. The recognition of car license plate for automatic parking system. 5th International Symposium on Signal Processing and Its Applications. 1999.
  8. Du S, Ibrahim M, Shehata M, Badawy W. Automatic license plate recognition (ALPR): A state-of-the-art review. IEEE Transactions on Circuits and Systems for Video Technology. 2013.
  9. Saleem N, Muazzam H, Tahir HM. Automatic license plate recognition using extracted features. 4th IEEE International Symposium on Computational and Business Intelligence. 2016.
  10. Balamurugan G, Punniakodi Rajeswari SK, Arulalan V. Automatic number plate recognition system using super-resolution technique. International Conference on Computing and Communications Technologies (ICCCT). 2015.
Creative Commons Attribution License

©2023 Davy, et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and build upon your work non-commercially.