Advances in Obesity, Weight Management & Control
eISSN: 2378-3168

Research Article | Volume 12 Issue 5

Diabetic retinopathy diagnosis using image processing methods

Ali Bakhshi,3,6 Kobra Hajizadeh,1,3,4,5 Mohammad Reza Tanhayi,2,3,5 Reza Jamshidi1,3,5

1Department of Physics, Islamic Azad University, Iran
2Department of Science, Islamic Azad University, Iran
3School of physics, Institute for Research in Fundamental Sciences (IPM), Iran
4Research center for modeling and optimization in science and engineering, Islamic Azad University, Iran
5Nanotechnology Research Center, Islamic Azad University, Iran
6Nanotechnology Department, Iran University of Science & Technology (IUST), Iran

Correspondence: Kobra Hajizadeh, Department of Physics, South Tehran Branch, Islamic Azad University, Tehran, Iran

Received: July 12, 2022 | Published: September 8, 2022

Citation: Bakhshi A, Hajizadeh K, Tanhayi MR, et al. Diabetic retinopathy diagnosis using image processing methods. Adv Obes Weight Manag Control 2022;12(5):132-134. DOI: 10.15406/aowmc.2022.12.00375



Abstract

Diabetes mellitus is a common disease that can cause blindness. Early detection of diabetes signs in retina fundus images can help predict and prevent the damage. Image processing methods can operate on the matrix data of the images for tasks such as blood vessel segmentation and exudate detection. In this research, the CLAHE algorithm combined with morphological transformations is used for blood vessel segmentation, and the determinant of the Hessian matrix of the images is used to detect exudate blobs.

Keywords: image processing, diabetic retinopathy, blood vessel, exudate


Introduction

Diabetes mellitus can cause blindness by damaging the small blood vessels and neurons of the retina.1 The main types of diabetes are type 1, type 2, and gestational diabetes.2 The exact causes of diabetes are not fully understood, but several factors influence its development. The loss of beta cells in the pancreas can cause insulin production to fail, which manifests as type 1 diabetes.3 Type 2 diabetes is common in adults, owing to obesity and low physical activity, and causes cells to fail to respond normally to the produced insulin.4 In the initial stages, the effects on vision may not be noticeable to the patient (non-proliferative), so fundus photography is needed to detect the outward bulgings called microaneurysms, or fluorescein angiography when vision is reduced. Blood vessel contents can permeate through microaneurysm spots and cause macular edema. In advanced stages, new fragile blood vessels appear in the choroid, a process called neovascularization, whose further bleeding results in vitreous hemorrhages (proliferative).5 Fast and early detection of these signs in retina fundus images can prevent further damage and vision problems.

In diabetic retinopathy detection, image classification can be based on the fundus image and retina vessel segmentation. Image processing techniques are used for blood vessel segmentation, artery/vein segmentation, optical disc localization, retina diameter measurement, and image denoising. In addition, artificial neural networks from deep learning, such as convolutional neural networks (CNNs) (e.g. AlexNet, GoogLeNet, VGGNet, FCN), have been utilized for retinopathy classification and segmentation as automated and efficient methods.6,7

In this research, image processing methods such as morphological transforms combined with contrast limited adaptive histogram equalization (CLAHE) are utilized to extract vessels and exudates from fundus images. We used the APTOS 2019 dataset of diabetic retina fundus images.

Image processing method

Image properties are important in the image processing steps. Among the common formats available for image compression, such as JPEG, PNG, TIFF, BMP, SVG, GIF, and WebP, high-quality JPEG and PNG are the most frequently used in retina fundus databases. The JPEG lossy compression algorithm is suitable for images with smooth variation of tone and color: the pixels are true color (24-bit), with 8 bits per channel, i.e. values between 0 and 255 in each of the red, green, and blue channels, giving 16,777,216 color variations. Fundus images from a camera can have artifacts such as eye reflections, motion blur, tilting, and rotation; in these cases, preprocessing is needed.
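The 24-bit color arithmetic and the array layout of a decoded image can be illustrated with a short NumPy sketch (illustrative only; the tiny dummy image is an assumption, not data from the paper):

```python
import numpy as np

# 24-bit true color: three 8-bit channels, each holding values 0..255.
levels = 256
n_colors = levels ** 3  # 256^3 = 16,777,216 distinct colors

# A decoded fundus image is an (H, W, 3) uint8 array; a dummy 2x2 RGB image:
img = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)
green_channel = img[:, :, 1]  # the channel commonly used for vessel contrast
```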

True colors can be stored in an array of color channels, which can be processed to extract the desired features from color gradients. To subtract the local mean color from the image, the image was processed through eq. 1 and saturated.8 After splitting the color channels, the green channel can be used for CLAHE processing, or the image can be converted to a gray-scale/binary image. Local contrast improvement enhances edge definition with limited amplification by redistributing the part of the histogram that exceeds the clip limit across several histogram equalizations.9 Morphological transformations such as opening and closing are applied to the images with an elliptical structuring element (kernel) as sequential filtering. The resulting image contains the blood vessels and hard exudates after a further CLAHE pass. The image matrix can be converted to a sharpened binary image by replacing the pixel values greater than a limiting constant with 255.
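The clip-and-redistribute idea behind CLAHE can be sketched in NumPy (a simplified global version for illustration; real CLAHE works per tile with interpolation, and the clip value here is an assumed parameter, not one from the paper):

```python
import numpy as np

def clipped_equalize(channel, clip=0.02):
    """Histogram equalization with a clip limit: histogram mass above the
    limit is cut off and redistributed evenly over all bins, which caps
    the contrast amplification (the core idea behind CLAHE)."""
    hist, _ = np.histogram(channel, bins=256, range=(0, 256))
    hist = hist / hist.sum()
    excess = np.clip(hist - clip, 0, None).sum()   # mass above the clip limit
    hist = np.minimum(hist, clip) + excess / 256   # redistribute it evenly
    cdf = np.cumsum(hist)
    lut = np.round(255 * cdf).astype(np.uint8)     # equalization lookup table
    return lut[channel]

# Low-contrast stand-in for a green channel (values in a narrow band).
rng = np.random.default_rng(1)
green = rng.integers(100, 140, (64, 64)).astype(np.uint8)
enhanced = clipped_equalize(green)
```

After equalization the intensity range is stretched well beyond the original narrow band, which is what sharpens vessel edges.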

Blurred image = image × alpha + Gaussian blurred image × beta + gamma    eq.1

where alpha and beta are gains and gamma is a bias weight used to blend the pictures linearly.
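The blend of eq. 1 is a single NumPy expression; the sketch below uses a box blur as a stand-in for the Gaussian blur, and the weights alpha = 4, beta = −4, gamma = 128 are assumed values that realize the "subtract local mean color" effect described above:

```python
import numpy as np

def box_blur(img, k=5):
    # Simple box blur standing in for a Gaussian blur.
    pad = np.pad(img, k // 2, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(pad, (k, k))
    return win.mean(axis=(-2, -1))

img = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)

# eq. 1: blend the original and blurred images; subtracting the blurred
# (local-mean) image removes slow illumination gradients.
alpha, beta, gamma = 4.0, -4.0, 128.0   # illustrative weights (assumed)
blended = img * alpha + box_blur(img) * beta + gamma

# "Saturated": clip back into the valid 8-bit range.
blended = np.clip(blended, 0, 255).astype(np.uint8)
```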

Algorithms such as the Harris–Laplace and Hessian feature detectors (blob/corner detection),10 the Hough transform (line detection), and others can be used to find the contours of shapes. The determinant of Hessian (DoH) method identifies blobs by finding the maxima of the DoH of the image; hard exudates appear as bright/dark blobs located on a dark/bright background respectively. Blob detection with the Hessian matrix is carried out by calculating the local maxima of the scale-normalized determinant of the Hessian matrix:10

det(H_norm L) = t² (L_xx L_yy − L_xy²)    eq.2

(x̂, ŷ; t̂) = local argmax (det(H_norm L))    eq.3

Also, soft exudates can be distinguished by applying a color filter such as yellow. We can resize (down-sample) the image and use the Hessian approximation11 for faster processing under limited-resource conditions. The diameter of the fundus image can be measured as the largest diameter over which the color changes from black to the fundus color and back. Searching the image matrix for a bright area leads to the optical disc location, which can be labeled as right/left by checking the position of the optical disc relative to the fundus image diameter.
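Eqs. 2 and 3 can be demonstrated on a synthetic bright blob (a stand-in for a hard exudate; the blob position and scale below are assumptions for illustration). The Hessian entries come from finite differences, and the blob center is recovered as the argmax of the scale-normalized determinant:

```python
import numpy as np

# Synthetic bright blob on a dark background, centered at (y=25, x=40).
y, x = np.mgrid[0:64, 0:64]
sigma = 4.0
img = np.exp(-((x - 40) ** 2 + (y - 25) ** 2) / (2 * sigma ** 2))

t = sigma ** 2                      # scale parameter
Ly, Lx = np.gradient(img)           # first derivatives
Lxy, Lxx = np.gradient(Lx)          # second derivatives (Hessian entries)
Lyy, _ = np.gradient(Ly)

# eq. 2: scale-normalized determinant of the Hessian.
doh = t ** 2 * (Lxx * Lyy - Lxy ** 2)

# eq. 3: blob location as the argmax of the DoH response.
cy, cx = np.unravel_index(np.argmax(doh), doh.shape)
```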

Results and discussion

Optical disc detection

The original fundus image of a human eye from the camera is shown in Figure 1a. The first steps in processing the retina image are locating the optical disc and measuring the diameter of the retina. The measured diameter was used to crop the retina circle, as shown in Figure 1b. The optical disc was found as the high-intensity spot of the image and is marked with a green circle in Figure 1b.

Figure 1 Image of a) retina b) detected optical disc.
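The brightest-spot localization can be sketched as follows (a NumPy illustration with a synthetic bright disc at an assumed position; smoothing before the argmax suppresses single-pixel outliers):

```python
import numpy as np

# Hypothetical grayscale fundus: dark background with one bright disc-like
# patch centered at (y=30, x=70).
yy, xx = np.mgrid[0:100, 0:100]
img = 200.0 * np.exp(-((xx - 70) ** 2 + (yy - 30) ** 2) / (2 * 5.0 ** 2))

# Box-mean smoothing before taking the maximum.
k = 5
pad = np.pad(img, k // 2, mode="edge")
win = np.lib.stride_tricks.sliding_window_view(pad, (k, k))
smooth = win.mean(axis=(-2, -1))

# Optical-disc estimate: location of the highest smoothed intensity.
cy, cx = np.unravel_index(np.argmax(smooth), smooth.shape)
```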

Optic disc localization is a primary processing step for finding the fovea and for further blood vessel segmentation with a constant distance ratio. Finding the optical disc borders can be time-consuming and costly, as the images may be corrupted by artifacts and diseases. Under consistent imaging conditions, the optical disc size is related to the diameter of the retina fundus image; when the optic disc obtained differs from this expectation, the circular Hough transform can be performed to find the real optical disc diameter.12
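The voting step of a circular Hough transform can be sketched for a single known radius (a minimal NumPy illustration; the circle position and radius are assumptions, and a practical detector would also scan over radii):

```python
import numpy as np

# Binary edge map of a circle (hypothetical optic-disc boundary),
# radius 12, centered at (y=28, x=30).
size, r = 64, 12
yy, xx = np.mgrid[0:size, 0:size]
edges = np.abs(np.hypot(xx - 30, yy - 28) - r) < 0.5

# Each edge pixel votes for all candidate centers at distance r from it;
# the true center accumulates votes from every edge pixel.
acc = np.zeros((size, size))
thetas = np.linspace(0, 2 * np.pi, 180, endpoint=False)
for ey, ex in zip(*np.nonzero(edges)):
    cy = np.round(ey - r * np.sin(thetas)).astype(int)
    cx = np.round(ex - r * np.cos(thetas)).astype(int)
    ok = (cy >= 0) & (cy < size) & (cx >= 0) & (cx < size)
    np.add.at(acc, (cy[ok], cx[ok]), 1)

best_cy, best_cx = np.unravel_index(np.argmax(acc), acc.shape)
```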

Blood vessel segmentation

As shown in Figure 2a, applying the CLAHE algorithm to the Gaussian-blurred image improves the local contrast, so that the blood vessels, hard/soft exudates, and neovascular growths can be distinguished easily. After applying the opening and closing morphological transformations to Figure 2a, the hard exudates and blood vessels remain smoother and more connected, as shown in Figure 2b. As the number of morphological iterations increases, the resulting vessels become more separated and smoother. The sharpened binary image of Figure 2b gives a better view for identifying the hard exudates (Figure 2c).

Figure 2 Images of a) CLAHE applied on Gaussian Blur, b) extracted blood vessels, c) high contrast blood vessels.
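The effect of closing (bridging small gaps in vessels) followed by opening (removing isolated specks) can be sketched with NumPy min/max filters standing in for morphological erosion and dilation (the toy image below, with a gapped line and a speck, is an assumption for illustration):

```python
import numpy as np

def erode(img, k=3):
    pad = np.pad(img, k // 2, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(pad, (k, k))
    return win.min(axis=(-2, -1))

def dilate(img, k=3):
    pad = np.pad(img, k // 2, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(pad, (k, k))
    return win.max(axis=(-2, -1))

def opening(img, k=3):   # erosion then dilation: removes small bright specks
    return dilate(erode(img, k), k)

def closing(img, k=3):   # dilation then erosion: bridges small gaps
    return erode(dilate(img, k), k)

# Vessel-like bright bar with a one-pixel gap at column 9, plus a speck.
img = np.zeros((20, 20), dtype=np.uint8)
img[9:12, 2:9] = 255
img[9:12, 10:18] = 255
img[3, 3] = 255          # isolated noise speck

cleaned = opening(closing(img))  # gap bridged, speck removed
```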

Neovascular growths lead to a high ratio of white pixels, which, after subtracting the hard/soft exudates found with blob detection algorithms, can be an approximate sign of a diabetic retina. Images with neovascular growths are candidates for further processing. The key advantage of this method is fast screening with limited resources, which can classify the images at an early processing stage.
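The white-pixel ratio used as a screening score is a one-line computation on the binary vessel map (the synthetic map below is an assumption for illustration):

```python
import numpy as np

# Binary vessel map (hypothetical): a dense white band standing in for
# abnormally dense vessel growth.
binary = np.zeros((100, 100), dtype=np.uint8)
binary[40:60, :] = 255

# Screening score: fraction of white pixels in the segmented image.
white_ratio = np.count_nonzero(binary) / binary.size
```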

Exudate detection

The identified spots were sorted by their calculated radii, which can be used for diabetic severity classification. Close spots with overlaps were removed by comparing the (x, y) coordinates and radii of the related blobs. To control the size of the detected blobs, a minimum standard deviation can be set for the Gaussian kernel used to compute the Hessian matrix.13
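The overlap-removal step can be sketched as a greedy pass over the blobs sorted by radius (a minimal illustration; the blob list and the sum-of-radii overlap criterion are assumptions, not the paper's exact rule):

```python
import math

# Hypothetical detected blobs as (x, y, radius), e.g. from a DoH detector.
blobs = [(20, 20, 5), (22, 21, 4), (50, 50, 6), (90, 10, 3)]

def prune_overlaps(blobs):
    """Keep larger blobs first; drop any blob whose center lies closer to
    an already-kept blob than the sum of their radii."""
    kept = []
    for x, y, r in sorted(blobs, key=lambda b: -b[2]):
        if all(math.hypot(x - kx, y - ky) >= r + kr for kx, ky, kr in kept):
            kept.append((x, y, r))
    return kept

pruned = prune_overlaps(blobs)  # (22, 21, 4) overlaps (20, 20, 5) and is dropped
```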

Detecting hard exudates with the naked eye becomes problematic when the images have severe artifacts. Robustness of the hard exudate detection to such artifacts is the key advantage of this method, as shown in Figure 3. The detected exudates can serve as initial guesses (cluster centers) for other methods such as support vector machines (SVMs), or as candidate spots for deep learning algorithms such as CNNs.

Figure 3 Image of a) retina and b) detected hard exudates.

Images with hard exudates and disordered blood vessels can be classified as proliferative diabetic retinas; severity classification needs further input from databases and experts.


Conclusion

Blending the retina fundus image with its Gaussian-blurred version subtracts the local mean color, which increases the contrast of the image and clarifies the spots and blood vessels. Morphological transformations such as opening and closing make the blood vessels smoother and more connected, preventing vessel fragmentation. With adjusted parameters, the DoH method can efficiently detect the exudate spots in the high-contrast image.

The optical disc location can be determined as the brightest spot and, together with the diameter of the retina image, can serve as a screening factor. The detected hard exudates indicate the patient's diabetic condition and can be processed further as cluster centers to train a machine learning algorithm. The simplicity of the methods used makes them a promising procedure for image classification in early detection with limited resources.



Conflicts of interest



References

  1. World Health Organization. Prevention of Blindness from Diabetes Mellitus: Report of a WHO Consultation in Geneva, Switzerland; 2006.
  2. World Health Organization. Classification of diabetes mellitus. 2019.
  3. Bart O Roep, Sofia Thomaidou, René van Tienhoven, et al. Type 1 diabetes mellitus as a disease of the β-cell (do not blame the immune system?). Nat Rev Endocrinol. 2021;17(3):150–161.
  4. Al-Goblan AS, Al-Alfi MA, Khan MZ. Mechanism linking diabetes mellitus and obesity. Diabetes Metab Syndr Obes. 2014;7:587–591.
  5. Elishai Ezra, Eliezer Keinan, Yossi Mandel, et al. Non-dimensional analysis of retinal microaneurysms: critical threshold for treatment. Integr Biol (Camb). 2013;5(3):474–480.
  6. Harry Pratt, Frans Coenen, Deborah M Broadbent, et al. Convolutional Neural Networks for Diabetic Retinopathy. Procedia Computer Science. 2016;90:200–205.
  7. Alyoubi WL, Shalash WM, Abulkhair MF. Diabetic retinopathy detection through deep learning techniques: A review. Informatics in Medicine Unlocked. 2020;20:100377.
  8. Xie L. Towards implementation of AI in New Zealand national diabetic screening program: Cloud-based, robust, and bespoke. PLoS One. 2020;15(4):e0225015.
  9. Shome SK, Vadali SRK. Enhancement of Diabetic Retinopathy Imagery Using Contrast Limited Adaptive Histogram Equalization. International Journal of Computer Science and Information Technologies. 2011;2(6):2694–2699.
  10. Lindeberg T. Scale Selection Properties of Generalized Scale-Space Interest Point Detectors. Journal of Mathematical Imaging and Vision. 2013;46(2):177–210.
  11. Bay H, Tuytelaars T, Van Gool L. SURF: Speeded up robust features. Computer Vision and Image Understanding. 2008;110(3):346–359.
  12. Halil Murat Ünver, Yunus Kökver, Elvan Duman, et al. Statistical Edge Detection and Circular Hough Transform for Optic Disk Localization. Applied Sciences. 2019;9(2).
  13. Fabian Pedregosa, Gael Varoquaux, Alexandre Gramfort, et al. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research. 2011;12:2825–2830.
Creative Commons Attribution License

©2022 Bakhshi, et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and building upon your work non-commercially.