Brief Report
Citation: Naqvi SA, Zafar HMF, Haq I. Hard exudates referral system in eye fundus utilizing speeded up robust features. Int J Ophthalmol 2017;10(7):1171-1174
Hard exudates referral system in eye fundus utilizing speeded up robust features
Syed Ali Gohar Naqvi, Hafiz Muhammad Faisal Zafar, Ihsanul Haq
International Islamic University (IIUI), H-10, Islamabad, Pakistan
Correspondence to: Syed Ali Gohar Naqvi. International Islamic University (IIUI), H-10, Islamabad, Pakistan. syed.phdee37@iiu.edu.pk
Received: 2016-06-30    Accepted: 2016-12-07
In this paper a referral system is suggested to assist medical experts in the screening/referral of diabetic retinopathy. The system has been developed by a sequential use of existing mathematical techniques: speeded up robust features (SURF), K-means clustering and visual dictionaries (VD). Three databases are mixed to test how the system performs when the image sources are dissimilar. In the experiments an area under the curve (AUC) of 0.9343 was attained. The results acquired from the system are promising.
KEYWORDS: referral system; speeded up robust features; eye; fundus; visual dictionaries
DOI:10.18240/ijo.2017.07.24
Hard exudates are the fundamental artifacts that appear in diabetic patients most of the time. In many cases, the manifestation of these artifacts confirms that the patient should seek help from a medical expert, so the presence of hard exudates may prove helpful in screening for diabetic retinopathy. Patients left untreated may encounter blurred vision or blindness. To assist the overloaded medical experts in screening, a referral system for hard exudates is required.
The presented system uses speeded up robust features (SURF)[1] to acquire basic features from images, K-means clustering[2] to develop visual dictionaries (VD)[3], and a support vector machine (SVM)[4] for classification.
Sopharak et al[5] developed a system utilizing only basic image processing techniques such as filtering and contrast enhancement. For their system to perform optimally, the pixels of the two classes, i.e. artifact and normal, must differ significantly in intensity. García et al[6] proposed a system based on features such as the average and standard deviation of the artifact and normal classes. They also utilized various classifiers and machine learning techniques. Sopharak et al[7] and Dupas et al[8] used fuzzy clustering along with carefully chosen features, such as the standard deviation of intensities and hue, for detection purposes. Dynamic thresholding and different statistical techniques were used by Sánchez et al[9] for the same problem. Another system, proposed by Welfer et al[10], used morphological operations and the watershed transform in the LUV color space for the same purpose; in LUV, L is the luminance component and the U and V components provide color information. Sánchez et al[11] proposed another system which required the patient's contextual information and used an SVM as the classifier. Chen et al[12] proposed an algorithm in which different histogram and morphological operations were combined. García et al[13] employed logistic regression along with a radial basis function classifier to address the problem. Kayal and Banerjee[14] also suggested basic image processing techniques in their method. Naqvi et al[15] used scale-invariant feature transform (SIFT) features and an SVM for the extraction of hard exudates; their method requires no preprocessing.
Suggested Technique
The training and testing phases of the suggested system are given in the following sections.
Training phase
The first step in the training phase is the extraction of points of interest (POIs). For this purpose SURF is employed. SURF can extract a relatively large number of POIs from an image. In image processing systems, the features in the vicinity of POIs are more advantageous than the global features of the image[16]. Figure 1A and 1B display a fundus image along with a few POIs detected on it. Before the training phase, three medical experts annotated the images to point out the artifact and normal regions. They also annotated the optic disc region in the training images.
Utilizing SURF, a number of descriptors can be found in a fundus image. These descriptors act as low-level features (LLF), as they are in raw form and cannot be fed directly into the classifier. Let I_i be an arbitrary training image, where i ∈ {1, 2, 3, ..., m} and m is the total number of training images. d_a and d_n are the descriptors, or LLF, of I_i found through SURF: d_a are the LLF of the regions of I_i containing artifact, while d_n are the LLF of the regions of I_i considered normal by the experts. Here a ∈ {1, 2, 3, ..., q} and n ∈ {1, 2, 3, ..., p}, where q and p are the total LLF gathered from the training images, and d_a, d_n ∈ ℝ^y lie in y-dimensional space. Utilizing the LLF d_a and d_n, a visual dictionary V = {v_1, v_2, v_3, ..., v_k} is constructed through K-means clustering; v_k refers to a single visual codeword of V.
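As an illustration, the dictionary-construction step can be sketched with scikit-learn's KMeans. The function name is hypothetical and the descriptor array is random toy data standing in for real SURF output:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_visual_dictionary(llf, k, seed=0):
    """Cluster low-level features (one descriptor per row) into k
    visual codewords; the cluster centres form the dictionary V."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed)
    km.fit(llf)
    return km.cluster_centers_

# Toy LLF: 200 random 64-dimensional vectors standing in for SURF descriptors
rng = np.random.default_rng(0)
llf = rng.normal(size=(200, 64))
V = build_visual_dictionary(llf, k=50)
print(V.shape)  # (50, 64): 50 codewords v_1..v_50 in 64-dimensional space
```

In practice the LLF matrix would be the pooled SURF descriptors from the annotated training regions rather than random vectors.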
The following steps involve the quantization and pooling of I_i based on V. For this, each d_a, d_n ∈ ℝ^y is first mapped onto V, which transforms the low-level d_a and d_n into a representation based upon the visual codewords of V. Mathematically this can be represented as f: ℝ^y → ℝ^k, with f(d_a) = μ_a and f(d_n) = μ_n. The μ's are acquired through the 'hard assignment'[17] of each LLF to the nearest codeword of V, i.e.:

μ_{q,k} = 1 if k = arg min_{k'} ||v_{k'} - d_q||², else μ_{q,k} = 0
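The hard-assignment rule can be sketched in a few lines of NumPy. The names and the two-codeword dictionary below are a toy example, not the paper's data:

```python
import numpy as np

def hard_assign(d, V):
    """Map each descriptor (row of d) to its nearest codeword (row of V):
    mu[q, k] = 1 if k = argmin_k' ||v_k' - d_q||^2, else 0."""
    # Squared Euclidean distances, shape (num_descriptors, num_codewords)
    d2 = ((d[:, None, :] - V[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)
    mu = np.zeros((d.shape[0], V.shape[0]))
    mu[np.arange(d.shape[0]), nearest] = 1.0
    return mu

V = np.array([[0.0, 0.0], [10.0, 10.0]])   # toy dictionary: 2 codewords
d = np.array([[1.0, -1.0], [9.0, 11.0]])   # toy LLF: 2 descriptors
mu = hard_assign(d, V)
print(mu)  # [[1. 0.]   first descriptor is nearest to v_1
           #  [0. 1.]]  second descriptor is nearest to v_2
```

Each row of the resulting matrix is one-hot, reflecting that hard assignment credits exactly one codeword per descriptor.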
Figure 1 ROC for different VDs in a mixture of image databases using SVM within (A) RS1; (B) RS2; (C) RS3.
Here μ_{q,k} is the qth component of the mid-level feature (MLF) now obtained, and d = {d_a, d_n}, μ = {μ_a, μ_n}, l = q + p. Before these features can be fed into the SVM, a pooling step is still required; the features obtained at this point are considered the MLF.
In sum pooling, the high-level feature vector τ is found as:

τ = Σ_{j=1}^{l} μ_j, where τ ∈ ℝ^k
The features gathered at this point, in the form of the τ's, are usable by the classifier, and the classifier reports its decisions based on these high-level features (HLF). The results are checked on a linear SVM[4].
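The pooling and classification steps can be sketched with NumPy and scikit-learn's LinearSVC. The data here are synthetic one-hot MLF matrices, not real fundus features, and all names are hypothetical:

```python
import numpy as np
from sklearn.svm import LinearSVC

def sum_pool(mlf):
    """Sum-pool an image's (l, k) mid-level features into one
    high-level feature vector tau in R^k."""
    return mlf.sum(axis=0)

# Toy data: each "image" yields 30 one-hot MLF rows over k=2 codewords,
# with the two classes favouring different codewords.
rng = np.random.default_rng(1)
taus, labels = [], []
for label in (0, 1):
    for _ in range(20):
        p = [0.7, 0.3] if label == 0 else [0.3, 0.7]
        mlf = rng.multinomial(1, p, size=30)   # hard-assignment style rows
        taus.append(sum_pool(mlf))
        labels.append(label)

clf = LinearSVC().fit(taus, labels)            # linear SVM on the HLF
print(clf.score(taus, labels))                 # training accuracy
```

Because sum pooling turns each image into a codeword histogram, the linear SVM separates the classes by their codeword usage.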
Testing phase
In this phase the steps mentioned in the training phase are repeated, but with the test images. However, no new VDs are developed; the VDs of the training phase are employed for the testing procedure.
Databases and Experiments
Choice of database
To test the suggested system, three databases have been employed: Diaretdb1[18], DR1[19] and DR2[19]. The salient features of the databases are tabulated in Table 1.
Table 1 Databases utilized in the work

| Database  | Useful images (containing hard exudates) | Resolution (pixels) | Developer                                 |
|-----------|------------------------------------------|---------------------|-------------------------------------------|
| DR1       | 234                                      | 640×480             | Federal University of Sao Paulo (UNIFESP) |
| DR2       | 79                                       | 867×575             | Federal University of Sao Paulo (UNIFESP) |
| Diaretdb1 | 46                                       | 1500×1152           | Kuopio University Hospital                |
Experiments
In the experiments, the images of the three databases, i.e. DR1, DR2 and Diaretdb1, are mixed. This is done to test the system in a more challenging situation, where the sources of the images are dissimilar and the images are taken under different conditions. In the experiments, 100 artifact images and 100 normal images are used for training, while the remaining images are utilized in the testing phase. Overall, 359 (234+79+46) artifact images and 359 randomly selected normal images are involved in the experiments. Mixing images may induce a biasing effect on the system; therefore the method of random subsampling[20] is utilized for testing. Three random sets, named RS1, RS2 and RS3, are used in the random subsampling. The system was also evaluated at different sizes of VD, i.e. VD50, VD100, VD150, ..., VD400.
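A random-subsampling split of the kind used for RS1-RS3 can be sketched as follows; the function name and seeds are hypothetical:

```python
import numpy as np

def random_subsample(n_total, n_train, seed):
    """Randomly split n_total image indices into a training set of
    size n_train and a test set holding the remainder."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n_total)
    return perm[:n_train], perm[n_train:]

# Three random sets over the 359 artifact images, 100 for training each
for seed, name in enumerate(("RS1", "RS2", "RS3")):
    train, test = random_subsample(359, 100, seed)
    print(name, len(train), len(test))  # each split: 100 train, 259 test
```

Repeating the evaluation over several such disjoint splits averages out the bias that any single random mixing of the databases could introduce.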
The results of the suggested system are reported in terms of sensitivity, specificity and accuracy[21]. These results are obtained on the three RS and the different sizes of VD. The area under the curve (AUC) of the receiver operating characteristic (ROC) curve has also been obtained. In the tests, the average AUC within each VD has been computed, along with the standard deviation within each VD.
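The ROC/AUC evaluation can be reproduced with scikit-learn; the labels and decision scores below are a made-up toy test set, not the paper's data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Toy ground truth (1 = artifact) and classifier decision values
y_true = np.array([0, 0, 0, 1, 1, 1])
scores = np.array([-1.2, -0.4, 0.1, -0.1, 0.8, 1.5])

fpr, tpr, thresholds = roc_curve(y_true, scores)  # points of the ROC curve
auc = roc_auc_score(y_true, scores)               # area under that curve
print(round(auc, 4))  # 0.8889: 8 of the 9 (positive, negative) pairs ranked correctly
```

The AUC equals the probability that a randomly chosen artifact image receives a higher decision value than a randomly chosen normal image, which is why it is a natural summary for a referral system.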
The top accuracy of 91.89% is recorded for VD350 and RS2. The maximum average AUC of 0.8942 (89.42%) is attained on VD350. The highest AUC of 0.9343 (93.43%) is recorded for RS2 when VD350 is used. Figure 1 shows a view of the results of the experiments. The maximum sensitivity, specificity and accuracy of the suggested system are shown in Table 2.
Table 2 Comparison with other methods (%)

| Authors             | Sensitivity | Specificity | Accuracy    |
|---------------------|-------------|-------------|-------------|
| Ricci et al[22]     | -           | -           | 96.46       |
| Al-Diri et al[23]   | 72.82       | 95.51       | -           |
| Marin et al[24]     | -           | -           | 72.82       |
| Lam et al[25]       | -           | -           | 94.74       |
| Naqvi et al[15]     | 92.70       | 81.02       | 87.23       |
| Presented work      | 93.82 (max) | 96.53 (max) | 91.83 (max) |
From the overall results gathered from the system, it is clear that the suggested system displays an almost stable AUC on all RS in the experiments. However, a minutely poorer performance can sometimes be observed on RS3 as compared to RS1 and RS2; the random mixing of images gathered from different datasets is the cause of this observation. For both categories of experiments, the value of the average AUC also remains almost stable as the size of the VD increases from VD50 to VD400. Minor fluctuations are observed in the values of the standard deviation within the VDs; this again is due to the selection of different RS by the computer. The mentioned facts and statistics obtained from the experiments are further elucidated in detail in the graphical view of Figure 2.
Figure
2 Statistics obtained from experiments A: AUC for RS1, RS2 and RS3 in mixture
of image databases using SVM; B: Average AUC for RS1, RS2 and RS3 in mixture of
image databases using SVM; C: Standard deviation for different VD sizes in
mixture of image databases using SVM.
In this paper, it has been elaborated that the number of diabetic patients is increasing day by day. To remove the enormous load from medical experts, a referral system is suggested and developed by making use of various mathematical techniques. To better evaluate the system when the images belong to various sources, it has also been checked by combining various databases. The working of the system is evaluated with different RS and various sizes of VD. The suggested system shows promising results; a maximum AUC of 0.9343 (93.43%) is noted with VD350.
Conflicts
of Interest: Naqvi SA, None; Zafar HMF, None; Haq I,
None.
1 Bay H, Tuytelaars T, Gool LV. SURF: speeded up robust features. European Conference on Computer Vision 2006:404-417.
2 Lloyd S. Least squares quantization in PCM. IEEE Trans Inf Theory 1982;28(2):129-137.
3 Winn J, Criminisi A, Minka T. Object categorization by learned universal visual dictionary. Proceedings of the Tenth IEEE International Conference on Computer Vision 2005;2:1800-1807.
4 Bishop C. Pattern recognition and machine learning. 1st ed. Springer, 2006.
5 Sopharak A, Uyyanonvara B, Barman S, Williamson T. Automatic detection of diabetic retinopathy exudates from non-dilated retinal images using mathematical morphology methods. Comput Med Imaging Graph 2008;32(8):720-727.
6 García M, Sánchez CI, López MI, Abásolo D, Hornero R. Neural network based detection of hard exudates in retinal images. Comput Methods Programs Biomed 2009;93(1):9-19.
7 Sopharak A, Uyyanonvara B, Barman S. Automatic exudate detection from non-dilated diabetic retinopathy retinal images using fuzzy C-means clustering. Sensors (Basel) 2009;9(3):2148-2161.
8 Dupas B, Walter T, Erginay A, Ordonez R, Deb-Joardar N, Gain P, Klein JC, Massin P. Evaluation of automated fundus photograph analysis algorithms for detecting microaneurysms, haemorrhages and exudates, and of a computer-assisted diagnostic system for grading diabetic retinopathy. Diabetes Metab 2010;36(3):213-220.
9 Sánchez CI, García M, Mayo A, López MI, Hornero R. Retinal image analysis based on mixture models to detect hard exudates. Med Image Anal 2009;13(4):650-658.
10 Welfer D, Scharcanski J, Marinho DR. A coarse-to-fine strategy for automatically detecting exudates in color eye fundus images. Comput Med Imaging Graph 2010;34(3):228-235.
11 Sanchez CI, Niemeijer M, Suttorp Schulten MSA, Abramoff M, van Ginneken BV. Improving hard exudate detection in retinal images through a combination of local and contextual information. Proceedings of the 2010 IEEE International Conference on Biomedical Imaging 2010:5-8.
13 Garcia M, Valverde C, Lopez MI, Poza J, Hornero R. Comparison of logistic regression and neural network classifiers in the detection of hard exudates in retinal images. Conf Proc IEEE Eng Med Biol Soc 2013;2013:5891-5894.
14 Kayal D, Banerjee S. A new dynamic thresholding based technique for detection of hard exudates in digital retinal fundus image. International Conference on Signal Processing and Integrated Networks 2014:141-144.
15 Naqvi SA, Zafar MF, Haq Iu. Referral system for hard exudates in eye fundus. Comput Biol Med 2015;64:217-235.
16 Lowe D. Distinctive image features from scale-invariant keypoints. Int J Comput Vis 2004;60(2):91-110.
20 Picard R, Cook RD. Cross-validation of regression models. J Am Stat Assoc 1984;79.
21 Vidakovic B. Sensitivity, specificity, and relatives. Statistics for Bioengineering Sciences, Springer Texts in Statistics 2011:109-130.
22 Ricci E, Perfetti R. Retinal blood vessel segmentation using line operators and support vector classification. IEEE Trans Med Imaging 2007;26(10):1357-1365.
23 Al-Diri B, Hunter A, Steel D. An active contour model for segmenting and measuring retinal vessels. IEEE Trans Med Imaging 2009;28(9):1488-1497.
24 Marin D, Aquino A, Gegundez M, Bravo J. A new supervised method for blood vessel segmentation in retinal images by using gray-level and moment invariants-based features. IEEE Trans Med Imaging 2011;30(1):146-158.
25 Lam BS, Gao Y, Liew AW. General retinal vessel segmentation using regularization-based multiconcavity modeling. IEEE Trans Med Imaging 2010;29(7):1369-1381.