Koprowski R, Wróbel Z. Image Processing in Optical Coherence Tomography: Using Matlab [Internet]. Katowice (Poland): University of Silesia; 2011.


Chapter 4. ANALYSIS OF POSTERIOR EYE SEGMENT

The second part of this monograph presents the issues of the posterior eye segment, with special emphasis on automated methods for the detection of individual layers. The optic nerve head and the degree of retinal detachment are also analysed fully automatically. The measurements performed make it possible not only to obtain quantitative data but also to determine thickness maps of individual layers automatically.


4.1. Introduction to the Analysis of the Fundus of the Eye

The analysis of the fundus of the eye is, in its initial part, similar to the analysis of the anterior eye segment [5], [11], [12], [13]. This applies to DICOM image acquisition and loading into the Matlab workspace, as well as to reading the header and the patient and other data it contains. The methods and tools intended for this purpose were discussed in detail in the first part of this monograph. The methodology for image analysis presented below assumes that the image has already been loaded into the Matlab workspace.

The input images LGRAY were acquired, e.g., from an SOCT Copernicus optical tomograph with the following parameters: light source wavelength 840 nm, spectrum width 50 nm, axial (longitudinal) resolution 6 μm, transverse resolution 12–18 μm, tomogram window width 2 mm, measurement rate 25,000 A-scans per second, maximum scanning width 10 mm, maximum number of A-scans per B-scan 10,500. The images were saved as grey levels at a resolution of M×N = 722×928, with 8 bits per pixel.
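
From these parameters the physical size of a single pixel can be estimated (a rough calculation, under the assumption that the 2 mm tomogram window maps onto the M = 722 image rows and the full 10 mm scanning width onto the N = 928 columns):

$$\Delta y \approx \frac{2\ \text{mm}}{722} \approx 2.8\ \mu\text{m/pixel}, \qquad \Delta x \approx \frac{10\ \text{mm}}{928} \approx 10.8\ \mu\text{m/pixel}$$

Both values lie below the optical resolutions quoted above (6 μm axial, 12–18 μm transverse), so under this assumption the sampling does not limit the resolution.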

The identification of the positions of individual layers – starting from the nerve fibre layer (NFL), through the ganglion cell layer (GCL), inner plexiform layer (IPL), inner nuclear layer (INL), outer plexiform layer (OPL) and the inner and outer segments of the photoreceptors (IS/OS), and ending at the retinal pigment epithelium (RPE) and choriocapillaris (CC), i.e. of the layers situated between the inner limiting membrane (ILM) and the CC – is shown in Fig. 4-1 and Fig. 4-2.

Fig. 4-1. Diagram of the individual layers' cross-section with marked characteristic measured areas, where: Tr – traction, NFL – neural fibre layer (internal retina boundary), RPE – retinal pigment epithelium.

Fig. 4-2. Example image acquired from SOCT Copernicus.

Fig. 4-3 shows the layers detected by means of the algorithm described in this monograph, i.e. NFL, ONL and RPE, superimposed on the LM image. The position of these layers provides the basis for the further methodology described in this work.

Fig. 4-3. Example tomographic image with marked layers: NFL – red, ONL – green, RPE – blue.

Further considerations will refer to methods automatically determining the boundaries of the layers visible in Fig. 4-1, i.e. tractions, the internal retina boundary, the RNFL/GCL boundary, the IS/OS boundary, the OS/RPE boundary and the RPE boundary, preceded by an analysis of the results obtained using known algorithms [1], [3], [17], [19], [22], [33], [36], [43].

4.2. Algorithm for Automated Analysis of Eye Layers in the Classical Method

The algorithm proposed by the authors, presented below, has a modular (block) structure, in which selected blocks can operate independently of each other – Fig. 4-4.


Fig. 4-4. Block diagram of the fundus of the eye analysis algorithm.

The block diagram presented in Fig. 4-4 divides the algorithm operation into five stages:

  • Preprocessing – median filtering and normalisation.
  • Determination of the RPE layer position and then, using a modified active contour method, of the ONL and IS boundaries.
  • Determination of the NFL internal retina boundary position and then of the GCL areas (usually two).
  • Correction of the layers obtained with regard to the analysis area – taking into account the quality of individual areas of the presented object.
  • Determination, based on a qualitative analysis of the image, of 'holes' – local brightness minima.

These stages will be the subject of considerations in the next sections.

4.2.1. Preprocessing

Preliminary image processing algorithms include filtration with a median filter of a square mask, 21×21 in size, to eliminate noise and small artefacts introduced by the measuring system during image acquisition. The mask size was selected arbitrarily. In addition, the image was cut at the bottom to correct erroneous instrument readings in the last two lines of the image, i.e.:

    [Lgray,map]=imread('D:\OCT\FOLDERS\2.OCT\SKAN7.bmp');  % read the indexed image
    Lgray=Lgray(1:850,:);                 % cut off the erroneous bottom lines
    Lgray=ind2gray(Lgray,map);            % convert to grey levels
    Lgray=double(Lgray)/255; Lorg=Lgray;
    Lmed=medfilt2(Lorg,[5 5]);            % median filtering

The second component consisted of normalisation from the range of minimum and maximum pixel brightness to the full range between 0 and 1, i.e.:

Lmed=mat2gray(Lmed);
figure;
imshow(Lmed)

The LGRAY images converted this way were analysed using the available algorithms, which in this case – given the necessity to detect lines discontinuous in places – did not provide satisfactory results.

4.2.2. Detection of RPE Boundary

The RPE layer is the first and the simplest to determine automatically on an OCT image. It is perfectly visible as the brightest area of each column. This property has been used to create the first part of the algorithm.

The analysis of LGRAY images after preprocessing (filtration and normalisation, yielding LMED) started with an analysis of the position of the maximum in consecutive columns. If m and n denote the rows and columns of the image matrix, then the new image is:

$$L_{BIN\_RPE}(m,n)=\begin{cases}1 & \text{for } L_{MED}(m,n)>\max\limits_{m\in\{1,2,\dots,M\}}\big(L_{MED}(m,n)\big)\cdot pr\\ 0 & \text{otherwise}\end{cases} \quad (11)$$

for n ∈ {1,2,3,…,N−1,N}

where pr is the binarisation threshold parameter, assumed to be 0.9 (90%).

The LBIN_RPE image contains values '1' in places where the pixels of a given column are brighter than 90% of the maximum brightness occurring in that column. Values '0' occur elsewhere. The image obtained this way is shown below.

Fig. 4-5. Sum of the LBIN_RPE image with weight 50% and LMED with 50%: a) image with a properly detected RPE area and b) image where the RPE area is discontinuous in places.

In the next stage the position of the centre of the longest section in each column of the LBIN_RPE image was calculated, obtaining yRPE, i.e.:

$$y_{RPE}(n)=\sum_{m=1}^{M} y_W(m,n)\Big/\sum_{m=1}^{M} L_{BIN\_RPE}(m,n) \quad (12)$$

where:

$$y_W(m,n)=\begin{cases}m & \text{for } L_{BIN\_RPE}(m,n)\neq 0\\ 0 & \text{for } L_{BIN\_RPE}(m,n)=0\end{cases} \quad (13)$$

n ∈ {1,2,3,…,N−1,N}

The obtained course of yRPE and the source code are shown below:

x=(1:size(Lmed,2))';
yyy=(1:size(Lmed,1))';
yrpe=[];
Lbinrpe=zeros(size(Lmed));
for ik=1:size(Lmed,2)                          % analyse every column separately
    xx_best=[];
    Llabp=bwlabel(Lmed(:,ik)>(max(Lmed(:,ik))*0.9));  % binarisation (11) and labelling
    Lbinrpe(:,ik)=Llabp;
    for tt=1:max(Llabp)
        xxl=yyy(Llabp==tt);
        xx_best=[xx_best; mean(xxl)];          % centre of each section (12), (13)
    end
    if ~isempty(xx_best)
        yrpe(ik)=max(xx_best);                 % take the lowest-lying section
    else
        yrpe(ik)=0;
    end
end
figure; imshow(mat2gray(Lbinrpe*0.5+Lmed)); hold on;
plot(yrpe,'r*-')

Fig. 4-6. Sum of the LBIN_RPE image with weight 50% and LMED with 50% and the marked course of yRPE.

Fig. 4-7. Sum of the LBIN_RPE image with weight 50% and LMED with 50% and the marked course of yRPES.

The course of the yRPE function is further analysed for clusters using the k-means method, obtaining yRPES(k) for each k-th cluster. Then yRPES(k1,k2) is approximated by a 3rd order polynomial for each pair yRPES(k1) and yRPES(k2), where k1≠k2. All the obtained polynomial functions yRPES(k1,k2), determined for all possible cluster pairs (k1, k2), are shown in Fig. 4-8, and the appropriate part of the algorithm is given below:

Fig. 4-8. 3rd order functions yRPES(k1,k2) for all possible cluster pairs.

    yg=gradient(yrpe);                         % derivative of the yRPE course
    ygg=ones([1 length(yrpe)]); ygg(abs(yg)>20)=0;
    ygl=bwlabel(ygg);                          % labels of smooth sections (clusters)
    figure; imshow(mat2gray(Lbinrpe*0.5+Lmed)); hold on;
    palett=jet(max(ygl));
    for iiih=1:max(ygl(:))
        plot(x(ygl==iiih),yrpe(ygl==iiih),'Color',palett(iiih,:),'LineWidth',4);
    end
    pam_dl=[];
    figure; imshow(mat2gray(Lbinrpe*0.5+Lmed)); hold on
    for iiik=1:max(ygl(:))                     % every possible cluster pair (k1,k2)
        for iiikk=iiik:max(ygl(:))
            if iiik<=iiikk
                ygk=[yrpe(ygl==iiik),yrpe(ygl==iiikk)];
                xgk=[x(ygl==iiik); x(ygl==iiikk)];
            else
                ygk=[yrpe(ygl==iiikk),yrpe(ygl==iiik)];
                xgk=[x(ygl==iiikk); x(ygl==iiik)];
            end
            if length(ygk)>10
                P=polyfit(xgk',ygk,2); yrpes=round(polyval(P,x));  % polynomial fit
                plot(yrpes,'g*-')
                pam_dl=[pam_dl;[iiik iiikk sum(abs(yrpe-yrpes')<20)]];  % points within tolerance
            end
        end
    end
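
The text above mentions cluster analysis by the k-means method; the listed code instead groups the yRPE course by thresholding its gradient. A minimal sketch of the k-means variant (assuming the kmeans function of the Statistics Toolbox and a hypothetical number of clusters K) might look as follows:

K=4;                                   % assumed number of clusters
idx=kmeans([x(:),yrpe(:)],K);          % cluster label for every column
figure; hold on
for k=1:K
    plot(x(idx==k),yrpe(idx==k),'*')   % one marker series per cluster
end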

Fig. 4-9. Enlarged fragment of the image from Fig. 4-8.

The number of points of yRPE lying within a range of ±15 pixels of each function, i.e. pr1 = 15 and pr2 = 15, is determined for each function:

$$s(k_1,k_2)=\sum_{n=1}^{N} y_{RPEB}^{(k_1,k_2)}(n) \quad (14)$$

$$y_{RPEB}^{(k_1,k_2)}(n)=\begin{cases}1 & \text{for } -pr_1<y_{RPES}^{(k_1,k_2)}(n)-y_{RPE}(n)<pr_2\\ 0 & \text{otherwise}\end{cases} \quad (15)$$

Then the pair (k1*, k2*) is determined for which:

$$s(k_1^*,k_2^*)=\max_{k_1,k_2}\big(s(k_1,k_2)\big) \quad (16)$$

The determined pair selects the function yRPES(k1*,k2*), later on called simply yRPEC. The implementation of the algorithm fragment described above is provided below:

pam_s=sortrows(pam_dl,-3);             % sort pairs by the number of matching points
if size(pam_s,1)==1
    ygk=[yrpe(ygl==pam_s(1,1))];
    xgk=[x(ygl==pam_s(1,1))];
else
    ygk=[yrpe(ygl==pam_s(1,1)),yrpe(ygl==pam_s(1,2))];
    xgk=[x(ygl==pam_s(1,1)); x(ygl==pam_s(1,2))];
end
P=polyfit(xgk',ygk,2); yrpes=round(polyval(P,x));
plot(x,yrpes,'w*-');
yrpe=yrpe(:);
plot(x,yrpe,'m*-');

In further considerations, those points of yRPE are also important which fall within the predetermined tolerance around yRPES(k1*,k2*), i.e.:

dx=x; dx(abs(yrpe-yrpes)>20)=[];       % keep only points within the tolerance
yrpe(abs(yrpe-yrpes)>20)=[];
dxl=bwlabel(diff(dx)<125);
pdxl=[];
for qw=1:max(dxl)
    pdxl=[pdxl;[qw,sum(dxl==qw)]];
end
pdxl(pdxl(:,2)<50,:)=[];               % remove sections shorter than 50 points
dxx=[]; dyy=[];
for wq=1:size(pdxl,1)
    dxx=[dxx; dx(dxl==pdxl(wq,1))];
    dyy=[dyy; yrpe(dxl==pdxl(wq,1))];
end
dx=dxx; yrpe=dyy;
plot(dx,yrpe,'c*-');
figure
imshow(Lgray); hold on
plot(dx,yrpe,'c*-');

The results obtained are presented in the following figures (Fig. 4-10, Fig. 4-11).

Fig. 4-10. Function yRPEC satisfying the conditions given.

Fig. 4-11. Enlargement of the image from Fig. 4-10.

The yRPEC values will provide, in the next section, the basis for determining the IS and ONL boundaries.

4.3. Detection of IS, ONL Boundaries

The boundaries of IS and ONL were determined on the basis of the yRPEC course. In both cases the algorithms are very similar and for the most part apply the modified active contour method [29], [41]. This method was used to analyse the anterior eye segment in the first part of this monograph, where the function implementing it was denoted OCT_activ_cont. This operation could also be performed (obtaining similar results) using other methods, e.g. convolution with the mask h presented below (Fig. 4-12), or median filtering followed by calculating the differences between pixels situated on the oy axis at a distance equal to the number of mask rows.

Fig. 4-12. Mask h used for independent calculations of the modified active contour method.
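
A minimal sketch of the convolution alternative mentioned above, assuming a simple vertical-edge mask of +1 rows over −1 rows (the exact mask used by the authors is the one shown in Fig. 4-12, and the sign of the response may need adjusting, as with the polaryzacja parameter of OCT_activ_cont):

h=[ones(5,11); -ones(5,11)]/55;        % assumed mask: responds to brightness
Lh=conv2(Lmed,h,'same');               % transitions along the oy axis
[mx,yIS_est]=max(Lh);                  % strongest response in each column
figure; imshow(Lmed); hold on
plot(1:size(Lmed,2),yIS_est,'y.')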

A change of the operation selectivity, in the sense of the accuracy of distinguishing individual layers, is obtained depending on the selection of the parameters pyu and pyd. This situation is illustrated by Fig. 4-13, where pyu and pyd were changed between 2 and 20 for an artificial image created as follows:

Fig. 4-13. Artificial input image with yIS courses for parameters pyu = pyd changing within the range from 2 (blue) to 20 (red).

L1=rand([201 200]);
xx=-1:0.01:1;
y=gauss(xx+0.5,0.2)+0.5*gauss(xx-0.1,0.05);   % two Gaussian brightness profiles
Ly=y'*ones([1 200]);
Ly=mat2gray(Ly);
Lw1=L1.*Ly;
L1=rand([201 200]);
y=gauss(xx,0.2)+0.5*gauss(xx-0.4,0.05);
Ly=y'*ones([1 200]);
Ly=mat2gray(Ly);
Lw2=L1.*Ly;
Lw=[Lw1,Lw2];
Lw(:,300:350)=Lw(:,300:350)*.5;               % simulated shadows
Lw(:,50:100)=Lw(:,50:100)*.2;
Lw=imrotate(Lw,5,'crop');
figure; imshow(Lw)

where the gauss function has the following form:

function y = gauss(x,std)
y = exp(-x.^2/(2*std^2))/(std*sqrt(2*pi));

The change of the values of parameters pyu and pyd affects the selectivity of the algorithm's operation. The remaining parameters, such as pu and pd, determine the search range on the vertical axis. The parameters pxl and pxp define the range on the ox axis from which the values Lu and Ld are calculated. They have a direct influence on the algorithm's behaviour in places where shadows occur. Fig. 4-14 shows the influence of the pxl and pxp settings on the results obtained.

Fig. 4-14. Artificial input image with yIS courses for parameters pu = pd = 50, pxud = ∞ and pxl = pxp changing within the range from 1 (blue) to 70 (red).

The images have been obtained with the following Matlab implementation:

x=1:size(Lw,2);
y=round([ones([1 size(Lw,2)/2])*size(Lw,1)/3 ones([1 size(Lw,2)/2])*size(Lw,1)/2]);
map=jet(70);
for pyud=1:4:70
    pud=50;
    pxud=2;
    pxlp=1;
    polaryzacja=-1;
    [yy,i]=OCT_activ_cont(Lw,x,y+20,pud,pyud,pxud,pxlp,polaryzacja);
    hold on
    plot(x,yy,'Color',map(pyud,:),'LineWidth',3)
    pause(0.001)
end

As can be seen from Fig. 4-14 and Fig. 4-15, small values of pxl and pxp, in the range of around 1–10, result in large changes in the positions of consecutive values of the yIO course on the oy axis. Values of pxl and pxp in the range of around 10–70 'stabilise' the course of yIO, owing to which it becomes less sensitive to sudden changes of brightness (e.g. shadows) in the image.
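
The 'stabilising' influence of a wider averaging window can be illustrated independently of OCT_activ_cont (a sketch on a one-dimensional brightness profile with a simulated shadow; the window widths stand in for pxl = pxp):

profile=ones(1,200); profile(90:110)=0.2;      % simulated shadow, 21 columns wide
narrow=conv(profile,ones(1,5)/5,'same');       % narrow window - follows the shadow
wide=conv(profile,ones(1,61)/61,'same');       % wide window - shadow largely averaged out
figure; plot(1:200,narrow,'b',1:200,wide,'r');
legend('narrow window','wide window')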

Fig. 4-15. Artificial input image with yIS courses for parameters pu = pd = 50, pxud = 2 and pxl = pxp changing within the range from 1 (blue) to 70 (red).

The influence of parameters pxl, pxp and pyu, pyd can best be followed on the graph of the error δIO(pxl=pxp, pyu=pyd), defined as:

$$\delta_{IO}(p_{xl}=p_{xp},\,p_{yu}=p_{yd})=\frac{1}{N}\sum_{n=1}^{N}\left|\frac{y_{IO}(n)-y_{IOW}(n)}{y_{IOW}(n)}\right|\cdot 100\% \quad (17)$$

where yIOW is a model course of yIO.
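
In code, the error (17) reduces to a single line (assuming yIO and yIOW are vectors of equal length):

delta_IO=mean(abs((yIO-yIOW)./yIOW))*100;      % mean relative error in percent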

In accordance with the graph presented in Fig. 4-16, the parameter pxl = pxp for pxud = ∞ has the largest influence on the value of the δIS error. Because of the two characteristic areas visible in the LGRAY image (Fig. 4-16), the error course has a local maximum for pxl = pxp ≅ 40. The course of the δIS error for pxud = 1 (Fig. 4-17) is similar; the parameter pxud had no significant impact on its value.

Fig. 4-16. Graph of the δIS error for changes of pxl = pxp within the range 1–70 and pyu = pyd within the range 1–20, for pxud = ∞.

Fig. 4-17. Graph of the δIS error for changes of pxl = pxp within the range 1–70 and pyu = pyd within the range 1–20, for pxud = 1.

The graphs discussed were generated using the following code:

L1=rand([201 200]);
xx=-1:0.01:1;
y=gauss(xx+0.5,0.2)+0.5*gauss(xx-0.1,0.05);
Ly=y'*ones([1 200]);
Ly=mat2gray(Ly);
Lw1=L1.*Ly;
Lw=Lw1;
Lw(:,50:100)=Lw(:,50:100)*.2;
figure; imshow(Lw)
x=1:size(Lw,2);
y=round([ones([1 size(Lw,2)/2])*size(Lw,1)/3 ones([1 size(Lw,2)/2])*size(Lw,1)/2]);
map=jet(70);
hold on
plot(x,y,'r','LineWidth',3)
d3_wy=[];
pub=50;
pxud=1;
polaryzacja=-1;
jj=1;
for pxlp=2:1:20
    ii=1;
    for pyud=1:2:70
        [yy,i]=OCT_activ_cont(Lw,x,y+20,pub,pyud,pxud,pxlp,polaryzacja);
        d3_wy(ii,jj)=sum(abs(119-yy)./119)/length(yy)*100;   % error (17) vs the model course
        ii=ii+1;
        [ii, jj]                       % progress display
    end
    jj=jj+1;
end
[XX,YY]=meshgrid(2:1:20,1:2:70);
figure; mesh(XX,YY,d3_wy);
ylabel('p_{xl}=p_{xp}','FontSize',20)
xlabel('p_{yu}=p_{yd}','FontSize',20)
zlabel('\delta_{IO} [%]','FontSize',20)
colormap([0 0 0])
set(gca,'FontSize',15)

The sensitivity to Gaussian noise, which may appear in the image, is an altogether different feature of the algorithm discussed. To evaluate the quality of the proposed algorithm, Gaussian noise of variance σ changed between 0 and 0.9 was added to the LGRAY image.
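
Such noise can be added, e.g., with imnoise, which takes the mean and the variance of the Gaussian noise (a sketch; sigma2 denotes the variance from the tested range):

sigma2=0.3;                                    % variance from the range 0-0.9
Lnoisy=imnoise(Lgray,'gaussian',0,sigma2);     % zero-mean Gaussian noise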

The graphs in Fig. 4-18 and Fig. 4-19 show the changes of the δIS error for changes of the parameters pxl = pxp within the range 1–70 and of the variance σ within the range 0–0.9, for pxud = 2 pixels and pxud = ∞. For both graphs, for σ within the range 0–0.3 and pxl = pxp within 50–70 pixels, the δIO error does not exceed 5%. The dependence of the δIS error on pxud is insignificant, mainly due to its definition (17), in which large changes of isolated points of the yIS course have no significant impact on the δIS error. The nature of the changes of the δIS error shown in Fig. 4-16 – Fig. 4-19 with respect to changes of pxl = pxp within their full range depends mainly on the nature and arrangement of the objects in the scene and will therefore not be discussed here. The form of the algorithm intended to generate the above results is similar to the previous case.

Fig. 4-18. Graph of the δIS error for changes of pxl = pxp within the range 1–70 and of the variance σ within the range 0–0.9, for pxud = 2.

Fig. 4-19. Graph of the δIS error for changes of pxl = pxp within the range 1–70 and of the variance σ within the range 0–0.9, for pxud = ∞.

4.4. Detection of NFL Boundary

The NFL boundary position was determined in two stages, of which the second stage, the correction of the positions of individual points, is the more complicated and laborious to analyse.

The first stage comprises binarisation of each column of the LMED image according to the previously mentioned relationship (11), with the parameter pr assumed arbitrarily at around 0.1 (10%). Then, for each column n of the LBIN_NFL image, the position of the first pixel of each cluster kn of value '1' is calculated. Assuming further that each column n has Kn clusters, it is possible to write:

$$y_{NFL\_P}(k_n,n)=\min_{m\in(1,M)}\big(y_{NFL\_W}(m,n,k_n)\big) \quad (18)$$

where:

$$y_{NFL\_W}(m,n,k_n)=\begin{cases}M & \text{for } L_{ET\_N}(m,n)\neq k_n\\ m & \text{for } L_{ET\_N}(m,n)=k_n\end{cases} \quad (19)$$

and LET_N is the image formed as a result of labelling the clusters independently in each column of the LBIN_NFL image, for kn ∈ {1,2,3,…,Kn−1,Kn}.

Fig. 4-20 and Fig. 4-22 show LET_N images for the artificial input image LMED without added noise (Fig. 4-21) and with added Gaussian noise of variance σ = 0.2 (Fig. 4-23).

Fig. 4-20. Image LET_N formed from the input LMED image shown in Fig. 4-21.

Fig. 4-22. Image LET_N formed from the input LMED image shown in Fig. 4-23.

Fig. 4-21. Image LMED resulting from median filtering of the artificial image LGRAY, with the points yNFL_P marked in blue.

Fig. 4-23. Image LMED resulting from median filtering of the artificial image LGRAY with added Gaussian noise, with the points yNFL_P marked in blue.

The image from Fig. 4-23 was obtained with the following implementation:

L1=rand([201 200]);
xx=-1:0.01:1;
y=gauss(xx+0.5,0.2)+0.5*gauss(xx-0.1,0.05);
Lmed=y'*ones([1 200]);
Lmed=mat2gray(Lmed);
Lmed(:,50:100)=Lmed(:,50:100)*.2;
Lmed=imnoise(Lmed,'gaussian',0.02);
Lmed=medfilt2(Lmed,[3 3]);
figure; imshow(Lmed); hold on
xyinfy=[];
xyinfdl=[];
for ik=1:size(Lmed,2)
    grL1=Lmed(:,ik)>(max(Lmed(:,ik))*0.1);    % binarisation with pr=0.1
    lgrL1=bwlabel(grL1);                      % labelling of clusters in the column
    for jju=1:max(lgrL1)
        xyinfdl(jju,ik)=sum(lgrL1==jju);      % cluster size
        cuu=1:length(lgrL1);
        cuu(lgrL1~=jju)=[];
        xyinfy(jju,ik)=cuu(1);                % first pixel of the cluster (18), (19)
        plot(ik,cuu(1),'b*')
    end
end

As shown in Fig. 4-20 – Fig. 4-23, relationships (18) and (19) are very sensitive to noise and to small artefacts in the image, which cause additional, erroneous yNFL_P points to appear. In practice, however, this problem is not too troublesome, because even with a proper distribution of the yNFL_P points the determination of the NFL line is not an unambiguous and simple process, as illustrated by Fig. 4-24.

Fig. 4-24. Image LMED of an actual image LGRAY with the corresponding points yNFL_P marked in blue.

Fig. 4-25. Enlarged LMED image from Fig. 4-24.

These figures were obtained using the following algorithm:

    [Lgray,map]=imread('D:\OCT\FOLDERS\2.OCT\SKAN7.bmp');
    Lgray=Lgray(1:850,:);
    Lgray=ind2gray(Lgray,map);
    Lgray=double(Lgray)/255; Lorg=Lgray;
    Lmed=medfilt2(Lorg,[5 5]);
    Lmed=mat2gray(Lmed);
    figure; imshow(Lmed)
    grad_y_punkt=30;          % maximum allowed jump between neighbouring columns
    figure; imshow(Lmed); hold on
    [xNFL,yNFL,xyinfdl,xyinfy,ggtxnn,ggtynn,ggdlnn,xyinfdl_old,xyinfy_old]=OCT_NFL_line(Lmed,grad_y_punkt);
    plot(xNFL,yNFL,'r','LineWidth',2)

where the function OCT_NFL_line is intended to analyse the course of the NFL line and is described below.

The second stage of the NFL line determination is related to the analysis of the yNFL_P points along the ox axis. For consecutive yNFL_P points the derivative along the ox axis was calculated, and then a cluster analysis was performed, obtaining km clusters and yNFL_D, where for each km ∈ {1,2,3,…,Km−1,Km} the following condition is satisfied:

$$y_{NFL\_D}(k_m,m,n):\quad\left|\frac{\partial\, y_{NFL\_P}(k_n,n)}{\partial n}\right|<pr_d \quad (20)$$

where prd is the threshold limiting the maximum value of the derivative for consecutive points on the ox axis. This threshold is directly responsible for the obtained number of clusters and thereby for the number of sections analysed in the further part of the algorithm.
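
A minimal sketch of this derivative-based grouping, assuming the candidate points are stored as a vector y indexed by the column number n:

pr_d=30;                               % assumed threshold of the derivative
dy=abs(gradient(y));                   % |dy/dn| along the ox axis
lab=bwlabel(dy<pr_d);                  % consecutive smooth runs become clusters k_m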

Clusters containing too few elements (fewer than 20% of the largest cluster) are automatically removed. The others are analysed in terms of their arrangement in the image (coordinate m) and of the number of pixels in a given cluster (yNFL_H).

$$y_{NFL\_H}(k_m)=\sum_{n=1}^{N}\sum_{m=1}^{M}\big(y_{NFL\_D}(k_m,m,n)\big) \quad (21)$$

Thus, analysing the positions of the individual yNFL_S points and the number of pixels in a given cluster yNFL_H for which they were determined, it is possible to create weights yW for the analysed clusters (point groups), i.e.:

$$y_W(k_m)=\frac{y_{NFL\_H}(k_m)}{\max\limits_{k_m\in(1,K_m)}\big(y_{NFL\_H}(k_m)\big)}\cdot\varepsilon_P+\frac{y_{NFL\_S}(k_m)}{\max\limits_{k_m\in(1,K_m)}\big(y_{NFL\_S}(k_m)\big)}\cdot\varepsilon_S \quad (22)$$

where εS and εP are constants arbitrarily selected from the range 0–1, and

$$y_{NFL\_S}(k_m)=\max_{m\in(1,M),\ n\in(1,N)}\big(y_{NFL\_D}(k_m,m,n)\big) \quad (23)$$

In the next stage the cluster km with the largest weight, km*, is selected. Later on it is used as a start vector for the modified active contour method described earlier. This way the results presented in Fig. 4-26 are obtained.
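
In code form, the weighting (22) and the selection of the best cluster km* could be sketched as follows (yH and yS stand for the vectors yNFL_H and yNFL_S over the clusters; the values of εP and εS are assumptions):

eP=0.5; eS=0.5;                        % assumed values of the constants
yW=eP*yH/max(yH)+eS*yS/max(yS);        % weight (22) of every cluster k_m
[wmax,km_best]=max(yW);                % cluster with the largest weight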

Fig. 4-26. Image LMED with marked yNFL points for the best cluster km* with respect to the criterion set (turquoise) and with the results obtained with the active contour method (red).

Fig. 4-26 shows, in turquoise, the points of the best cluster km* with respect to the criterion set, and, in red, the yNFL results obtained with the active contour method.

Taking into account the above analysis, the final shape of the OCT_NFL_line function was formulated as follows:

function [xNFL,yNFL,xyinfdl,xyinfy,ggtxnn,ggtynn,ggdlnn,xyinfdl_old,xyinfy_old]=OCT_NFL_line(Lmed,grad_y_punkt)

xyinfy=[];
xyinfdl=[];
for ik=1:size(Lmed,2)                  % first pixels of clusters in each column
    grL1=Lmed(:,ik)>(max(Lmed(:,ik))*0.1);
    lgrL1=bwlabel(grL1);
    for jju=1:max(lgrL1)
        xyinfdl(jju,ik)=sum(lgrL1==jju);
        cuu=1:length(lgrL1);
        cuu(lgrL1~=jju)=[];
        xyinfy(jju,ik)=cuu(1);
        plot(ik,cuu(1),'b*')
    end
end
xyinfdl_old=xyinfdl;
xyinfy_old=xyinfy;
ggtxnn=[];
ggtynn=[];
ggdlnn=[];
while sum(sum(xyinfy(:,1:(end-1))))~=0     % group points into line sections
    ggtx=[];
    ggty=[];
    for hvi=1:(size(xyinfy,2)-1)
        if sum(xyinfy(:,hvi))~=0
            break
        end
    end
    for hv=hvi:(size(xyinfy,2)-1)
        if (min(abs(xyinfy(1,hv)-xyinfy(:,hv+1)))<grad_y_punkt)&(xyinfy(1,hv)~=0)
            vff=1:size(xyinfy,1);
            vff(abs(xyinfy(1,hv)-xyinfy(:,hv+1))>=grad_y_punkt)=[];
            vff=vff(1);
            xypam=xyinfy(1,hv);
            vff__=1:size(xyinfy,1); vff__(vff)=[];
            xyinfy(1:end,hv+1)=[xyinfy(vff,hv+1); xyinfy(vff__,hv+1)];
            xyinfdl(1:end,hv+1)=[xyinfdl(vff,hv+1); xyinfdl(vff__,hv+1)];
            xyinfy(1:end,hv)=[xyinfy(2:end,hv); 0];
            xyinfdl(1:end,hv)=[xyinfdl(2:end,hv); 0];
            ggtx=[ggtx,hv];
            ggty=[ggty,xypam];
        else
            xyinfy(1:end,hv)=[xyinfy(2:end,hv); 0];
            xyinfdl(1:end,hv)=[xyinfdl(2:end,hv); 0];
            break
        end
    end
    if length(ggty)>10                 % keep only sections longer than 10 points
        ggtxnn(size(ggtxnn,1)+1,1:length(ggtx))=ggtx;
        ggtynn(size(ggtynn,1)+1,1:length(ggty))=ggty;
        ggdlnn=[ggdlnn;[length(ggty) min(ggty)]];
    end
end
ggdlnn_leng=ggdlnn(:,1);
ggdlnn=[(1:size(ggdlnn,1))',ggdlnn];
ggdlnn(:,2)=ggdlnn(:,2)-min(ggdlnn(:,2));
ggdlnn(:,2)=ggdlnn(:,2)./max(ggdlnn(:,2));
ggdlnn_leng(ggdlnn(:,2)<(0.2),:)=[];   % remove clusters below 20% of the largest
ggdlnn(ggdlnn(:,2)<(0.2),:)=[];
for bniewazne=1:(size(ggdlnn,1).^2)    % remove sections lying above longer ones
    if size(ggdlnn,1)>=2
        usun_=zeros([1 size(ggdlnn,1)]);
        nr1=ggdlnn(1,1);
        x11=ggtxnn(nr1,:);
        y11=ggtynn(nr1,:);
        x11(y11==0)=[];
        y11(y11==0)=[];
        for nr_=2:size(ggdlnn,1)
            nr2=ggdlnn(nr_,1);
            x22=ggtxnn(nr2,:);
            y22=ggtynn(nr2,:);
            x22(y22==0)=[];
            y22(y22==0)=[];
            for iy=1:length(x11)
                xbn=1:length(x22);
                xbni=xbn(x22==x11(iy));
                if ~isempty(xbni)
                    if y11(iy)<y22(xbni(1))
                        usun_(nr_)=usun_(nr_)+1;
                    end
                end
            end
        end
        if sum(usun_)~=0
            ggdlnn(usun_>(ggdlnn_leng'*0.2),:)=[];
            ggdlnn_leng(usun_>(ggdlnn_leng'*0.2))=[];
            ggdlnn=[ggdlnn(2:end,:); ggdlnn(1,:)];
            ggdlnn_leng=[ggdlnn_leng(2:end); ggdlnn_leng(1,:)];
        else
            ggdlnn=[ggdlnn(2:end,:); ggdlnn(1,:)];
            ggdlnn_leng=[ggdlnn_leng(2:end); ggdlnn_leng(1,:)];
        end
    end
end
ggdlnn_s=sortrows(ggdlnn,-2);          % choose the longest sections
if size(ggdlnn_s,1)==2
    xNFL1=ggtxnn(ggdlnn_s(1,1),:);
    yNFL1=ggtynn(ggdlnn_s(1,1),:);
    xNFL2=ggtxnn(ggdlnn_s(2,1),:);
    yNFL2=ggtynn(ggdlnn_s(2,1),:);
    xNFL1(xNFL1==0)=[];
    yNFL1(yNFL1==0)=[];
    xNFL2(xNFL2==0)=[];
    yNFL2(yNFL2==0)=[];
    yNFL1_poczg=yNFL1(1)+std(yNFL1);
    yNFL1_poczd=yNFL1(1)-std(yNFL1);
    yNFL2_poczg=yNFL2(1)+std(yNFL2);
    yNFL2_poczd=yNFL2(1)-std(yNFL2);
    if min(xNFL1)<min(xNFL2)           % join the two sections if their ends match
        if (abs(yNFL1(end)-yNFL2_poczd)<std(yNFL1))|(abs(yNFL1(end)-yNFL2_poczg)<std(yNFL1))
            xNFL=[xNFL1 xNFL2];
        else
            if length(yNFL1)>length(yNFL2)
                xNFL=[xNFL1];
            else
                xNFL=[xNFL2];
            end
        end
    else
        if (abs(yNFL2(end)-yNFL1_poczd)<std(yNFL2))|(abs(yNFL2(end)-yNFL1_poczg)<std(yNFL2))
            xNFL=[xNFL2 xNFL1];
        else
            if length(yNFL1)>length(yNFL2)
                xNFL=[xNFL1];
            else
                xNFL=[xNFL2];
            end
        end
    end
else
    xNFL=ggtxnn(ggdlnn_s(1,1),:);
    xNFL(xNFL==0)=[];
end
filtr_med=50;
[xNFL,yNFL]=OCT_NFL_line_end(xNFL,xyinfdl_old,xyinfy_old,grad_y_punkt,filtr_med);
przyci_po_obu_x_proc=0.2;              % trim unstable ends of the course
y_dd=abs(diff(yNFL));
y_dd_lab=bwlabel(y_dd<(grad_y_punkt)/2);
num_1=y_dd_lab(round(length(y_dd_lab)*przyci_po_obu_x_proc));
num_end=y_dd_lab(round(length(y_dd_lab)*(1-przyci_po_obu_x_proc)));
x_sek=1:length(y_dd_lab);
x_sek_1=x_sek(y_dd_lab==num_1); x_sek_1=x_sek_1(1);
x_sek_end=x_sek(y_dd_lab==num_end);
x_sek_end=x_sek_end(end);
xNFL=xNFL(x_sek_1:x_sek_end);
yNFL=yNFL(x_sek_1:x_sek_end);

and the function OCT_NFL_line_end, intended for the filtration and tracing of the left and right sides of the course:

function [xNFL,yNFL]=OCT_NFL_line_end(xNFL_old,xyinfdl,xyinfy,grad_y_punkt,filtr_med)
x_start=xNFL_old(round(end/2));        % start tracing from the middle of the course
xNFL=[];
yNFL=[];
xyinfy(1,:)=medfilt2(xyinfy(1,:),[1 filtr_med]);   % median filtering of the first row
for hv=x_start:(size(xyinfy,2)-1)      % trace to the right
    if (min(abs(xyinfy(1,hv)-xyinfy(:,hv+1)))<grad_y_punkt)&(xyinfy(1,hv)~=0)
        vff=1:size(xyinfy,1);
        vff(abs(xyinfy(1,hv)-xyinfy(:,hv+1))>=grad_y_punkt)=[];
        vff=vff(1);
        xypam=xyinfy(1,hv);
        vff__=1:size(xyinfy,1); vff__(vff)=[];
        xyinfy(1:end,hv+1)=[xyinfy(vff,hv+1); xyinfy(vff__,hv+1)];
        xyinfdl(1:end,hv+1)=[xyinfdl(vff,hv+1); xyinfdl(vff__,hv+1)];
        xyinfy(1:end,hv)=[xyinfy(2:end,hv); 0];
        xyinfdl(1:end,hv)=[xyinfdl(2:end,hv); 0];
        xNFL=[xNFL; hv];
        yNFL=[yNFL; xypam];
    else
        xyinfy(1:end,hv)=[xyinfy(2:end,hv); 0];
        xyinfdl(1:end,hv)=[xyinfdl(2:end,hv); 0];
        break
    end
end
for hv=(x_start-1):-1:2                % trace to the left
    if (min(abs(xyinfy(1,hv)-xyinfy(:,hv-1)))<grad_y_punkt)&(xyinfy(1,hv)~=0)
        vff=1:size(xyinfy,1);
        vff(abs(xyinfy(1,hv)-xyinfy(:,hv-1))>=grad_y_punkt)=[];
        vff=vff(1);
        xypam=xyinfy(1,hv);
        vff__=1:size(xyinfy,1); vff__(vff)=[];
        xyinfy(1:end,hv-1)=[xyinfy(vff,hv-1); xyinfy(vff__,hv-1)];
        xyinfdl(1:end,hv-1)=[xyinfdl(vff,hv-1); xyinfdl(vff__,hv-1)];
        xyinfy(1:end,hv)=[xyinfy(2:end,hv); 0];
        xyinfdl(1:end,hv)=[xyinfdl(2:end,hv); 0];
        xNFL=[hv; xNFL];
        yNFL=[xypam; yNFL];
    else
        xyinfy(1:end,hv)=[xyinfy(2:end,hv); 0];
        xyinfdl(1:end,hv)=[xyinfdl(2:end,hv); 0];
        break
    end
end
xNFL=round(xNFL);
yNFL=round(yNFL);

Unfortunately, the method described does not provide the expected results in all analysed cases. An example is the situation presented in Fig. 4-27, fortunately seldom occurring in practice. Such situations occur for actual images if they contain a lot of noise, if large eye pathologies exist, or if shadows are strongly visible. Such cases (where even for an OCT operator it is difficult to answer clearly where an individual layer starts and ends) occur quite seldom in practice.

Fig. 4-27. Demonstrative images of layers' arrangement on an OCT image for which the algorithm described operates improperly.

4.5. Correction of Layers Range

The courses yIO, yRPE, yNFL obtained at the earlier stages will now be subject to a common analysis to eliminate additional disturbances and improve their quality. The yIO, yRPE, yNFL courses must fulfil the following conditions resulting from the medical premises of eye structure (the conditions are given in a Cartesian coordinate system):

  • yRPE < yIO < yNFL for each x,
  • yIO − yRPE ≈ 0.1 mm – the initial value starting the operation of the modified active contour method,
  • yNFL − yIO ≈ 0 to 1 mm; for some x it may even be that yIO > yNFL and/or yRPE > yNFL.

The implementation of this moderately simple correction of the layers' arrangement is left to the Reader.
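
A minimal sketch of such a correction, under the assumption that yRPE, yIO and yNFL are vectors of equal length expressed in image rows (rows grow downwards, so the Cartesian conditions above reverse their signs) and that px_per_mm is a hypothetical scale factor:

px_per_mm=722/2;                       % assumed: 722 rows span the 2 mm window
for n=1:length(yRPE)
    if yIO(n)>=yRPE(n)                 % condition yRPE<yIO violated
        yIO(n)=yRPE(n)-round(0.1*px_per_mm);   % restore the ~0.1 mm gap
    end
    if yIO(n)-yNFL(n)>round(1*px_per_mm)       % yNFL-yIO limited to ~1 mm;
        yNFL(n)=yIO(n)-round(1*px_per_mm);     % yIO>yNFL is locally admissible
    end
end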

4.6. Final Form of Algorithm

Based on the considerations carried out in the previous sections, the final form of the algorithm was formulated as follows:

    [Lgray,map]=imread('D:\OCT\FOLDERS\2.OCT\SKAN7.bmp');
    Lgray=Lgray(1:850,:);
    Lgray=ind2gray(Lgray,map);
    Lgray=double(Lgray)/255; Lorg=Lgray;
    Lmed=medfilt2(Lorg,[5 5]);
    Lmed=mat2gray(Lmed);
    [xRPE,yRPE,xRPEz,yRPEz]=OCT_global_line(Lmed);   % RPE course (section 4.2.2)
    grad_y_punkt=30;
    [xNFL,yNFL,xyinfdl,xyinfy,ggtxnn,ggtynn,ggdlnn,xyinfdl_old,xyinfy_old]=OCT_NFL_line(Lmed,grad_y_punkt);
    z_gd1=60;
    z_gd2=60;
    z_sr1=16;
    z_sr2=16;
    z_kat1=12;
    z_kat2=12;
    z_us_xy1=12;
    z_us_xy2=12;
    [yRPEd,ygRPEd]=OCT_activ_cont(Lmed,xRPE,yRPE+50,z_gd1,z_sr1,z_kat1,z_us_xy1,-1);
    [yONL,ygONL]=OCT_activ_cont(Lmed,xRPE,yRPE-50,z_gd2,z_sr2,z_kat2,z_us_xy2,1);
    figure; imshow(Lmed); hold on
    plot(xRPE,yRPE,'-r*','LineWidth',2);
    plot(xRPEz,yRPEz,'-g*','LineWidth',2);
    plot(xNFL,yNFL,'b','LineWidth',2)
    plot(xRPE,yONL,'y','LineWidth',2)
    plot(xRPE,yRPEd,'m','LineWidth',2)

Consequently, the following results were obtained - Fig. 4-28 and Fig. 4-29.

Fig. 4-28. Image LMED with the layer boundaries yNFL, yRPE, yONL marked in colours, and yRPED as the limit of the RPE layer analysis.

Fig. 4-29. Enlarged image LMED from Fig. 4-28.

In the source code presented, apart from the previously presented OCT_activ_cont, the function OCT_global_line has been used, which has the following form:

function [x,yrpes,dxx,dyy]=OCT_global_line(Lmed)
x=(1:size(Lmed,2))';
yyy=(1:size(Lmed,1))';
yrpe=[];
Lbinrpe=zeros(size(Lmed));
for ik=1:size(Lmed,2)
    xx_best=[];
    Llabp=bwlabel(Lmed(:,ik)>(max(Lmed(:,ik))*0.9));
    Lbinrpe(:,ik)=Llabp;
    for tt=1:max(Llabp)
        xxl=yyy(Llabp==tt);
        xx_best=[xx_best; mean(xxl)];
    end
    if ~isempty(xx_best)
        yrpe(ik)=max(xx_best);
    else
        yrpe(ik)=0;
    end
end
figure; imshow(mat2gray(Lbinrpe*0.5+Lmed)); hold on;
plot(yrpe,'r*-')
yg=gradient(yrpe);
ygg=ones([1 length(yrpe)]); ygg(abs(yg)>20)=0;
ygl=bwlabel(ygg);
figure; imshow(mat2gray(Lbinrpe*0.5+Lmed)); hold on;
palett=jet(max(ygl));
for iiih=1:max(ygl(:))
    plot(x(ygl==iiih),yrpe(ygl==iiih),'Color',palett(iiih,:),'LineWidth',4);
end
pam_dl=[];
figure; imshow(mat2gray(Lbinrpe*0.5+Lmed)); hold on
for iiik=1:max(ygl(:))
    for iiikk=iiik:max(ygl(:))
        if iiik<=iiikk
            ygk=[yrpe(ygl==iiik),yrpe(ygl==iiikk)];
            xgk=[x(ygl==iiik); x(ygl==iiikk)];
        else
            ygk=[yrpe(ygl==iiikk),yrpe(ygl==iiik)];
            xgk=[x(ygl==iiikk); x(ygl==iiik)];
        end
        if length(ygk)>10
            P=polyfit(xgk',ygk,2); yrpes=round(polyval(P,x));
            plot(yrpes,'g*-')
            pam_dl=[pam_dl;[iiik iiikk sum(abs(yrpe-yrpes')<20)]];
        end
    end
end
pam_s=sortrows(pam_dl,-3);
if size(pam_s,1)==1
    ygk=[yrpe(ygl==pam_s(1,1))];
    xgk=[x(ygl==pam_s(1,1))];
else
    ygk=[yrpe(ygl==pam_s(1,1)),yrpe(ygl==pam_s(1,2))];
    xgk=[x(ygl==pam_s(1,1)); x(ygl==pam_s(1,2))];
end
P=polyfit(xgk',ygk,2); yrpes=round(polyval(P,x));
plot(x,yrpes,'w*-');
yrpe=yrpe(:);
plot(x,yrpe,'m*-');
dx=x; dx(abs(yrpe-yrpes)>20)=[];
yrpe(abs(yrpe-yrpes)>20)=[];
dxl=bwlabel(diff(dx)<125);
pdxl=[];
for qw=1:max(dxl)
    pdxl=[pdxl;[qw,sum(dxl==qw)]];
end
pdxl(pdxl(:,2)<50,:)=[];
dxx=[]; dyy=[];
for wq=1:size(pdxl,1)
    dxx=[dxx; dx(dxl==pdxl(wq,1))];
    dyy=[dyy; yrpe(dxl==pdxl(wq,1))];
end

The result presented is affected mainly by the arguments of the OCT_activ_cont function, which, in accordance with the description quoted, determine the type of layer recognised.

The algorithm presented forms a uniform whole related to the analysis of layers within the fundus of the eye on flat OCT images. The results obtained may be enhanced by an automated analysis of 'holes' in the image, presented below.

4.7. Determination of ‘Holes’ on the Image

To determine holes in the image, labelling of the binary image LBIN_IP (11) was applied, obtaining the image LET shown in Fig. 4-30.

Fig. 4-30. LET image.

Examples of the results obtained are shown in the table in Fig. 4-31. Each object (cluster) ko received a label, and the coordinates (mo, no) of the position of its centre of gravity were determined. In addition, the surface area Po is calculated. The source code is provided below:

Fig. 4-31. Table of results obtained for consecutive clusters ko (position of the centre of gravity (mo, no) and surface area Po).

[Lgray,map]=imread('D:\OCT\FOLDERS\2.OCT\SKAN7.bmp');
Lgray=Lgray(1:850,:);
Lgray=ind2gray(Lgray,map);
Lgray=double(Lgray)/255; Lorg=Lgray;
Lmed=medfilt2(Lorg,[5 5]);
Lmed=mat2gray(Lmed);
[xRPE,yRPE,xRPEz,yRPEz]=OCT_global_line(Lmed);
L11=filter2(ones(3),Lmed)/(3*3);      % local averaging
L12=imregionalmin(L11);               % local brightness minima ('holes')
L13=~imopen(L12,ones(9));
[Lbin,L18]=OCT_areaa(L13,xRPE,yRPE);
Let=bwlabel(Lbin);                    % labelling of the clusters k_o
Let_=Let;
Let_(edge(double(L13))==1)=max(Let(:))+1;
figure; imshow(Let_,[]); pall=jet(max(Let(:)));
colormap([pall; [1 1 1]]); colorbar; hold on
[XX,YY]=meshgrid(1:size(Let,2),1:size(Let,1));
kmnp=[];
for ju=2:max(Let(:))
    Let4=Let==ju;
    Letx=Let4.*XX; Letx(Letx==0)=[];
    Lety=Let4.*YY; Lety(Lety==0)=[];
    text(median(Letx),median(Lety),mat2str(sum(Let4(:))),'FontSize',15,'Color',[1 1 1])
    kmnp=[kmnp; [ju,median(Letx),median(Lety),sum(Let4(:))]];  % k_o, n_o, m_o, P_o
end
kmnp

For diagnostic reasons, the set of analysed clusters (given in Fig. 4-30) was narrowed down to those whose centre of gravity falls within the range between yRPE and yNFL.
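
A sketch of this narrowing, assuming the kmnp rows [ko, no, mo, Po] built above and the courses yNFL, yRPE from the earlier stages (image rows grow downwards, so the NFL course lies above the RPE course):

yNFLi=interp1(xNFL,yNFL,kmnp(:,2),'linear','extrap');  % NFL row at the cluster column
yRPEi=interp1(xRPE,yRPE,kmnp(:,2),'linear','extrap');  % RPE row at the cluster column
kmnp=kmnp(kmnp(:,3)>yNFLi & kmnp(:,3)<yRPEi,:);        % keep clusters between the layers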

4.8. Assessment of Results Obtained Using the Algorithm Proposed

An example implementation of an algorithm intended for the analysis of layers occurring on an OCT image has been presented. This methodology has been applied to the analysis of around 500 cases, and during verification it determined layers erroneously for 5% of the images. Examples of properly and improperly recognised layers are shown in Fig. 4-32 and Fig. 4-33.

Fig. 4-32. Examples of OCT resultant images with properly recognised yRPE, yIO, yNFL marked.

Fig. 4-33. Examples of OCT resultant images with improperly recognised yRPE, yIO, yNFL marked.

The algorithm proposed was implemented in the Matlab environment and operates at a rate of one image per 15 s on a P4 CPU 3 GHz processor with 2 GB RAM. Additionally, an application in the C language was developed, which after time optimisation analyses the same image on the same computer within 0.85 s.

A Reader implementing the above functions should note the delays introduced by the graphics card during image display. This concerns in particular the resultant images, for which results were presented in the form of graphs or points on a flat greyscale image.

4.9. Layers Recognition on a Tomographic Eye Image Based on Random Contour Analysis

4.9.1. Determination of Direction Field Image

As in [25] and [40], the input image LGRAY is initially subject to filtration using a median filter with a mask h of size Mh×Nh = 3×3. The obtained image LM is subject to the analysis presented in the next sections.

The first stage of the edge detection method used [14], [35], [41] consists in convolving the input image LM of MM×NM resolution, i.e.

$$L_{GX}(m,n)=\sum_{m_h=-M_h/2}^{M_h/2}\ \sum_{n_h=-N_h/2}^{N_h/2} L_M(m+m_h,n+n_h)\cdot h_x(m_h,n_h) \quad (24)$$

$$L_{GY}(m,n)=\sum_{m_h=-M_h/2}^{M_h/2}\ \sum_{n_h=-N_h/2}^{N_h/2} L_M(m+m_h,n+n_h)\cdot h_y(m_h,n_h) \quad (25)$$

with Gaussian filter masks, e.g. of 3×3 size [14], [35], [41]. Based on that, the gradient matrix in both directions, necessary to determine the edges, has been determined in accordance with the classical relationship:

$$L_{GXY}(m,n)=\sqrt{L_{GX}(m,n)^2+L_{GY}(m,n)^2} \quad (26)$$

and in particular its normalised form, i.e.:

$$L_G(m,n)=\frac{L_{GXY}(m,n)}{\max\limits_{m,n}\big(L_{GXY}(m,n)\big)} \quad (27)$$

The image of the direction field Lα has been determined for each pair of pixels LGX(m,n) and LGY(m,n), and in general for the LGX and LGY images, i.e.:

$$L_{\alpha}(m,n)=\operatorname{atan}\left(\frac{L_{GY}(m,n)}{L_{GX}(m,n)}\right) \quad (28)$$

The implementation of the above relationships in Matlab looks as follows:

Lm=zeros(100); Lm(10:30,10:20)=1; Lm(40:80,50:70)=1;
Lm=imnoise(Lm,'gaussian',0.2);
Lm=medfilt2(Lm,[3 3]);
Lm=mat2gray(Lm);
figure; imshow(Lm,'notruesize')
Nx1=5;
Sigmax1=24;
Nx2=5;
Sigmax2=24;
Theta1=pi/2;
Ny1=5;
Sigmay1=24;
Ny2=5;
Sigmay2=24;
Theta2=0;
alfa=0.15;
hx=OCT_NOISE_gauss(Nx1,Sigmax1,Nx2,Sigmax2,Theta1);
Lgx=conv2(Lm,hx,'same');              % gradient in the x direction (24)
hy=OCT_NOISE_gauss(Ny1,Sigmay1,Ny2,Sigmay2,Theta2);
Lgy=conv2(Lm,hy,'same');              % gradient in the y direction (25)
Lalp=atan2(Lgy,Lgx);                  % direction field (28)
Lalp=Lalp*180/pi;
Lg=mat2gray(abs(Lgx)+abs(Lgy));       % normalised gradient image (26), (27)
figure; imshow(Lg,[],'notruesize'); colormap('jet'); colorbar
figure; imshow(Lalp,[],'notruesize'); colormap('jet'); colorbar

where OCT_NOISE_gauss has the form:

function h = OCT_NOISE_gauss(n1,sigma1,n2,sigma2,theta)
r=[cos(theta) -sin(theta);
   sin(theta)  cos(theta)];            % rotation matrix
for i = 1:n2
    for j = 1:n1
        u = r*[j-(n1+1)/2 i-(n2+1)/2]';
        h(i,j) = gauss(u(1),sigma1)*OCT_gauss(u(2),sigma2);
    end
end
h = h/sqrt(sum(sum(abs(h).*abs(h))));  % energy normalisation

function y = OCT_gauss(x,std)
y = -x*gauss(x,std)/std^2;             % first derivative of the Gaussian

function y = gauss(x,std)
y = exp(-x.^2/(2*std^2))/(std*sqrt(2*pi));

As a result the images presented below are obtained.

Fig. 4-34. Artificial Lm image.

Fig. 4-35. Artificial LG image.

Fig. 4-36. Artificial Lα image.

These images, Lα and LG, are further used in the analysis, where the random selection of starting points is the next step.

4.9.2. Starting Points Random Selection and Correction

Starting points, and – based on them – subsequent ones, will be used in consecutive stages of the algorithm's operation to determine parts of the layers' contours. The initial positions of the starting points were determined at random. Random values from the uniform range (0,1) were obtained for each point of an image matrix Lo with the resolution of image LM, i.e. M×N. For the (random) image Lo created this way, binarisation is carried out with the threshold pr, which is the first of the matched (described later) parameters of the algorithm; the obtained binary matrix Lu is described by the relationship:

$$L_u(m,n)=\begin{cases}1 & \text{for } L_o(m,n) < L_G(m,n)\cdot pr\\ 0 & \text{otherwise}\end{cases} \quad (29)$$

In this case:

figure; imshow(Lg,[],'notruesize'); hold on
pr=0.3;
Lrand=rand(size(Lg));
[n,m]=meshgrid(1:size(Lrand,2),1:size(Lrand,1));
n(Lrand>(Lg*pr))=[];                   % keep points satisfying (29)
m(Lrand>(Lg*pr))=[];
plot(n,m,'r.');

The result obtained is presented in the following figure (Fig. 4-37).

Fig. 4-37. Image LG with marked randomly selected points.

Starting points o*i,j (where index 'i' marks the consecutive starting point, while 'j' the subsequent points created on its basis) satisfy the condition Lu(m,n)=1 – that is, the starting points are o*i,1. This way the selection of the threshold value pr within the range (0,1) influences the number of starting points, which is the larger, the brighter the grey level (contour) in the LG image. In the next stage the starting points' positions are modified in the set area H of MH×NH size. The modification consists in the correction of the position of points o*i,1, of coordinates (m*i,1, n*i,1), to new coordinates (mi,1, ni,1), where shifts within the range mi,1 = m*i,1±MH/2 and ni,1 = n*i,1±NH/2 are possible. The coordinates change within the area of ±MH/2 and ±NH/2 to the position at which LG(m*i,1±MH/2, n*i,1±NH/2) achieves its highest value, i.e.:

$$L_G(m_{i,1},n_{i,1})=\max_{\substack{m^*_{i,1}\pm M_H/2\\ n^*_{i,1}\pm N_H/2}}\Big(L_G\big(m^*_{i,1}\pm\tfrac{M_H}{2},\ n^*_{i,1}\pm\tfrac{N_H}{2}\big)\Big) \quad (30)$$

Then the correction of repeating points is carried out – points with the same coordinates are removed. The source code looks as follows:

H=ones(5);
[n,m]=OCT_NOISE_area(n,m,Lg,H);
plot(n,m,'g.'); hold on

where OCT_NOISE_area

function [n,m]=OCT_NOISE_area(n,m,Lg,H)
xn=[];
yn=[];
[xr,yr]=meshgrid(1:size(H,2),1:size(H,1));
for iw=1:length(n)
    ddx=size(H,2)/2;
    ddy=size(H,1)/2;
    xp=round(n(iw)-ddx); xk=round(n(iw)+ddx-1);
    yp=round(m(iw)-ddy); yk=round(m(iw)+ddy-1);
    if (xp<1) | (yp<1) | (xk>size(Lg,2)) | (yk>size(Lg,1))
        xn(iw)=n(iw);                  % area H reaches outside the image - no correction
        yn(iw)=m(iw);
    else
        Lff=Lg(yp:yk,xp:xk);
        xr_=xr; yr_=yr;
        xr_(Lff~=max(max(Lff)))=[];    % position of the maximum of LG within H (30)
        yr_(Lff~=max(max(Lff)))=[];
        xn(iw)=n(iw)+xr_(1)-ddx;
        yn(iw)=m(iw)+yr_(1)-ddy;
    end
end
n=round(xn); m=round(yn);
n(n<=0)=1; m(m<=0)=1;                  % clip coordinates to the image
n(n>size(Lg,2))=size(Lg,2);
m(m>size(Lg,1))=size(Lg,1);

The obtained results are presented in Fig. 4-38.

Fig. 4-38. Image LG with the randomly selected points marked red and their correction marked green.

4.9.3. Iterative Determination of Contour Components

To determine layers on an OCT image, contour components – parts of the contour subject to later modification and processing – have been determined in the following way. For each randomly selected point o*i,1 of coordinates (m*i,1, n*i,1), modified (in the sense of its position) to oi,1 of coordinates (mi,1, ni,1), an iterative process is carried out which consists in looking for consecutive points oi,2, oi,3, oi,4, oi,5, etc., with local modification of their positions (described in the previous section), starting from oi,1, in accordance with the relationship:

$$\begin{cases}m_{i,j+1}=m_{i,j}+A_{i,j}\cdot\sin\big(L_{\alpha}(m_{i,j},n_{i,j})\big)\\ n_{i,j+1}=n_{i,j}+A_{i,j}\cdot\cos\big(L_{\alpha}(m_{i,j},n_{i,j})\big)\end{cases} \quad (31)$$

A demonstrative illustration of the iterative process is shown in Fig. 4-39.

Fig. 4-39. Demonstrative diagram of the iterative process of contour components determination.

In the case of the described iterative process of contour components determination it is necessary to introduce a number of limitations (further parameters), comprising:

  • jMAX – maximum number of iterations – a limitation aimed at eliminating looping of the algorithm if points oi,j of different positions are determined each time and the contour takes the shape of e.g. a spiral.
  • Stopping the iterative process if it is detected that mi,j = mi,j+1 and ni,j = ni,j+1. Such a situation happens most often if Ai,j is close to or higher than MH or NH. As in the case of the starting points' random selection and correction, also here it may occur that after the correction mi,j = mi,j+1 and ni,j = ni,j+1.
  • Stopping the iterative process if mi,j > MM or ni,j > NM, that is in cases when the indicated point oi,j lies outside the image.
  • Stopping the iterative process if |Lα(mi,j, ni,j) − Lα(mi,j+1, ni,j+1)| > Δα, where Δα is the next parameter, set for the acceptable contour curvature.

At this stage consecutive contour components for set parameters are obtained. These parameters comprise:

  • mask sizes hx and hy ((24), (25)) – closely related to the image resolution and to the size of the areas identified, adopted for MM×NM = 864×1024 as MH×NH = 23×23,
  • pr – threshold responsible for the number of starting points (29) – changed practically within the range 0–0.1,
  • jMAX – the maximum acceptable number of iterations – set arbitrarily at 100,
  • MH×NH – size of the correction area, a square area, changed within the range from MH×NH = 5×5 to MH×NH = 25×25,
  • Ai,j – amplitude, constant for individual i, j, set at Ai,j = MH,
  • Δα – acceptable maximum change of the angle between consecutive contour points, set within the range 10–70°.

For the artificial image presented in Fig. 4-40 an iterative process of contour determination has been performed, assuming pr = 0.1, Δα = 45°, MH×NH = 5×5. The results obtained are presented in Fig. 4-40.

Fig. 4-40. Artificial input image with marked contour components.

The source code of the iterative process of contour components determination is presented below:

Lz=zeros(size(Lalp));
Lz2=zeros(size(Lalp));
A=5;                                   % amplitude A_ij of the step (31)
delta_alph=50;                         % acceptable change of the angle
n_1=[]; m_1=[];
al_1=[];
for i=1:length(n)                      % for every starting point o_i,1
    ns_=[];
    ms_=[];
    ks_=[];
    ns_(1)=[n(i)];
    ms_(1)=[m(i)];
    ii=1;
    alp_1=Lalp(ms_(ii),ns_(ii));
    al_1(i,1)=[alp_1];
    kat_r=0;
    while kat_r<delta_alph
        alp_1=Lalp(ms_(end),ns_(end));
        n_p1=round(ns_(end)+A*cos((alp_1+90)*pi/180));   % next point (31)
        m_p1=round(ms_(end)+A*sin((alp_1+90)*pi/180));
        if (n_p1<1)|(m_p1<1)|(n_p1>size(Lalp,2))|(m_p1>size(Lalp,1))
            break                      % point outside the image
        end
        [n_pp1,m_pp1]=OCT_NOISE_area(n_p1,m_p1,Lg,H);    % local position correction
        if sum(sum([round(m_pp1)==ms_',round(n_pp1)==ns_'],2)==2)>1
            disp('zabezpiecz')         % the same point visited again - stop
            break
        end
        if ii>100                      % maximum number of iterations j_MAX
            [i, ii]
            break
        end
        ii=ii+1;
        [nss,mss]=OCT_NOISE_line([ns_(end),n_pp1],[ms_(end),m_pp1]);
        ns_=[ns_;round(nss')];
        ms_=[ms_;round(mss')];
        ks_(ii)=alp_1;
        kat_r=abs(alp_1-Lalp(ms_(end),ns_(end)));
        if kat_r>180; kat_r=180-kat_r; end
    end
    n_1(i,1:length(ns_))=ns_;
    m_1(i,1:length(ms_))=ms_;
    al_1(i,1:length(ks_))=ks_;
    for im=1:length(ns_)
        Lz(ms_(im),ns_(im))=Lz(ms_(im),ns_(im))+1;       % sum of overlapping points (33)
    end
    plot(ns_,ms_,'g-*','LineWidth',3)
    pause(0.00000001)
end
figure; imshow(Lz,[],'notruesize'); colormap('jet'); colorbar

where OCT_NOISE_line is a function intended for the generation of discrete points on the section connecting the given points, i.e.:

function [n_,m_]=OCT_NOISE_line(n,m)
if (abs(n(1)-n(2))==0)&(abs(m(1)-m(2))==0)
    n_=n; m_=m;                        % the points coincide
else
    if abs(n(1)-n(2))<abs(m(1)-m(2))   % step along the longer direction
        if m(1)<m(2)
            m_=m(1):m(2);
        else
            m_=m(1):-1:m(2);
        end
        if n(1)~=n(2)
            n_=n(1):((n(2)-n(1))/(length(m_)-1)):n(2);
        else
            n_=ones(size(m_))*n(1);
        end
    else
        if n(1)<n(2)
            n_=n(1):n(2);
        else
            n_=n(1):-1:n(2);
        end
        if m(1)~=m(2)
            m_=m(1):((m(2)-m(1))/(length(n_)-1)):m(2);
        else
            m_=ones(size(n_))*m(1);
        end
    end
end

When analysing the results presented in Fig. 4-40, it should be noticed that the iterative process is stopped only when mi,j = mi,j+1 and ni,j = ni,j+1 (as mentioned before), that is, only if the points oi,j and oi,j+1 have the same position. This condition does not apply to points oi,j which have the same coordinates but different 'i', i.e. which originated, at a specific iteration, from different starting points. Easing this condition leads to the origination of overlapping contour components (Fig. 4-41), which will be analysed in the next sections.

Fig. 4-41. Artificial input image with marked overlapping contour components – the number of overlapping points of the same coordinates is shown in pseudocolours.

4.9.4. Determination of Contours from Their Components

As presented in Fig. 4-41 in the previous section, the iterative process carried out may lead to the overlapping of points oi,j of the same coordinates (mi,j, ni,j) originating from different starting points. This property is used for the final determination of the layers' contour on an OCT image. In the first stage the image Lz from Fig. 4-41 is subject to binarisation, i.e. the image that originated as follows:

$$L_{Z,j}(m,n)=\begin{cases}1 & \text{if } m=m_{i,j}\ \wedge\ n=n_{i,j}\\ 0 & \text{otherwise}\end{cases} \quad (32)$$

for j = 1, 2, 3, …, and finally LZ(m,n):

$$L_Z(m,n)=\sum_{j} L_{Z,j}(m,n) \quad (33)$$

$$L_{ZB}(m,n)=L_Z(m,n)>p_b \quad (34)$$
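
In code, the binarisation (34) of the summed contour-components image is a single comparison:

pb=2;                                  % threshold of the number of overlapping points
Lzb=Lz>pb;                             % binary image L_ZB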

where LZB is a binary image originating from the binarisation of the image Lz with threshold pb. The selection of the threshold pb is a key element for the further analysis and correction of the generated contour. In a general case a situation may occur where, despite the relatively low assumed value pr of the threshold, a selected starting point oi,1 is situated outside the object's edge. The next iterations may then 'connect' it (in consecutive processes (32), (33)) with the remaining part. In such a case the process of removing protruding branches should be carried out – like branch cutting in skeletonisation. Here the situation is a bit easier – there are two possibilities of implementing this process: increasing the threshold value pb, or considering the brightness value LG(mi,j, ni,j) – Fig. 4-42.

Fig. 4-42. Artificial input image including an enlargement of an example area, with contour components marked in green and the preliminary randomly selected points in red.

4.9.5. Setting the Threshold of Contour Components Sum Image

On the one hand, selecting a high threshold pb when obtaining the image LZB leads to retaining those contour components for which the largest number of oi,j points overlapped for various 'i' (Fig. 4-43, Fig. 4-44). On the other hand, contour discontinuities may then occur. Therefore the second mentioned method of obtaining the final form of the contour was selected, which consists in considering the values LG(mi,j, ni,j) for Lz(mi,j, ni,j) = 1 and higher.

Fig. 4-43. Protruding contour branch (green) as an artefact of the method described, particularly visible for noise-affected images, and the result of removing the protruding branches (black).

Fig. 4-44. Protruding contour branch as an artefact of the method described, in a real OCT image.

Assuming that two non-overlapping points o1,j and o2,j have been randomly selected, such that m1,j ≠ m2,j or n1,j ≠ n2,j, the values LM(m1,j, n1,j) and LM(m2,j, n2,j) were determined for consecutive j – Fig. 4-45.

Fig. 4-45. Graph of the changes of the values LM(m1,j, n1,j) – red, and LM(m2,j, n2,j) – green, for consecutive points j.

Then a maximum value was determined for each sequence of oi,j points:

$$O_m(i)=\max_{j}\big(L_M(m_{i,j},n_{i,j})\big) \quad (35)$$

Then all oi,j points were removed which satisfied the condition LM(mi,j, ni,j) < Om(i)·pj, where pj is a threshold (precisely, the percentage of Om(i) below which points are removed). To prevent the introduction of discontinuities, only points at the beginning of the contour component are removed. The value was arbitrarily set to pj = 0.8. The results obtained are shown in Fig. 4-43 and Fig. 4-46. The example results shown in Fig. 4-46 were obtained for a real OCT image for pr = 0.02, Δα = 80°, MH×NH = 35×35, pb = 2, pj = 0.8. Correctly determined contour components are visible, as well as other contour fragments which, because of the form of relationship (34) and the limitation on Om(i), have not been removed. On the other hand, the number and form of the available parameters allows considerable freedom in their selection so as to obtain the expected results. The final form of the algorithm was formulated on this basis.

Fig. 4-46. Example of results obtained for a real OCT image for pr=0.02, Δα=80°, MH×NH=35×35, pb=2, pj=0.8.

[Lgray,map]=imread('D:\OCT\FOLDERS\2.OCT\SKAN7.bmp');
Lgray=Lgray(1:850,:);
Lgray=ind2gray(Lgray,map);
Lgray=double(Lgray)/255; Lorg=Lgray;
L=imresize(Lgray,0.5);
Lm=medfilt2(L,[3 3]);
Lm=mat2gray(Lm);
figure; imshow(Lm)
    Nx1=5;
    Sigmax1=24;
    Nx2=5;
    Sigmax2=24;
    Theta1=pi/2;
    Ny1=5;
    Sigmay1=24;
    Ny2=5;
    Sigmay2=24;
    Theta2=0;
    alfa=0.15;
hx=OCT_NOISE_gauss(Nx1,Sigmax1,Nx2,Sigmax2,Theta1);
Lgx=conv2(Lm,hx,'same');
hy=OCT_NOISE_gauss(Ny1,Sigmay1,Ny2,Sigmay2,Theta2);
Lgy=conv2(Lm,hy,'same');
Lalp=atan2(Lgy,Lgx);
Lalp=Lalp*180/pi;
Lg=mat2gray(abs(Lgx)+abs(Lgy));
figure; imshow(Lg,[],'notruesize'); colormap('jet');
colorbar
figure; imshow(Lalp,[],'notruesize'); colormap('jet');
colorbar
figure; imshow(Lg,[],'notruesize'); hold on
pr=0.05;
Lrand=rand(size(Lg));
[n,m]=meshgrid(1:size(Lrand,2),1:size(Lrand,1));
n(Lrand>(Lg*pr))=[];
m(Lrand>(Lg*pr))=[];
plot(n,m,'r.');
H=ones(5);
[n,m]=OCT_NOISE_area(n,m,Lg,H);
plot(n,m,'g.'); hold on
Lz=zeros(size(Lalp));
A=5;
delta_alph=50;
n_1=[]; m_1=[];
al_1=[];
for i=1:length(n)
    ns_=[];
    ms_=[];
    ks_=[];
    nma_=[];
    ns_(1)=[n(i)];
    ms_(1)=[m(i)];
    ii=1;
    alp_1=Lalp(ms_(ii),ns_(ii));
    al_1(i,1)=[alp_1];
    kat_r=0;
    while kat_r<delta_alph
        alp_1=Lalp(ms_(end),ns_(end));
        n_p1=round(ns_(end)+A*cos((alp_1+90)*pi/180));
        m_p1=round(ms_(end)+A*sin((alp_1+90)*pi/180));
        if (n_p1<1)|(m_p1<1)|(n_p1>size(Lalp,2))|(m_p1>size(Lalp,1))
            break
        end
        [n_pp1,m_pp1]=OCT_NOISE_area(n_p1,m_p1,Lg,H);
        % loop guard - stop if the candidate point was already visited
        if sum(sum([round(m_pp1)==ms_',round(n_pp1)==ns_'],2)==2)>1
            disp('loop guard')
            break
        end
        if ii>100
            [i, ii]
            break
        end
        ii=ii+1;
        [nss,mss]=line_([ns_(end),n_pp1],[ms_(end),m_pp1]);
        ns_=[ns_;round(nss')];
        ms_=[ms_;round(mss')];
        ks_(ii)=alp_1;
        kat_r=abs(alp_1-Lalp(ms_(end),ns_(end)));
        if kat_r>180; kat_r=180-kat_r; end
    end
        n_1(i,1:length(ns_))=ns_;
        m_1(i,1:length(ms_))=ms_;
        al_1(i,1:length(ks_))=ks_;
        for im=1:length(ns_)
            Lz(ms_(im),ns_(im))=Lz(ms_(im),ns_(im))+1;
            nma_(im)=Lg(ms_(im),ns_(im));
        end
    % remove weak points from the beginning of the component contour
    ns_s=ns_; ms_s=ms_; m_nma_=max(nma_(:));
    for bg=1:length(nma_)
        if nma_(bg)<(m_nma_*0.8)
            ns_s(1)=[]; ms_s(1)=[];
        else
            break
        end
    end
    plot(ns_s,ms_s,'r','LineWidth',3)
pause(0.0000001)
end

In most cases the intended contour shape can be obtained for one fixed MH×NH value. However, it may turn out necessary to use a hierarchical approach, in which the MH×NH size is successively reduced; thanks to this a higher precision of the proposed method is obtained and a weight (hierarchy) of importance of individual contours is introduced. Examples of results obtained for the algorithm in its final form are shown below.

Fig. 4-47. Image LG.

Fig. 4-48. Image Lα.

Fig. 4-49. Image Lz with determined contours marked red.

Fig. 4-50. Enlarged fragment of Lz image.

4.9.6. Properties of the Algorithm Proposed

The algorithm created is presented in a block diagram – Fig. 4-51.

Fig. 4-51. Block diagram of the proposed contour detection algorithm (and hence layers on an OCT eye image).

The properties of the proposed algorithm (Fig. 4-51) were assessed by evaluating the error δ of contour determination for changing parameters pr, Δα, MH×NH, pb and pj, in particular for pr∈(0, 0.1) and MH×NH∈(3, 35). An artificial image of a rectangular object located centrally in the scene (Fig. 4-52) has been used in the assessment.

Fig. 4-52. Artificial input image used for error assessment.

In turn, the error was defined as follows:

\delta = \frac{1}{J} \sum_{j} \left( |m_{i,j} - m_{w,j}| + |n_{i,j} - n_{w,j}| \right) \qquad (36)

\delta_{min} = \min_{j} \left( |m_{i,j} - m_{w,j}| + |n_{i,j} - n_{w,j}| \right) \qquad (37)

\delta_{max} = \max_{j} \left( |m_{i,j} - m_{w,j}| + |n_{i,j} - n_{w,j}| \right) \qquad (38)

where (mw,j, nw,j) are points of the reference (standard) contour, assuming that only one starting point, i.e. i=1, was randomly selected. The second part of the assessment concerns points of discontinuity relative to the standard contour.
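The same measures are computed in the assessment script further below; a minimal standalone sketch, assuming the determined contour points are gathered in matrices n_1, m_1 and the reference contour in vectors xw, yw, could be:

% hypothetical sketch of (36)-(38): for every reference contour point,
% take the city-block distance to the nearest determined contour point
blad_=zeros(1,length(xw));
for cd=1:length(xw)
    blad_(cd)=min(min(abs(n_1-xw(cd))+abs(m_1-yw(cd))));
end
delta=mean(blad_); delta_min=min(blad_); delta_max=max(blad_);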

Fig. 4-53 shows the graph of changes of the error δ and of its minimum δmin and maximum δmax values vs. MM×NM changing between 3 and 35. The algorithm intended for the properties analysis comprises the already presented source code (as a fundamental part) supplemented with fragments related to the specific nature of the object (Fig. 4-52) and measurements of its properties.

Fig. 4-53. Graph of error δ values changes and its minimum δmin and maximum δmax value vs. MM×NM.

MN_w=[];
for MN=3:34
    % artificial test object - a rectangle (Fig. 4-52) with reference
    % contour points (xw,yw) obtained from its eroded outline
    L1=zeros(100);
    L1(40:80,10:70)=1;
    [xw,yw]=meshgrid(1:size(L1,2),1:size(L1,1));
    L111=xor(L1,imerode(L1,ones(3)));
    xw(L111==0)=[];
    yw(L111==0)=[];
    L2=imnoise(L1,'gaussian',0.2);
    L3=medfilt2(L2,[3 3]);
    L4=mat2gray(L3);
         Nx1=8;
         Sigmax1=MN;
         Nx2=8;
         Sigmax2=MN;
         Theta1=pi/2;
         Ny1=8;
         Sigmay1=MN;
         Ny2=8;
         Sigmay2=MN;
         Theta2=0;
         alfa=0.15;
    hx=OCT_NOISE_gauss(Nx1,Sigmax1,Nx2,Sigmax2,Theta1);
    Lgx=conv2(L4,hx,'same');
    hy=OCT_NOISE_gauss(Ny1,Sigmay1,Ny2,Sigmay2,Theta2);
    Lgy=conv2(L4,hy,'same');
    alp=atan2(Lgy,Lgx);
    Lalp=alp*180/pi;
    Lg=mat2gray(abs(Lgx)+abs(Lgy));
    figure; imshow(L4,'notruesize');
    hold on
    Lrand=rand(size(Lg));
    [n,m]=meshgrid(1:size(Lrand,2),1:size(Lrand,1));
    n(Lrand>(Lg*0.02))=[];
    m(Lrand>(Lg*0.02))=[];
    plot(n,m,'b.');
    Lz=zeros(size(Lalp));
    delta_alph=50;
    Lz2=zeros(size(Lalp));
    H=ones(5);
    A=5;
    z_kat=80;
    [n,m]=OCT_NOISE_area(n,m,Lg,H);
    plot(n,m,'g.'); hold on
    n_1=[]; m_1=[];
    al_1=[];
    …
    …
         plot(xw,yw,'k*','LineWidth',3)
    nmabs_=[];
    for jjx=1:size(n_1,1)
    for jjy=1:size(n_1,2)
        if (m_1(jjx,jjy)+n_1(jjx,jjy))>0
      nmabs_(jjx,jjy)=Lg(m_1(jjx,jjy),n_1(jjx,jjy));
        end
    end
end
% error measures (36)-(38): distance of every reference contour point
% to the nearest determined contour point
blad_=[];
for cd=1:length(xw)
    blad_(cd)=min(min(abs(n_1-xw(cd))+abs(m_1-yw(cd))));
end
MN_w=[MN_w;[MN, sum(blad_)./length(blad_), min(blad_), max(blad_)]];
end
figure;
[AX1,H1,H2]=plotyy(MN_w(:,1),MN_w(:,2),MN_w(:,1),MN_w(:,4),'plot');
set(get(AX1(1),'Ylabel'),'String','\delta','FontSize',20,'Color','k')
set(get(AX1(2),'Ylabel'),'String','\delta_{min},\delta_{max}','FontSize',20,'Color','k')
set(H1,'LineStyle','-','Marker','s','LineWidth',2)
set(H2,'LineStyle','-','Marker','+')
set(AX1(2),'Ylim',[min(min(MN_w(:,3:4))),max(max(MN_w(:,3:4)))])
xlabel('M_MxN_M','FontSize',20)
grid on
hold on
[AX2,H1,H2]=plotyy(MN_w(:,1),MN_w(:,2),MN_w(:,1),MN_w(:,3),'plot');
set(H2,'LineStyle','-','Marker','v');
set(AX2(2),'Ylim',[min(min(MN_w(:,3:4))),max(max(MN_w(:,3:4)))]);
legend([AX1,AX2(2)],'\delta','\delta_{min}','\delta_{max}')

As can be seen (Fig. 4-53), the values of the δ error fall within the 0.5-0.7 range, which is a small value compared with the error arising during the algorithm operation for wide changes of the other parameters.

Fig. 4-54 shows the graph of changes of the error δ and of its minimum δmin and maximum δmax values vs. pr. As results from (29), the change of the threshold pr value is directly connected with the number of selected points. For pr=0.02 and higher values the number of randomly selected points is large enough that, starting from this value, their number may be assumed to have no significant influence on the error δ value. Fig. 4-55 shows the graph of changes of the error δ and of its minimum δmin and maximum δmax values vs. MH×NH. Both the choice of the points position correction area MH×NH and of the amplitude Ai,j, which in practical applications is constant for various 'i' and 'j', is a key element affecting the error and thereby the precision of contour reconstruction. As may be seen from Fig. 4-55, the value of δ versus MH×NH is relatively large for Ai,j=const=9 (for variable 'i' and 'j'), for which the computations were carried out. A strict relationship between the changes of error δ vs. MH×NH and Ai,j is visible in Fig. 4-56, and for the maximum value δmax in Fig. 4-57. Based on this it is possible to determine the relationship between MH=NH and Ai,j, i.e. Ai,j≈1.4·MH (in the graphs in Fig. 4-56 and Fig. 4-57, for the minimum error value one may read e.g. MH=NH=25 at Ai,j=35).

Fig. 4-54. Graph of error δ values changes and its minimum δmin and maximum δmax value vs. pr.

Fig. 4-55. Graph of error δ values changes and its minimum δmin and maximum δmax value vs. MH×NH.

Fig. 4-56. Graph of error δ values changes versus MH×NH and Ai,j.

Fig. 4-57. Graph of maximum error δmax values changes versus MH×NH and Ai,j.

From Fig. 4-56 and Fig. 4-57 it may be noticed that high error values occur for small MH×NH values and high Ai,j. This results from the fact that the consecutive points oi,j+1 are separated from oi,j by Ai,j, while their local position correction occurs within a small MH×NH range. At high Ai,j the rounding arising in the computation of the Lα value, formula (28), causes large deviations of the oi,j+1 points from the standard contour, which substantially affects the δ and δmax errors. Verification of these parameters may be implemented in a way similar to the previous source code; an attentive Reader will successfully introduce the necessary modifications in the appropriate places.

4.9.7. Assessment of Results Obtained from the Random Method

The method described gives correct results in contour determination (layers separation) both on OCT images and on others, for which classical methods of contour determination give no results or do not provide a continuous contour. The algorithm drawbacks include a high influence of noise on the results obtained. This results from relationship (29), where pixels of relatively high value caused by a disturbance increase the probability of selecting a starting point, and hence a component contour, in that place. The second drawback is the computation time, which grows with the number of selected points and depends on the conditions under which searching for the next points oi,j+1 is stopped (the limitations specified in section 4).

Fig. 4-58 below presents the enlarged results obtained for an example OCT image.

Fig. 4-58. Example of final enlarged result obtained for a real OCT image for pr=0.02, Δα=45°, MH×NH=35×35, pb=2, pj=0.8, Ai,j=25.

The algorithm presented may be further modified and parametrised, e.g. by changing Ai,j for various 'i' and 'j' according to a suggested criterion, or by considering the weights of individual oi,j points and taking them into account in the iteration stopping condition, etc.

4.10. Layers Recognition on Tomographic Eye Image Based on Canny Edge Detection

4.10.1. Canny Filtration

The input image Lgray is initially subject to filtration using a median filter with a mask h of size Mh×Nh=13×13. The obtained LMED image is subject to another filtration using a modified Canny filter, whose consecutive stages are presented in the next sections – as a reminder:

    [Lgray,map]=imread('D:\OCT\FOLDERS\2.OCT\SKAN7.bmp');
    Lgray=Lgray(1:850,:);
    Lgray=ind2gray(Lgray,map);
    Lgray=double(Lgray)/255;
    Lmed=medfilt2(Lgray,[13 13]);
    Lmed=mat2gray(Lmed);
    figure; imshow(Lmed)

The first stage of the edge detection method used [14], [35], [40], [41] consists of making a convolution of input image LMED [6], i.e.:

L_{GX}(m,n) = \sum_{m_h=-M_h/2}^{M_h/2} \; \sum_{n_h=-N_h/2}^{N_h/2} L_{MED}(m+m_h, n+n_h) \cdot h_x(m_h, n_h) \qquad (39)

L_{GY}(m,n) = \sum_{m_h=-M_h/2}^{M_h/2} \; \sum_{n_h=-N_h/2}^{N_h/2} L_{MED}(m+m_h, n+n_h) \cdot h_y(m_h, n_h) \qquad (40)

with the following Gauss filters masks, e.g. of dimensions 3 × 3 (Fig. 4-59, Fig. 4-60):

Fig. 4-59. Mask hx of filter for the ox axis.

Fig. 4-60. Mask hy of filter for the oy axis.

The matrix of gradient in both directions necessary to determine the edges has been determined in accordance with a classical dependence:

L_{GXY}(m,n) = \sqrt{L_{GX}(m,n)^2 + L_{GY}(m,n)^2} \qquad (41)

and pxy threshold:

p_{xy} = \varepsilon \cdot \left( \max_{m,n} \left( L_{GXY}(m,n) \right) - \min_{m,n} \left( L_{GXY}(m,n) \right) \right) + \min_{m,n} \left( L_{GXY}(m,n) \right) \qquad (42)

where ɛ is a coefficient selected within the range ɛ ∈ (0,1).

A practical implementation of this, initial, phase of algorithm should not give rise to any difficulties:

    Nx1=13;
    Sigmax1=2;
    Nx2=13;
    Sigmax2=2;
    Theta1=pi/2;
    Ny1=13;
    Sigmay1=4;
    Ny2=13;
    Sigmay2=4;
    Theta2=0;
    epsilon=0.15;
hx=OCT_NOISE_gauss(Nx1,Sigmax1,Nx2,Sigmax2,Theta1);
Lgx=conv2(Lmed,hx,'same');
Lgx(Lgx<0)=0;
figure; imshow(Lgx,[])
hy=OCT_NOISE_gauss(Ny1,Sigmay1,Ny2,Sigmay2,Theta2);
Lgy=conv2(Lmed,hy,'same');
Lgy(Lgy<0)=0;
figure; imshow(Lgy,[])
Lgxy=sqrt(Lgx.*Lgx+Lgy.*Lgy);
figure; imshow(Lgxy)
I_max=max(max(Lgxy));
I_min=min(min(Lgxy));
pxy=epsilon*(I_max-I_min)+I_min;
Lgxym=max(Lgxy,pxy.*ones(size(Lgxy)));
figure; imshow(Lgxym,[])

The obtained images are shown below (Fig. 4-61 - Fig. 4-64).

Fig. 4-61. Image LMED.

Fig. 4-62. Image LGX.

Fig. 4-63. Image LGY.

Fig. 4-64. Image LGXYM.

To obtain the final formula for LBIN_KR, the image containing the matrix of edges, it is necessary to define LGXYM, i.e.:

L_{GXYM}(m,n) = \begin{cases} p_{xy} & \text{if } L_{GXY}(m,n) < p_{xy} \\ L_{GXY}(m,n) & \text{if } L_{GXY}(m,n) \geq p_{xy} \end{cases} \qquad (43)

and (xi,yi) and (xj,yj) coordinates of ixy and jxy values, respectively, determined from the relationship

x_i = \cos(\alpha(m,n)) \quad \text{and} \quad x_j = -\cos(\alpha(m,n)) \qquad (44)

y_i = \sin(\alpha(m,n)) \quad \text{and} \quad y_j = -\sin(\alpha(m,n)) \qquad (45)

where angle α was determined for each pair of pixels LGX and LGY:

\alpha(m,n) = \mathrm{atan}\left( \frac{L_{GY}(m,n)}{L_{GX}(m,n)} \right) \qquad (46)

and then the ixy and jxy values, which assume grey levels according to values interpolated on the plane determined from the 3 × 3 neighbourhood LGXYM(m±Δm, n±Δn), where Δm and Δn are equal to 1 (Fig. 4-65, Fig. 4-66).

Fig. 4-65. Graphic interpretation of ixy and jxy points location in a fragment of LGXYM(m±1, n±1) image.

Fig. 4-66. Input image LMED and white pixels of LBIN_KR image.

Hence the output image of edges determined using the Canny method LBIN_KR is equal to:

L_{BIN\_KR}(m,n) = \begin{cases} 0 & \text{if } L_{GXYM}(m,n) \leq p_{xy} \\ 1 & \text{if } \left( L_{GXYM}(m,n) > p_{xy} \right) \wedge \left( L_{GXYM}(m,n) > i_{xy}(m,n) \right) \wedge \left( L_{GXYM}(m,n) > j_{xy}(m,n) \right) \\ 0 & \text{if } \left( L_{GXYM}(m,n) > p_{xy} \right) \wedge \left[ \left( L_{GXYM}(m,n) \leq i_{xy}(m,n) \right) \vee \left( L_{GXYM}(m,n) \leq j_{xy}(m,n) \right) \right] \end{cases} \qquad (47)

An example OCT image generated for ε = 0.15 is shown in Fig. 4-66, where for a better assessment of the results obtained the white pixels of the LBIN_KR image have been superimposed on LMED. The source code of this part is given below:

[M,N]=size(Lgxym);
Lkr=zeros(size(Lgxym));
for m=2:M-1,
for n=2:N-1,
    if Lgxym(m,n) > pxy,
        X=[-1,0,+1;-1,0,+1;-1,0,+1];
        Y=[-1,-1,-1;0,0,0;+1,+1,+1];
        Z=[Lgxym(m-1,n-1),Lgxym(m-1,n),Lgxym(m-1,n+1);Lgxym(m,n-1),Lgxym(m,n),Lgxym(m,n+1);Lgxym(m+1,n-1),Lgxym(m+1,n),Lgxym(m+1,n+1)];
        alp=atan2(Lgy(m,n),Lgx(m,n));
        ss=sin(alp);
        cc=cos(alp);
        XI=[cc,-cc];
        YI=[ss,-ss];
        ZI=interp2(X,Y,Z,XI,YI);   % ixy and jxy - values interpolated along the gradient
        if Lgxym(m,n) >= ZI(1) & Lgxym(m,n) >= ZI(2)
            Lkr(m,n)=I_max;        % local maximum along the gradient - edge point
        else
            Lkr(m,n)=I_min;
        end
    else
        Lkr(m,n)=I_min;
    end
end
end
figure; imshow(Lkr,[]);
Lbin_kr=Lkr>0;
figure; imshow(Lbin_kr)

The results obtained are presented in Fig. 4-67, Fig. 4-68.

Fig. 4-67. Image LKR.

Fig. 4-68. Image LBIN_KR imposed on image LMED.

The LBIN_KR image further on provides the basis for the next steps of the algorithm operation.

4.10.2. Features of Line Edge

For the LBIN_KR image a labelling operation has been carried out, where each cluster (of values ‘1’) has its label et = 1, 2,…,Et-1, Et.

Lind=bwlabel(Lbin_kr);
figure; imshow(Lind,[]); colormap('jet'); colorbar

Then for each label et a dilatation operation is performed for a rectangular structural element SEd of dimension 5 × 1 oriented acc. to the value of angle α(m,n), where the origin of coordinates was placed in its first row [26]. The obtained LIND image in pseudocolours is shown in Fig. 4-69.

Fig. 4-69. Image LIND in pseudocolours (label 178).
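The operation itself is implemented pixel by pixel in the source code below; as an alternative, a minimal per-label sketch, assuming the Image Processing Toolbox and a single dominant angle per label, could use a line structural element:

% hypothetical sketch: directional dilatation of one labelled edge with a
% line structural element oriented along the mean gradient angle
et=1;                                             % example label
Let=(Lind==et);
alp=atan2(mean(Lgy(Let)),mean(Lgx(Let)))*180/pi;  % mean angle in degrees
SEd=strel('line',5,alp);                          % oriented line element
Let_dil=imdilate(Let,SEd);
figure; imshow(Let_dil)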

Fig. 4-70 shows weight values for the first few labels of the LIND image (Fig. 4-69), i.e. of the binary images Let, where Pet is the surface of the object with label et and Iet is the average value of its grey level, i.e.:

Fig. 4-70. Table of weights with examples of values for objects with first labels et.

P_{et} = \sum_{m=1}^{M} \sum_{n=1}^{N} L_{et}(m,n) \qquad (48)

I_{et} = \frac{1}{M \cdot N} \sum_{m=1}^{M} \sum_{n=1}^{N} \left( L_{et}(m,n) \cdot L_{MED}(m,n) \right) \qquad (49)

The determined Pet and Iet values will later be used as features during the final analysis of edge lines. These values have been written in order in the data variable in the following source code:

data=[]; xd=[]; xdpk=[]; yd=[]; ydpk=[];
Let_=zeros(size(Lind));
for et=1:max(Lind(:))
Let=(Lind==et);
[xx_,yy_]=meshgrid(1:size(Let,2),1:size(Let,1));
xx_(Let==0)=[];
yy_(Let==0)=[];
xd(et,1:length(xx_))=xx_;
yd(et,1:length(yy_))=yy_;
xdpk(et,1:2)=[xx_(1),xx_(end)];   % first and last point of the edge
ydpk(et,1:2)=[yy_(1),yy_(end)];
Let2=Let;
Let3=Let;
for i=8:(size(Let,1)-8)
    for j=8:(size(Let,2)-8)
        p=Let(i,j);
        if p>0;
            alp=atan2(Lgy(i,j),Lgx(i,j));
            ss=sin(alp);
            cc=cos(alp);
            % dilatation on one side of the edge (along the gradient)
            Let2(round(i+ss),round(j+cc))=p;
            Let2(round(i+2*ss),round(j+2*cc))=p;
            Let2(round(i+3*ss),round(j+3*cc))=p;
            Let2(round(i+4*ss),round(j+4*cc))=p;
            Let2(round(i+5*ss),round(j+5*cc))=p;
            Let2(round(i+6*ss),round(j+6*cc))=p;
            Let2(round(i+7*ss),round(j+7*cc))=p;
            % dilatation on the other side of the edge
            Let3(round(i-ss),round(j-cc))=p;
            Let3(round(i-2*ss),round(j-2*cc))=p;
            Let3(round(i-3*ss),round(j-3*cc))=p;
            Let3(round(i-4*ss),round(j-4*cc))=p;
            Let3(round(i-5*ss),round(j-5*cc))=p;
            Let3(round(i-6*ss),round(j-6*cc))=p;
            Let3(round(i-7*ss),round(j-7*cc))=p;
        end
    end
end
Let_((Let2+Let3)>0)=et;
data(et,1)=et;
data(et,2)=sum(sum(Let));          % Pet - surface of the object
Lmed_1=Let2.*Lmed; Lmed_1(Let2==0)=[];
Lmed_2=Let3.*Lmed; Lmed_2(Let3==0)=[];
Lmed_3=Let.*Lmed;  Lmed_3(Let==0)=[];
data(et,4)=mean(Lmed_1)-mean(Lmed_2);
data(et,3)=mean(Lmed_3);           % Iet - average grey level on the edge
end
figure; imshow(Let_,[]); colormap('jet'); colorbar
figure; imshow(Let_,[]); colormap(‘jet’); colorbar

Matrices Let2 and Let3 have been used in the above source code, being the result of dilatation on either side of the analysed pixel of the et area. In addition, the coordinates of the beginning and of the end of the analysed et area have been written in variables xdpk and ydpk. This data will be necessary at a further stage, when individual contour fragments are connected.

4.10.3. Contour Line Correction

Each solid edge line visible in the Let image for labels et=1,2,…,Et-1,Et is transformed into vectors xet and yet of points' coordinates in a Cartesian coordinate system. The contour line correction method is applied to 'elongate' each edge in both directions. To this end, for the first two pairs of coordinates of the first edge, (x1(1), y1(1)) and (x1(2), y1(2)), as well as for the last two, (x1(end-1), y1(end-1)) and (x1(end), y1(end)), a straight line passing through those points is determined (end means the last element), in accordance with the demonstrative illustration below (Fig. 4-71):

Fig. 4-71. Graphic interpretation of the contour correction method to determine consecutive points starting from the position of points (x1(end-1), y1(end-1)) and (x1(end), y1(end)) for a new point (pixel) to be determined (x1,k(1),y1,k(1)). To simplify, the angle of inclination of end points of the edges has been set as β=0°.

Fig. 4-71 presents the idea of the contour correction method, where starting from the position of points (x1(end-1), y1(end-1)) and (x1(end), y1(end)) the straight line passing through them is determined with a slope β1, i.e.:

\beta_1(x_1(\mathrm{end}), y_1(\mathrm{end})) = \mathrm{atan}\left( \frac{y_1(\mathrm{end}) - y_1(\mathrm{end}-1)}{x_1(\mathrm{end}) - x_1(\mathrm{end}-1)} \right) \qquad (50)

and at the distance Δxy the position of a new point (x1,k(1), y1,k(1)) is determined for its various potential positions (within the angle range β1(1)±α, every Δα). The right position of a contour point, obtained by adding consecutive points to the existing edge, is selected based on the analysis of the mean values of the eu1(xu, yu, α, 1) and ed1(xd, yd, α, 1) areas of Me×Ne size. The difference ΔS is determined for each position of point (x1,k(1), y1,k(1)):

\Delta S(1,\alpha) = \frac{1}{M_e \cdot N_e} \left( \sum_{y_u=1}^{M_e} \sum_{x_u=1}^{N_e} e_u(x_u,y_u,\alpha,1) \cdot h_u(x_u,y_u) - \sum_{y_d=1}^{M_e} \sum_{x_d=1}^{N_e} e_d(x_d,y_d,\alpha,1) \cdot h_d(x_d,y_d) \right) \qquad (51)

where:

xu, yu – coordinates of consecutive elements of matrices eu and hu situated at the top relative to the analysed point (x1,k(1), y1,k(1)), for which xu∈{1,2,…,Ne−1,Ne} and yu∈{1,2,…,Me−1,Me}

xd, yd – coordinates of consecutive elements of matrices ed and hd situated at the bottom relative to the analysed point (x1,k(1), y1,k(1)), for which xd∈{1,2,…,Ne−1,Ne} and yd∈{1,2,…,Me−1,Me}

and hu and hd are masks, shown for Me×Ne=3×2 in Fig. 4-72 and Fig. 4-73.

Fig. 4-72. Mask hu for Me×Ne=3×2.

Fig. 4-73. Mask hd for Me×Ne=3×2.

The areas (matrices) eu and ed of Me×Ne size are created based on angle β and α every Δα in the following way:

e_{u1}(x_u,y_u,\alpha,1) = L_{MED}\left( y_{1,k}(1) + y_u \cdot \cos(\beta_1(1)+\alpha+90°),\; x_{1,k}(1) + x_u \cdot \sin(\beta_1(1)+\alpha+90°) \right) \qquad (52)

e_{d1}(x_d,y_d,\alpha,1) = L_{MED}\left( y_{1,k}(1) - y_d \cdot \cos(\beta_1(1)+\alpha+90°),\; x_{1,k}(1) - x_d \cdot \sin(\beta_1(1)+\alpha+90°) \right) \qquad (53)

where xu∈{1,2,3,…,Ne−1,Ne} and xd∈{1,2,3,…,Ne−1,Ne}, and β1(1), in general β1(v1), is:

\beta_1(v_1) = \mathrm{atan}\left( \frac{y_{1,k}(v_1) - y_{1,k}(v_1-1)}{x_{1,k}(v_1) - x_{1,k}(v_1-1)} \right) \qquad (54)

for v1∈{2,3,… V1-1,V1}, V1 – a total number of points of contour correction implemented for line 1 of the contour.

The angle, for which there is the best fit of the analysed point (x1,k(v1), y1,k(v1)), is calculated as α* for which ΔS(v1,α) reaches a maximum or minimum depending on the position and brightness of the analysed object.

\Delta S(v_1, \alpha^*) = \max_{\alpha} \left( \Delta S(v_1, \alpha) \right) \qquad (55)

The number of consecutive points determined for increasing v1 must be limited. This bound is set by the threshold pr imposed on the minimum value ΔS(v1, α*).

The suggested method of contour correction has very interesting properties. Parameters of this part of algorithm include:

  • α - the angle, within which the best fit is sought with regard to the given criterion,
  • Δα - accuracy, with which the best fit is sought,
  • Δxy - the distance between the current and the next sought point of the active contour,
  • Me - height of analysed area eu and ed,
  • Ne - width of analysed area eu and ed.

The function constructed on this basis is presented below.

function [x_out,y_out,wagi,iter]=OCT_COR_LINE(Lmed,x_in,y_in,udxy_,mene,alpha,iter_,pr,dxy)
wagi=[];
xi=x_in(end); yi=y_in(end);
% slope of the straight line through the last two edge points
beta=atan2((y_in(end)-y_in(end-1)),(x_in(end)-x_in(end-1)));
x_out=xi; y_out=yi;
for iter=1:iter_
    eu=[]; ed=[]; deltaS=[];
    for alpha_=-alpha:alpha
        for udxy=0:udxy_
            yi_=yi+udxy*sin(beta+alpha_*pi/180);
            xi_=xi+udxy*cos(beta+alpha_*pi/180);
            al_be=beta+(alpha_+90)*pi/180;
            ss=sin(al_be);
            cc=cos(al_be);
            % area eu above the candidate point
            for mene_=1:mene
                yy=round(yi_+mene_*ss);
                xx=round(xi_+mene_*cc);
                if (yy>1)&(yy<=size(Lmed,1))&(xx>1)&(xx<=size(Lmed,2))
                    eu(udxy+1,mene_)=Lmed(yy,xx)/mene_;
                else
                    eu(udxy+1,mene_)=0;
                end
            end
            % area ed below the candidate point
            for mene_=1:mene
                yy=round(yi_-mene_*ss);
                xx=round(xi_-mene_*cc);
                if (yy>1)&(yy<=size(Lmed,1))&(xx>1)&(xx<=size(Lmed,2))
                    ed(udxy+1,mene_)=Lmed(yy,xx)/mene_;
                else
                    ed(udxy+1,mene_)=1;
                end
            end
        end
        deltaS=[deltaS;[alpha_,mean(ed(:))-mean(eu(:))]];
    end
    deltaS=sortrows(deltaS,2);
    if deltaS(1,2)>pr
        break
    end
    wagi(iter)=deltaS(1,2);
    al_be_=beta+deltaS(1,1)*pi/180;   % best-fit direction
    yi=yi+dxy*sin(al_be_);
    xi=xi+dxy*cos(al_be_);
    beta=al_be_;
    xyxy=[x_out',y_out'];
    % loop guard - stop if the new point repeats an already visited one
    if sum(((round(xyxy(:,1))==round(xi)) + (round(xyxy(:,2))==round(yi)))==2)>=2
        break
    end
    x_out=[x_out,xi];
    y_out=[y_out,yi];
end
end

Fig. 4-74 - Fig. 4-77 below present the results obtained for an artificial image of a square for the aforementioned parameters α, Δα, Δxy, Me, Ne changed within the ranges α∈{1,2,3,…,19,20}, Δxy=Ne∈{1,2,3,…,19,20}, Me∈{1,2,3,…,19,20}, for Δα=1 and pr=-0.001. The number of iterations was limited to 50.

Fig. 4-74. Artificial image and fragment of contour correction action for α=40, Δα=1, Me=10, Δxy=Ne changed within the range (1,20).

Fig. 4-75. Artificial image and fragment of contour correction action for α=40, Δα=1, Δxy=Ne=4, Me changed within the range (1,20).

Fig. 4-76. Artificial image and fragment of contour correction action for α=40, Me=10, Δxy=Ne=10, Δα changed within the range (1,20).

Fig. 4-77. Artificial image and fragment of contour correction action for α=45, Me=10, Ne=10, Δα=1, and Δxy changed within the range (1,20).

The presented contour correction method has the following properties:

  • α - angle defining the range sought in the sense of degree of object edge curvature,
  • Δα - accuracy, with which the degree of edge curvature is sought,
  • Δxy- distance between the current and next sought point affecting the extent of generalisation and approximation of intermediate values (placed between points),
  • Me - height of analysed area affecting the algorithm capability to find objects of higher level of detail,
  • Ne - width of analysed area averaging the contour sought along edges.

The experiments and algorithm parameters measurements presented (Fig. 4-74 - Fig. 4-77) can be easily followed using a short source code:

Lmed=zeros(300); Lmed(200:250,100:250)=1;
Lmed=conv2(Lmed,ones(19))./sum(sum(ones(19))); Lmed(220:end,:)=0;
figure; imshow(Lmed)
x_in=[100,101]; y_in=[200,200];
hold on; plot(x_in,y_in,'*g-')
map=jet(20);
udxy_=4;
iter_=70;
pr=-0.0001;
dxy=4;
alpha=45;
for mene=1:20
    [x_out,y_out,wagi,iter]=OCT_COR_LINE(Lmed,x_in,y_in,udxy_,mene,alpha,iter_,pr,dxy);
    hold on; plot(x_out,y_out,'*-','color',map(mene,:))
    axis([222 275 186 236])
    pause(0.05)
end

We encourage Readers to independently change the x_in, y_in, udxy_, mene, alpha, iter_, pr, dxy values and to verify experimentally these parameters' influence on the obtained result.

4.10.4. Final Analysis of Contour Line

The obtained individual edge lines et and the corresponding values Iet and Pet (average brightness and surface) have been adjusted. Those edges have been removed for which Iet < pr·max(Iet) or Pet < pr·max(Pet), with the maxima taken over et∈{1,2,3,…,Et} and the threshold pr arbitrarily set to 0.2 (20%); a sketch of this removal step is given below, after the list. For the other edges ek, which have not been removed, the adjustment was made by applying the active contour method at their ends. The values of the active contour parameters were taken as α=45, Δα=1, Δxy=1, Me=11, Ne=11. Iterations of the active contour method for individual ek edges were interrupted when one of the following situations occurred:

  • the acceptable iterations number was exceeded – set arbitrarily at 1000,
  • for that point the condition ΔS(vek, α*)<ps has not been met, where ps was set at -0.02,
  • at least two points have the same coordinates – this prevents looping of the algorithm.
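As a minimal sketch of the removal step, assuming the data matrix filled in the earlier listing (column 2 – surface Pet, column 3 – mean brightness Iet) and the label image Lind:

% hypothetical sketch: discard labelled edges whose surface or mean
% brightness falls below 20% of the respective maximum over all labels
pr=0.2;
weak=(data(:,2)<pr*max(data(:,2)))|(data(:,3)<pr*max(data(:,3)));
for et=1:size(data,1)
    if weak(et)
        Lind(Lind==et)=0;    % drop the whole labelled cluster
    end
end
figure; imshow(Lind>0)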

Results obtained for parameters determined this way are presented below (Fig. 4-78, Fig. 4-79).

Fig. 4-78. Action of modified active contour on a real image for α=40, Δα=1, Δxy=Ne=11, Me=10. The green line marks the contour obtained from the Canny method, the red line marks consecutive points of the active contour method.

Fig. 4-79. Action of modified active contour after the described correction on a real image for α=40, Δα=1, Δxy=Ne=11, Me=11.

As shown in the figures above (Fig. 4-78, Fig. 4-79), the suggested method correctly detects individual layers on an OCT eye image. Further stages planned in the continuation of this approach are related to a deeper analysis of the algorithm in terms of parameters selection. The discussed algorithm fragment looks as follows:

figure; imshow(Lmed,[]); hold on
hh=waitbar(0,'Please wait…');
for et=1:max(Lind(:))
    Let=(Lind==et);
    [x_in,y_in]=meshgrid(1:size(Let,2),1:size(Let,1));
    x_in(Let==0)=[];
    y_in(Let==0)=[];
    mene=15;
    udxy_=10;
    alpha=45;
    dxy=1;
    pr=-0.01;
    if length(x_in)>5
        [x_out,y_out,wagi,iter]=OCT_COR_LINE(Lmed,x_in,y_in,udxy_,mene,alpha,1275,pr,dxy);
        hold on;
        plot(x_out,y_out,'w.')
        pause(0.1)
    end
    waitbar(et/max(Lind(:)))
end
close(hh)

We encourage the Reader again to modify the parameters of function OCT_COR_LINE, both to obtain proper results and to learn the function's capabilities. A few artefacts resulting from improper selection of OCT_COR_LINE function parameters are presented below.

Fig. 4-80. Examples of artefacts resulting from improper selection of function OCT_COR_LINE parameters.

The presented combination of the Canny edge detection algorithm with the modified active contour algorithm is applied to the detection of external limiting membranes on tomographic OCT eye images. The proposed method may also be used for segmentation of images with contents other than presented, provided that the values of the parameters mentioned are modified [23]. Despite the satisfactory results presented above, there is a fairly large area for research related to modification of the presented algorithm in terms of operation time optimisation. The time of analysis, in this as well as in many other image analysis applications, is of crucial importance in practical use. In terms of functionality, implementation difficulty and speed of operation, this method may be classified as an average one.

4.11. Hierarchical Approach in the Analysis of Tomographic Eye Image

4.11.1. Image Decomposition

Images originating from a Copernicus tomograph, due to its specific nature of operation, are obtained in sequences of a few to a few dozen 2D images within approx. 1 s, which provide the basis for 3D reconstruction [42]. Because of their number, the analysis of a single 2D image should proceed within a time not exceeding 10-50 ms, so that the time of the operator's waiting for the result is not onerous (as can easily be calculated, for a few dozen images of resolution usually M × N = 740 × 820 in a sequence, this time will then be shorter than 1 s).

At the stage of image preprocessing the input image LGRAY is initially subject to filtration using a median filter with mask h of size Mh×Nh=3×3 (in the final software version this mask may be set to Mh×Nh=5×5 to obtain a better precision of the algorithm operation for a certain specified group of images), i.e.:

[Lgray,map]=imread('D:\OCT\FOLDERS\2.OCT\SKAN7.bmp');
Lgray=Lgray(1:850,:);
Lgray=ind2gray(Lgray,map);
Lgray=double(Lgray)/255;
Lm=medfilt2(Lgray,[3 3]);
Lm=mat2gray(Lm);
figure; imshow(Lm)

Image LM obtained this way is subject consecutively to decomposition to an image of lower resolution and analysed in terms of layers detection.

As an assumption, different from those presented in the previous algorithm sections, the algorithm described should provide satisfactory results mainly from the point of view of the operation speed criterion. Although the methods (algorithms) described feature high precision of computations, they are not fast enough (it is difficult to complete the analysis of a single 2D image on a PII 1.33 GHz processor in a time not exceeding 10 ms). Therefore a reduction of the LM image resolution by approximately a half was proposed, to such a number of pixels in rows and columns which is a power of '2', i.e. M×N=256×512 (LM2), applying further on its decomposition to image LD16 (where symbol 'D' means decomposition, while '16' the size of the block for which it was obtained), i.e.:

d=16;
fun = @(x) median(x(:));
Ld16 = blkproc(Lm,[d d],fun);

Each pixel of the input image after decomposition has a value equal to a median of the area (block) of 16×16 size of the input image, acc. to Fig. 4-81.

Fig. 4-81. Blocks arrangement on the LM image.

An example result LD16 is presented in Fig. 4-82. The LD16 image is then subject to determination of the position of the maximum-value pixel in each column, i.e.:

Fig. 4-82. OCT image after decomposition – LD16.

L_{DM16}(m,n) = \begin{cases} 1 & \text{if } L_{D16}(m,n) \geq \max_{m} \left( L_{D16}(m,n) \right) \\ 0 & \text{otherwise} \end{cases} \qquad (56)

where

  • m – means a row numbered from one,
  • n – means a column numbered from one.

The appropriate record in Matlab looks as follows:

Ldm16=Ld16==(ones([size(Ld16,1),1])*max(Ld16));
figure; imshow(Ldm16,'notruesize')

Using the described method of thresholding at the maximum value in columns, in 99 percent of cases only one maximum value per column is obtained (Fig. 4-83).

Fig. 4-83. Example of LDM16 image.

To determine precisely the position of the NFL and RPE boundaries (Fig. 4-82) it turned out necessary to use one more image, LDB16, i.e.:

L_{DB16}(m,n) = \begin{cases} 1 & \text{if } |L_{D16}(m,n) - L_{D16}(m+1,n)| > p_r \\ 0 & \text{otherwise} \end{cases} \qquad (57)

for m∈(1,M-1), n∈(1,N), where pr – the threshold assumed within the range (0, 0.2).

A record in Matlab looks as follows:

Ldb16_=zeros(size(Ld16));
for n=1:size(Ld16,2)
        Ldb16_(1:end-1,n)=diff(Ld16(:,n));
end
pr=0.1;
Ldb16=Ldb16_>pr;
figure; imshow(Ldb16,'notruesize')

As a result, the coordinates of the NFL and RPE boundary points are obtained as those positions of '1' values in the LDB16 image for which yNFL ≤ yRPE; yRPE is obtained from the LDB16 image in the same way.
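A minimal sketch of this readout, assuming the Ldb16 image from the listing above, might take the first and last '1' in every column as the NFL and RPE candidates:

% hypothetical readout: first and last '1' in each column of Ldb16
ynfl=zeros(1,size(Ldb16,2)); yrpe=zeros(1,size(Ldb16,2));
for n=1:size(Ldb16,2)
    idx=find(Ldb16(:,n));          % rows with a detected jump
    if ~isempty(idx)
        ynfl(n)=idx(1);            % uppermost jump - NFL candidate
        yrpe(n)=idx(end);          % lowermost jump - RPE candidate
    end
end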

This method, for a pr threshold selected at the level of 0.01, gives satisfactory results in around 70 percent of cases of uncomplicated images (i.e. images without a visible pathology). Unfortunately, for the other 30 percent of cases the selection of the pr threshold within the adopted limits does not reduce the resulting errors (Fig. 4-84).

Fig. 4-84. Example of LDB16 image.

The correction of erroneous NFL and RPE layer recognitions at this level is important because, in the hierarchical approach presented below, these errors would otherwise be duplicated in the subsequent, more precise approximations.

4.11.2. Correction of Erroneous Recognitions

In the LDB16 image (Fig. 4-84) white pixels are visible in excess for most columns. The two largest objects, arranged along the column 'maxima', entirely coincide with the positions of the NFL and RPE limits. Based on that, and having carried out the above analysis for several hundred images, the following limitations were adopted:

  • for coordinates yRPE found on the LDM16 image there must be at the same time LDM16(m,n)=1; in other cases this point is considered a disturbance or a point of the Gw(n) layer,
  • if only one pixel of value '1' occurs on images LDM16 and LDB16 for the same position, i.e. for the analysed n there is LDM16(m,n) = LDB16(m,n), the history is analysed for n>1 and it is checked whether |yNFL(n-1)-yNFL(n)| > |yRPE(n-1)-yRPE(n)|, i.e.:
    R_p(n) = \begin{cases} m & \text{if } L_{DB16}(m,n) = L_{DM16}(m,n) = 1 \;\wedge\; |y_{NFL}(n-1) - y_{NFL}(n)| > |y_{RPE}(n-1) - y_{RPE}(n)| \\ 0 & \text{otherwise} \end{cases} \qquad (58)
    for m∈(1,M), n∈(2,N)
  • if |yNFL(n-1)-yNFL(n)| ≤ |yRPE(n-1)-yRPE(n)|, the condition yNFL(n-1)-yNFL(n)=±1 is checked (thereby ignoring fluctuations against history n-1 within the range ±1 of area A (Fig. 4-81)). If it holds, this point is the next yNFL(n) point. In the other cases the point is considered a disturbance. It is assumed that the lines coincide, yNFL(n)=yRPE(n), if yRPE(n-1)-yRPE(n)=±1 and only one pixel of value '1' occurs on the LDM16 image.
  • in the case of occurrence in a specific column of more than 2 pixels, i.e. if Σm LDB16(m,n) > 2, a pair yNFL(n), yRPE(n) is matched (if it occurs) to yNFL(n-1), yRPE(n-1) so that the differences |yNFL(n-1)-yNFL(n)| and |yRPE(n-1)-yRPE(n)| remain within ±1. In this case it may happen that lines yNFL(n) and yRPE(n) will coincide. However, in the case of finding more than one solution, that one is adopted for which LD16(yNFL(n),n)+LD16(yRPE(n),n) assumes the maximum value (the maximum sum of weights in LD16 occurs).

For the above class of images the presented correction is effective in around 99% of cases. Despite the adopted limitations the method gives erroneous results for the initial value n=1, and unfortunately these errors continue to be duplicated.

Unfortunately, the adopted, relatively rigid conditions on the acceptable differences |yNFL(n-1)-yNFL(n)| or |yRPE(n-1)-yRPE(n)| cause large errors for another class of tomographic images, those on which a pathology occurs in any form (Fig. 4-86).

Fig. 4-86. Examples of LDB16 images for pr=0.01 with incorrectly marked yNFL(n), yRPE(n) points (layers).

As may be seen in Fig. 4-85 and Fig. 4-86, problems occur not only for the initial n values, but also for the remaining points. The reason for erroneous recognition of layer positions is the difficulty in distinguishing the proper layers in the case of discovering three 'lines', i.e. three points in a specific column whose positions change within the acceptable range for individual n.

Fig. 4-85. Examples of LDB16 images for pr=0.01 with incorrectly marked yNFL(n), yRPE(n) points (layers).

These errors cannot be eliminated at this stage of decomposition into 16×16 pixels areas (or 32×32 image resolution). They will be the subject of further considerations in the next sections.

The present form of the algorithm is slightly extended compared to the description presented above, which results from the necessity to introduce numerous limitations and algorithm blocks. As the blocks mentioned are not technically related to the OCT image analysis, they will not be discussed here in detail. However, we encourage the Reader to follow this, apparently complicated, algorithm.

pr=0.005;
[mss,nss,waga_p,L5,L6]=HIERARHICALL_STEP(Lm,fun,d,pr);
fg=figure; imshow(Lm); hold on
plot(nss'*d-d/2,mss'*d-d/2,'-*')

where function HIERARHICALL_STEP is:

function [ynfl_rpe,xnfl_rpe,waga_p,Ld16d,Ldb16z]=HIERARHICALL_STEP(Lm,fun,d,pr)
ynfl_rpe=[]; xnfl_rpe=[]; waga_p=[];
Ld16 = blkproc(Lm,[d d],fun);
fun2 = @(x) max(x(:));
Ld16__= blkproc(Lm,[d d],fun2);
Ld16__=[Ld16__(2:end,:);Ld16__(end,:)];
Ldm16=Ld16__==ones([size(Ld16__,1),1])*max(Ld16__);
for n=1:size(Ld16,2); Ld16(:,n)=mat2gray(Ld16(:,n)); end
Ld16d=zeros(size(Ld16));
for n=1:size(Ld16,2)
    Ld16d(1:end-1,n)=diff(Ld16(:,n)).*Ld16(2:end,n);
end
Ldm16=zeros(size(Ld16d));
for n=1:size(Ld16d,2)
    Ldm16(1:end,n)=Ld16d(1:end,n)==max(Ld16d(1:end,n));
end
Ldb16=Ld16d>pr;
Ldb16=bwmorph(Ldb16,'clean');
figure; imshow(Ldb16,[],'notruesize'); hold on
Ldb16_lab=bwlabel(Ldb16);
Ldb16z=zeros(size(Ldb16_lab));
% keep only those clusters which contain a column maximum
for et=1:max(Ldb16_lab(:))
    Ldb16i=(Ldb16_lab==et);
    if sum(sum(Ldb16i.*Ldm16))>0
        Ldb16z=Ldb16z|Ldb16i;
    end
end
Ldb16z=bwmorph(Ldb16z,'clean');
Ldb16_lab2=bwlabel(Ldb16);
L77=zeros(size(Ldb16z));
for iw=1:size(Ldb16z,2)
    L77(:,iw)=bwlabel(Ldb16z(:,iw));
end
if (max(L77(:))<2)&(max(Ldb16_lab2(:))==2)
    Ldb16z=Ldb16;
end
ynfl_rpe=[]; xnfl_rpe=[];
for iu=1:size(Ld16d,2)
if sum(Ldb16z(:,iu))>0
        Ldb16z_lab=bwlabel(Ldb16z(:,iu)|Ldm16(:,iu));
        if max(Ldb16z_lab(:))<=2
            Ldb16z_nr=1:size(Ld16d,1);
            Ldb16z_nr(Ldb16z(:,iu)==0)=[];
            Ld16d_nr=1:size(Ld16d,1);
            Ld16d_nr(Ldb16(:,iu)==0)=[];
            if Ld16d_nr(1)==Ldb16z_nr(end)
                if size(ynfl_rpe,2)>0
                    if min(abs(ynfl_rpe(:,end)-Ldb16z_nr))<=2
                        if abs(ynfl_rpe(1,end)-Ld16d_nr(1))<abs(ynfl_rpe(2,end)-Ld16d_nr(1))
                            ynfl_rpe=[ynfl_rpe,[Ld16d_nr(1);ynfl_rpe(2,end)]];
                            xnfl_rpe=[xnfl_rpe,[iu;iu]];
                        else
                            ynfl_rpe=[ynfl_rpe,[Ld16d_nr(1);Ldb16z_nr(end)]];
                            xnfl_rpe=[xnfl_rpe,[iu;iu]];
                        end
                    end
                else
                    ynfl_rpe=[ynfl_rpe,[Ld16d_nr(1);Ldb16z_nr(end)]];
                    xnfl_rpe=[xnfl_rpe,[iu;iu]];
                end
            else
                ynfl_rpe=[ynfl_rpe,[Ld16d_nr(1);Ldb16z_nr(end)]];
                xnfl_rpe=[xnfl_rpe,[iu;iu]];
            end
        else
            et_Ldb16=[];
            for et=1:max(Ldb16z_lab)
                et_Ldb16=[et_Ldb16;[et,max((Ldb16z_lab==et).*Ld16__(:,iu))]];
            end
            et_Ldb16=sortrows(et_Ldb16,-2);
            if et_Ldb16(2,2)*8>et_Ldb16(1,2)
                if size(ynfl_rpe,2)>0
                    Ld16d_nr2=1:size(Ld16d,1);
                    Ld16d_nr2(Ldb16z_lab~=et_Ldb16(1,1))=[];
                    if abs(ynfl_rpe(2,end)-Ld16d_nr2)<abs(ynfl_rpe(1,end)-Ld16d_nr2)
                        et_Ldb16(et_Ldb16(:,1)>et_Ldb16(1,1),:)=[];
                        et_Ldb16=sortrows(et_Ldb16,-2);
                    else
                        et_Ldb16=sortrows(et_Ldb16,-2);
                    end
                end
            end
            et_Ldb16(3:end,:)=[];
            et_Ldb16=sortrows(et_Ldb16,1);
            Ldb16z_nr=1:size(Ld16d,1);
            if size(et_Ldb16,1)==1
                Ldb16z_nr(Ldb16z_lab~=et_Ldb16(1,1))=[];
            else
                Ldb16z_nr(Ldb16z_lab~=et_Ldb16(2,1))=[];
            end
            Ld16d_nr=1:size(Ld16d,1);
            Ld16d_nr(Ldb16z_lab~=et_Ldb16(1,1))=[];
            ynfl_rpe=[ynfl_rpe,[Ld16d_nr(1);Ldb16z_nr(end)]];
            xnfl_rpe=[xnfl_rpe,[iu;iu]];
        end
end
end

4.11.3. Reducing the Decomposition Area

Increasing the accuracy, and thereby reducing the Am,n area size (Fig. 4-81) – a block on the LM image – is a relatively simple stage of tomographic image processing with particular focus on the operating speed. It has been assumed that the Am,n areas will be sequentially reduced by half in each iteration, down to the 1×1 size. The reduction of the Am,n area is equivalent to the performance of the next stage of approximation of the NFL and RPE lines position.

Increasing the accuracy (precision) of the NFL and RPE lines position determined in the previous iteration involves two stages:

  • concentration of the (m,n) coordinates, in the sense of determining intermediate values ((m,n) points situated exactly in the centre) by means of the linear interpolation method;
  • change of the concentrated points position so that they better approximate the limits sought.

While the first part is intuitive and results only in resampling, the second requires more precise clarification. The second stage consists in matching individual points to the layer sought. As the image is by definition decomposed along the ox axis, and a pixel's brightness on the analysed image corresponds to the median value of the original image in window A (Fig. 4-81), the modification of the RPE and NFL points position occurs only on the vertical axis. The analysis of individual RPE and NFL points is independent of the n-1 point position, unlike in the previous section.

Each RPE point, left from the previous iteration or newly created from interpolation, is matched in the consecutive algorithm stages with increasingly high precision to the RPE layer. The RPE(n) point position changes within the range of ±pu (Fig. 4-87), where the variation range does not depend on the scale of considerations (size of area A) and results strictly from the distance between NFL and RPE (Fig. 4-88). For blocks A of sizes 16×16 down to 1×1, pu is constant and amounts to 2. This value has been assumed on the basis of the average distance between NFL and RPE, typical for the several hundred analysed LGRAY images and equal to around 32 pixels, which means two pixels after decomposition into blocks A of 16×16 size, that is pu=2. The maximum on the LDM image is sought in this ±2 range and the new position of the RPE or NFL point is assumed there. Thus the course of RPE or NFL becomes closer to the actual course of the analysed layer.

Fig. 4-87. Demonstrative diagram of the process of RPE course matching to the edge of the layer sought. Individual pixels independent of each other may change the position within the ±pu range.

Fig. 4-88. Results of matching for two iterations. White colour marks input RPE points, and red and green – consecutive approximations.

The obtained results of matching are presented in Fig. 4-88. White colour shows the input RPE values, i.e. the input data for this stage of the algorithm and the decomposition into blocks A of size 16×16 (LDM16 and LDB16 images), red colour – the results of matching for blocks A of size 8×8 (LDM8 and LDB8 images), and green colour – the results of matching for blocks A of size 4×4 (LDM4 and LDB4 images). As may be seen from Fig. 4-88, with the next decompositions into smaller and smaller areas A, and thus an image of higher resolution, a higher precision is obtained at the cost of time (because the number of analysed RPE, NFL points and their neighbourhoods ±pu increases).

For A of 16 × 16 size this method has such good properties of the global approach to pixel brightness that there is no need to introduce at this stage additional actions aimed at distinguishing layers situated close to each other (which have not been visible so far due to image resolution). At areas A of 4×4 size, however, other layers are already visible, which should be further properly analysed. At increased precision the ONL layer is visible, situated close to the RPE layer (Fig. 4-88). Thereby, in the area marked with a circle there is a high fluctuation of the RPE layer position along the oy axis. Because of that, the next step of the algorithm has been developed, taking into account the separation into RPE and ONL layers at appropriately high resolution. In a practical implementation this fragment looks as follows:

function [mss2,nss2]=HIERARHICALL_PREC(Lm,mss,nss,fun,d,z,pu)
mss=mss*z;
nss=nss*z;
[mss,nss]=HIERARHICALL_DENSE(mss,nss);
Ld16 = blkproc(Lm,[d/z d/z],fun);
Ld16d=zeros(size(Ld16));
for n=1:size(Ld16,2)
        Ld16d(1:end-1,n)=diff(Ld16(:,n));
end
mss2=[]; nss2=[];
for m=1:size(mss,1)
for n=1:size(mss,2)
    if mss(m,n)~=0
        ms2=mss(m,n);
        ns2=nss(m,n);
        m2=ms2+pu;
        m1=ms2-pu;
        if m1<=0; m1=1; end
        if m2>size(Ld16d,1); m2=size(Ld16d,1); end
            mm12=round(m1:m2);
            if ~isempty(mm12)
                Ld16dmm=Ld16d(mm12,ns2);
                % keep the position of the maximum gradient in the +/-pu range
                mm12(Ld16dmm~=max(Ld16dmm))=[];
                if ~isempty(mm12)
                    mss2(m,n)=mm12(1);
                    nss2(m,n)=ns2;
                end
            end
    end
end
end

where function HIERARHICALL_DENSE, designed to condense the number of points on the determined layers, has the following form:

    function [y_out,x_out]=HIERARHICALL_DENSE(y_in,x_in)
    y_out=[0;0]; x_out=[0;0];
    y_in(:,x_in(1,:)==0)=[];
    x_in(:,x_in(1,:)==0)=[];
    for i=1:(size(y_in,2)-1)
        m_1=y_in(1,i:i+1);
        n_12=x_in(1,i:i+1);
        m_2=y_in(2,i:i+1);
        x_out(1:2,1:end+length(n_12(1):n_12(2)))=[[x_out(1,:),n_12(1):n_12(2)];[x_out(2,:),n_12(1):n_12(2)]];
        x_out(:,end)=[];
        % linear interpolation of intermediate points of both lines
        if (m_1(2)-m_1(1))~=0
            w1=m_1(1):(m_1(2)-m_1(1))/(length(n_12(1):n_12(2))-1):m_1(2);
        else
            w1=ones([1 length(n_12(1):n_12(2))])*m_1(1);
        end
        if (m_2(2)-m_2(1))~=0
            w2=m_2(1):(m_2(2)-m_2(1))/(length(n_12(1):n_12(2))-1):m_2(2);
        else
            w2=ones([1 length(n_12(1):n_12(2))])*m_2(1);
        end
        y_out(1:2,1:end+length(n_12(1):n_12(2)))=[y_out(1:2,:),[w1;w2]];
        y_out(:,end)=[];
    end
    y_out=y_out(:,2:end);
    x_out=x_out(:,2:end);

Hence the function HIERARHICALL_PREC is designed to 'match' the layers position at any precision.

Both functions – HIERARHICALL_PREC and the nested HIERARHICALL_DENSE – will be used below in the next stages of approximating the detected layers to the proper position.

z=2;
pu=2;
[mss,nss]=HIERARHICALL_PREC(Lm,mss,nss,fun,d,z,pu);
plot(nss'*d/z-d/z/2, mss'*d/z,'-r*')

z=4;
pu=3;
[mss,nss]=HIERARHICALL_PREC(Lm,mss/2,nss/2,fun,d,z,pu);
plot(nss'*d/z-d/z/2, mss'*d/z,'-g*')

The obtained results are shown below in Fig. 4-89 and Fig. 4-90.

Fig. 4-89. Obtained results of RPE, NFL layers detection on the Lm image.

Fig. 4-90. Obtained results of NFL layer detection on the Lm image – enlargement of Lm image.

The results shown in Fig. 4-89 and Fig. 4-90 are not perfect. The visible minimum of the NFL layer results from the lack of filtration of the yNFL course at the initial stage. Because of that, function HIERARHICALL_MEDIAN, presented below and intended for filtration with a median filter, has been suggested.

function [m_s,n_s]=HIERARHICALL_MEDIAN(mss,nss,Z)
for j=1:size(nss,1)
    for io=1:size(nss,2)
        % median filtering in a window of width Z along each line
        p=io-round(Z/2); k=io+round(Z/2);
        if k>size(nss,2); k=size(nss,2); end
        if p<1; p=1; end
        m_s(j,io)=median(mss(j,p:k));
    end
end
n_s=nss;

The considerations presented above, related to the hierarchical approach, lead to the final version of the algorithm detecting the ONL, RPE and NFL layers.

[Lgray,map]=imread('D:\OCT\SOURCES\3.bmp');
Lgray=ind2gray(Lgray,map);
Lgray=double(Lgray)/255;
Lorg=Lgray;
Lm=medfilt2(Lorg,[5 5]);
Lm=mat2gray(Lm);
szer_o=16;
Lm=[Lm(:,1)*ones([1 szer_o]),Lm,Lm(:,end)*ones([1 szer_o])];
fun = @(x) median(x(:));
[mss,nss,waga_p,L5,L6]=HIERARHICALL_STEP(Lm,fun,szer_o,0.03);
[mss,nss]=HIERARHICALL_PREC(Lm,mss,nss,fun,szer_o,2,2);
[mss,nss]=HIERARHICALL_PREC(Lm,mss/2,nss/2,fun,szer_o,4,3);
[yrpe_onl,xrpe_onl]=HIERARHICALL_MEDIAN(mss(1,:)*4,nss(1,:)*4,5);
[ynfl,xnfl,Lgr]=HIERARHICALL_PREC2(Lm,mss*4,nss*4,20,20);
xnfl(:,xnfl(1,:)==0)=[];
ynfl(:,ynfl(1,:)==0)=[];
xnfl(:,xnfl(2,:)==0)=[];
ynfl(:,ynfl(2,:)==0)=[];
[ynfl,xnfl]=HIERARHICALL_MEDIAN(ynfl,xnfl,5);
figure; imshow(Lm,'notruesize'); hold on
plot(xnfl',ynfl','LineWidth',2)
plot(xrpe_onl,yrpe_onl,'r','LineWidth',2)

where function HIERARHICALL_PREC2 looks as follows:

function [mss2,nss2,Lgr]=HIERARHICALL_PREC2(Lm,mss,nss,pu,pu2)
[mss,nss]=HIERARHICALL_DENSE(mss,nss);
mss2=[]; nss2=[];
Lgr=[]; ngr=[];
for n_=1:size(mss,2);
    n=round(nss(2,n_));
    m1=round(mss(2,n_))-pu;
    m2=round(mss(2,n_))+pu;
    if m1<1; m1=1; end
    if m2>size(Lm,1); m2=size(Lm,1); end
    Lmn=Lm(m1:m2,n);
    Lmnr2=1:length(Lmn);
    Lmf=[Lmnr2',Lmn];
    Lmf=sortrows(Lmf,-2);
    Lmf(Lmf(:,2)<(0.9*Lmf(1,2)),:)=[];
    Lmf=sortrows(Lmf,-1);
    Lmnr2=Lmf(1,1);
    nss2=[nss2,n];
    mss2=[mss2,m1+Lmnr2(1)-1];
    m11=m1+Lmnr2(1)-1-pu2;
    m22=m1+Lmnr2(1)-1+pu2;
    if m11<1; m11=1; end
    if m22>size(Lm,1); m22=size(Lm,1); end
    if length(m11:m22)==(pu2*2+1)
        Lmn=Lm(m11:m22,n);
        Lgr=[Lgr,Lmn];
        ngr=[ngr,n_];
    end
end
Lgr=filter2(ones([3 3]),Lgr)/9;
for n=1:size(Lgr,2)
    po_=Lgr(:,n);
    P=polyfit(1:length(po_),po_',5);
    po=polyval(P,1:length(po_));
    dpo=diff(po);
    dpo(round(length(dpo)/2):end)=0;
    dnr=1:length(dpo);
    if max(dpo)>0.03
        dnr(dpo~=max(dpo))=[];
        dnr_=dnr;
        nss2(2,ngr(n))=nss2(1,ngr(n));
        mss2(2,ngr(n))=mss2(1,ngr(n))+dnr-pu2;
        for itt=(n+1):size(Lgr,2)
            po_=Lgr(:,itt);
            P=polyfit(1:length(po_),po_',4);
            po=polyval(P,1:length(po_));
            dpo2=diff(po);
            dnr1=dnr-3;
            dnr2=dnr+3;
            if dnr1<1; dnr1=1; end
            if dnr2>length(dpo2); dnr2=length(dpo2); end
            dpo2([1:dnr1,dnr2:end])=0;
            dnr2=1:length(dpo2);
            if max(dpo2)>0
                dnr2(dpo2~=max(dpo2))=[];
                dnr=dnr2(1);
                nss2(2,ngr(itt))=nss2(1,ngr(itt));
                mss2(2,ngr(itt))=mss2(1,ngr(itt))+dnr-pu2;
            end
        end
        dnr=dnr_;
        for itt=(n-1):-1:1
            po_=Lgr(:,itt);
            P=polyfit(1:length(po_),po_',4);
            po=polyval(P,1:length(po_));
            dpo2=diff(po);
            dnr1=dnr-4;
            dnr2=dnr+4;
            if dnr1<1; dnr1=1; end
            if dnr2>length(dpo2); dnr2=length(dpo2); end
            dpo2([1:dnr1,dnr2:end])=0;
            dnr2=1:length(dpo2);
            if max(dpo2)>0
                dnr2(dpo2~=max(dpo2))=[];
                dnr=dnr2(1);
                nss2(2,ngr(itt))=nss2(1,ngr(itt));
                mss2(2,ngr(itt))=mss2(1,ngr(itt))+dnr-pu2;
            end
        end
        break
    end
end

The results obtained are shown in Fig. 4-91 and Fig. 4-92.

Fig. 4-91. Detected ONL, RPE and NFL layers.

Fig. 4-92. Enlargement of detected ONL, RPE and NFL layers from the image aside.

4.11.4. Analysis of ONL Layer

This analysis consists in separating the ONL line from the RPE line originating from the previously executed stages of the algorithm. The issue is facilitated by the fact that, on average, approx. 80-90% of pixels on each tomographic image have the column maximum exactly at the RPE point (this property has already been used in the previous section). So the only problem is to detect the position of the ONL line. One of the possible approaches consists of an attempt to detect the contour of the layer sought on the LIR image. This image originates from the LM image through widening of the yRPE(n) layer course along the oy axis within the range of ±pI=20 pixels. The LIR image has been obtained with the number of columns consistent with the number of LM image columns and with the number of rows equal to 2·pI+1. Fig. 4-93 shows the image LIR=LM(m-yRPE(n),n) originating from the LM image from Fig. 4-88.
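A minimal sketch of building this straightened strip, assuming a vector yrpe with the RPE row found for every column of Lm, could be:

% hypothetical sketch: cut a +/-pI strip around the RPE course, producing
% an image with 2*pI+1 rows and the same number of columns as Lm
pI=20;
Lir=zeros(2*pI+1,size(Lm,2));
for n=1:size(Lm,2)
    m1=yrpe(n)-pI; m2=yrpe(n)+pI;
    if (m1>=1)&(m2<=size(Lm,1))
        Lir(:,n)=Lm(m1:m2,n);
    end
end
figure; imshow(Lir)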

Fig. 4-93. Image LIR=LM(m-yRPE(n),n).

The upper layer visible in Fig. 4-93 as a fairly sharp contour is the sought course of ONL. Unfortunately, because of a fairly high individual variation of the ONL layer position relative to RPE, the selected pI range may in further stages of the algorithm be increased even twice (this will be described later). To determine consecutive points of the ONL layer position, the grey levels of individual columns of the LIR image are interpolated with a 4th degree polynomial, obtaining this way LIRS, whose changes of grey levels in individual columns are shown in Fig. 4-94. The position of point ONL(n) occurs in the place of the highest gradient occurring within the range (RPE(n)-pI) ÷ RPE(n) relative to the LMS image, or 1 ÷ pI relative to the LIRS image.
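A minimal sketch of this step, assuming the strip Lir from the previous sketch (rows 1..2·pI+1, RPE at row pI+1), could be:

% hypothetical sketch: per-column 4th degree polynomial smoothing and
% highest-gradient search above the RPE row (rows 1..pI of the strip)
pI=20;
onl=zeros(1,size(Lir,2));
for n=1:size(Lir,2)
    po_=Lir(:,n)';
    P=polyfit(1:length(po_),po_,4);   % smooth the column profile
    po=polyval(P,1:length(po_));
    dpo=diff(po);                     % gradient of the smoothed profile
    dpo(pI+1:end)=0;                  % restrict the search to rows above RPE
    [mx,onl(n)]=max(dpo);             % highest gradient - ONL candidate row
end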

Fig. 4-94. Courses of LIRS=LMS(m-yRPE(n),n) versus m.

Fig. 4-94

Courses of LIRS=LMS(m-yRPE(n),n) versus m.

As may be seen in Fig. 4-95, the method presented copes very well with detecting the NFL, RPE and ONL layers, marked in red, blue and green, respectively.

Fig. 4-95. Parts of LM images with marked courses of NFL – red, RPE – blue, and ONL - green.

Fig. 4-95

Parts of LM images with marked courses of NFL – red, RPE – blue, and ONL - green.

Another solution to this problem is presented below.

4.11.5. Determination of the Area of Interest and Preprocessing

Having the coordinates of points yNFL(n) and yRPE(n) for consecutive n-columns, the area of interest has been determined as the area satisfying the condition yNFL(n)<y<yRPE(n). An example of the LGR area originating from the LM image presented in Fig. 4-3, after filtration using a median filter of 7×7 size (the size was arbitrarily chosen), is shown in Fig. 4-96. The LGR2 image is related to a similar fragment of the LM image, but before filtration.

Fig. 4-96. Image LGR.

Fig. 4-96

Image LGR.

Images presented in Fig. 4-96 and Fig. 4-97 originated from the algorithm presented below.

Fig. 4-97. Image LGR2.

Fig. 4-97

Image LGR2.

yrpe_onl=round(yrpe_onl);
xrpe_onl=round(xrpe_onl);
[yrpe_onl,xrpe_onl]=HIERARHICALL_DENSE2(yrpe_onl,xrpe_onl);
ynfl=round(ynfl(1,:));
xnfl=round(xnfl(1,:));
Lgr=[];
Lgr2=[];
fun2 = @(x) median(x(:))*ones(size(x));
Lmf=blkproc(Lm,[3 3],[3 3],fun2);     % block-wise median filtration
m1n2=[];
for ix=1:length(yrpe_onl)
    m1=yrpe_onl(ix); n1=xrpe_onl(ix);
    xynfl=[ynfl',xnfl']; xynfl_=xynfl(xynfl(:,2)==n1,:);
    m1n2(ix,1:2)=[m1,0];
    if ~isempty(xynfl_)
        m2=xynfl_(1,1); n2=xynfl_(1,2);
        Lgr2(1:(m2-m1+1),ix)=Lm(m1:m2,n2);
        Lgr(1:(m2-m1+1),ix)=Lmf(m1:m2,n2);
        m1n2(ix,1:2)=[m1,n2];
    end
end
figure; imshow(Lgr);
figure; imshow(Lgr2);

where the function HIERARHICALL_DENSE2 is:

function [y_out,x_out]=HIERARHICALL_DENSE2(y_in,x_in)
% Densifies a sequence of points by linear interpolation between
% consecutive (y_in,x_in) pairs
y_out=[0]; x_out=[0];
y_in(:,x_in==0)=[];
x_in(:,x_in==0)=[];
for i=1:(length(y_in)-1)
    m_1=y_in(i:i+1);
    n_12=x_in(i:i+1);
    x_out(1:end+length(n_12(1):n_12(2)))=[x_out(:)',n_12(1):n_12(2)];
    x_out(:,end)=[];
    if (m_1(2)-m_1(1))~=0
        w1=m_1(1):(m_1(2)-m_1(1))/(length(n_12(1):n_12(2))-1):m_1(2);
    else
        w1=ones([1 length(n_12(1):n_12(2))])*m_1(1);
    end
    y_out(1:end+length(n_12(1):n_12(2)))=[y_out(:)',w1];
    y_out(end)=[];
end
y_out=y_out(2:end);
x_out=x_out(2:end);

The first stage of the algorithm operation is the sequential performance of convolution with the mask h, i.e.:

h(m_h,n_h,\theta=0)=\begin{bmatrix} -1 & -1 & 1 & 1 \\ -1 & -1 & 1 & 1 \\ -1 & -1 & 1 & 1 \\ -1 & -1 & 1 & 1 \end{bmatrix}   (59)

for angles θ from the range 80° to 100°, every 1°.

L_{SGR}(m,n,\theta)=\sum_{m_h=-M_h/2}^{M_h/2}\;\sum_{n_h=-N_h/2}^{N_h/2} L_{GR}(m+m_h,n+n_h)\cdot h(m_h,n_h,\theta)   (60)

L_{GGR}(m,n)=\max_{\theta\in(80^{\circ},100^{\circ})}\big(L_{SGR}(m,n,\theta)\big)   (61)

where m – row, n – column, θ – angle of mask h rotation, Mh,Nh – number of mask h rows and columns.

The implementation of this fragment is presented below:

t=-4:1:4; f=OCT_GAUSS(t,1); f=f/max(f(:)); f=f*(4+1)-abs(2);
h=ones([9 1])*f;
h=imresize(h,[3 3],'bicubic');
h(:,round(size(h,2)/2):end)=max(h(:));
h=imresize([-2 -2 0 2 2],[15 5],'bicubic');   % final form of mask h
Lggr=zeros(size(Lgr));
Lphi=zeros(size(Lgr));
for phi=-100:10:-80
    h_=imrotate(h,phi,'bicubic');
    Lsgr=conv2(Lgr,h_,'same');
    Lpor=Lggr>Lsgr;
    Lphi=Lpor.*Lphi+(~Lpor)*phi;    % remember the winning angle
    Lggr=max(Lggr,Lsgr);            % maximum over all angles (61)
end
figure
imshow([mat2gray(Lggr)]);
figure;
imshow(Lggr,[0 0.5])

where OCT_GAUSS:

function y = OCT_GAUSS(x,std)
y = exp(-x.^2/(2*std^2))/(std*sqrt(2*pi));

The resultant images are shown in Fig. 4-98 and Fig. 4-99.

Fig. 4-98. Image LGGR.

Fig. 4-98

Image LGGR.

Fig. 4-99. Image LGGR after normalisation to [0 0.5] range.

Fig. 4-99

Image LGGR after normalisation to [0 0.5] range.

The range of θ angle values was selected because of the position of the layers sought, which in accordance with medical premises should be 'nearly' parallel, with small angular deviations. Any pathology featuring a significant angular change of the yNFL(n) and yRPE(n) layers is corrected after the conversion to the LGR image. The methodology of performing consecutive convolutions (60) for successively changing θ angle values, and then calculating the maximum over the consecutive resultant images (61), was described in detail in [25] and [40]. The resultant image Lθ, obtained on the basis of the code presented above and of:

figure; imshow(Lphi,[-100 -80]); colormap('jet'); colorbar

is shown in Fig. 4-100.

Fig. 4-100. Image Lθ.

Fig. 4-100

Image Lθ.

The division into individual layers consists here in tracking the changes of position of individual layers' points for consecutive n-columns of the LGGR image. This issue is not a trivial one, mainly due to difficulties in the identification (in a general case) of the number of visible layers, due to the lack of their continuity, and also, very often, due to their decay caused by e.g. existing shadows [2], [4], [18]. These issues are illustrated by the graph of LGGR image grey level changes presented in Fig. 4-101. The changes of grey levels have been marked in red and in green for consecutive columns of the LGGR image (in the presented example, n=120 and 121). The tracking consists here in suggesting a method to connect individual peaks of the presented courses, which will happen in the next section.

Fig. 4-101. Examples of grey level changes for n=120 – red and n=121 – green colour of LUGR image for m∈(5,35).

Fig. 4-101

Examples of grey level changes for n=120 – red and n=121 – green colour of LUGR image for m∈(5,35).

4.11.6. Layers Points Analysis and Connecting

The localisation and determination of the position of further layers, having already the NFL, RPE and ONL layers, is one of the most difficult issues. The graph shown in Fig. 4-101 clearly confirms this presumption.

In the first stage it is necessary to find the positions of the maxima on the graph from Fig. 4-101. To this end the following operation was carried out:

L_{UGR}(m,n)=L_{GGR}(m,n)-L_{SR}(m,n)   (62)

where LSR is the image originating from the filtration of the LGGR image using an averaging filter with a mask of experimentally chosen 9×9 size, i.e.:

Lugr=(Lggr-conv2(Lggr,ones(9),'same')/81);

The procedure enables removing the unevenness of lighting visible in the image in Fig. 4-96 and thereby in the graph from Fig. 4-101. The graph for the same range of rows and columns of the LUGR image, i.e. n=120 and n=121 for m∈(5,35), is shown in Fig. 4-101.

The Matlab implementation of the described procedure looks as follows:

figure;
    plot(Lggr(:,121),'-g*'); grid on; hold on
    plot(Lggr(:,121-1),'-r*'); hold off
ylabel('L_{GGR}(m,120), L_{GGR}(m,121)','FontSize',20)
xlabel('m','FontSize',20)
Lugr=(Lggr-conv2(Lggr,ones(9),'same')/81);
figure;
    plot(Lugr(:,121),'-g*'); grid on; hold on
    plot(Lugr(:,121-1),'-r*'); hold off
ylabel('L_{UGR}(m,120), L_{UGR}(m,121)','FontSize',20)
xlabel('m','FontSize',20)

Points p(i,n) (where i – index of a consecutive point in the nth column) are shown in Fig. 4-102 on the LUGR image. The position of individual p(i,n) points for the LUGR image was determined based on the method of finding consecutive maximum values within labelled binary areas of each column (the binarisation threshold was set just above zero – 0.01 in the code below). The source code responsible for this part is presented below:

Fig. 4-102. Image LUGR with marked points p(i,n) of the maximum position for consecutive areas and its enlargement.

Fig. 4-102

Image LUGR with marked points p(i,n) of the maximum position for consecutive areas and its enlargement.

figure
imshow(Lugr); hold on
for n=1:size(Lugr,2)
    Lnd=Lugr(:,n);
    Llab=bwlabel(Lnd>0.01);
    Lnr=1:length(Llab);
    for io=1:max(Llab)
        Lnd_=Lnd;
        Lnd_(Llab~=io)=0;
        Lnrio=Lnr(Lnd_==max(Lnd_(:)));
        plot(n,Lnrio(1),'.r')
    end
end

The image generated is shown below.

Fig. 4-103. LUGR image with marked p(i,n) points.

Fig. 4-103

LUGR image with marked p(i,n) points.

The following assumptions were made in the process of individual p(i,n) points connecting:

  • pzx – parameter responsible for the permissible range of points connecting (analysing) on the ox axis,
  • pzy – parameter responsible for the permissible range of points connecting (analysing) on the oy axis,
  • pzc – parameter responsible for the permissible range on the ox axis, in which the optimum connection points are sought,
  • each new point, if it does not fulfil the assumptions on the pzx and pzy distance, is assumed to be the first point of a new layer,
  • each point may belong to only one line, which by definition limits the possibility of lines division or connection (see the sketch after this list).
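
Before the full implementation is presented, the basic nearest-point rule used in the connecting process may be sketched for two neighbouring columns as follows (pts1 and pts2 are hypothetical vectors holding example p(i,n-1) and p(i,n) row coordinates):

% Sketch of the nearest-point rule: each point in column n is linked to
% the closest unused point in column n-1, if it lies within pzy rows
pzy=2;
pts1=[10 17 25];                        % example p(i,n-1) row coordinates
pts2=[11 18 30];                        % example p(i,n) row coordinates
used=false(size(pts1));
for i=1:length(pts2)
    d=abs(pts1-pts2(i)); d(used)=Inf;   % distances to the unused points
    [dmin,j]=min(d);
    if dmin<=pzy
        used(j)=true;                   % connect pts1(j) with pts2(i)
        fprintf('connect %d -> %d\n',pts1(j),pts2(i));
    end                                 % otherwise pts2(i) starts a new line
end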

As an illustration the process of connecting for typical and extreme cases is shown below (Fig. 4-104).

Fig. 4-104. Demonstrative diagrams of typical and extreme cases of individual layers’ p(i,n) points connecting. Results are shown for parameters pzx=2, pzy=2, pzc=6.

Fig. 4-104

Demonstrative diagrams of typical and extreme cases of individual layers’ p(i,n) points connecting. Results are shown for parameters pzx=2, pzy=2, pzc=6.

Fig. 4-104 shows demonstrative diagrams of typical and extreme cases of connecting p(i,n) points into lines marked as w(j,n), where j – line number and n – column. Fig. 4-104 a) shows a typical case, where, having two points p(1,1) and p(2,1), p(1,1) was connected with p(1,3) because of a smaller distance on the oy axis. Fig. 4-104 b) shows a reverse, more difficult situation as compared with Fig. 4-104 a), because points p(1,1) and p(2,1) are equidistant. In this case, because points p(i,n) for each column are determined top-down, the connection will be carried out between p(1,1) and p(1,3). Fig. 4-104 c) shows a situation similar to Fig. 4-104 b). In Fig. 4-104 d) the system of connections is visible for the case where there are points of discontinuity in the determination of points comprised by individual layers. Parameter pzc is responsible for that. In the case of Fig. 4-104 e) an erroneous crossing of lines occurred. Points p(2,2), p(1,4) and p(2,6) were properly connected, while point p(1,2) was improperly connected with p(2,6). Such an error results from adopting the principle of connecting with the nearest point together with a too large range of pzc parameter values, which in this case 'allowed' connecting p(1,2) and p(2,6). Fig. 4-104 f) is a typical example, where the line formed from points p(2,2) and p(2,4) ends and a new line starts from point p(1,6). This example is interesting to the extent that if parameters pzx, pzy and pzc allowed it, lines created from points p(1,2), p(1,4) and p(1,6) should be obtained as a result, as well as a second line from p(2,2), p(2,4) and p(2,6). Obviously, having only such data (p(i,n) points coordinates), it is not possible to determine which solution is the right one. Situations presented in Fig. 4-104 a), b) and c) have another significant feature: by definition they do not allow individual analysed layers to be connected (Fig. 4-104 a), b)) or divided (Fig. 4-104 c)).

For parameters pzx=2, pzy=2, pzc=6 and points p(i,n) of LUGR image shown in Fig. 4-102 the following results were obtained - Fig. 4-105.

Fig. 4-105. Image LUGR with marked grouped p(i,n) points for parameters pzx=2, pzy=2, pzc=6 and its enlargement.

Fig. 4-105

Image LUGR with marked grouped p(i,n) points for parameters pzx=2, pzy=2, pzc=6 and its enlargement.

The implementation of the discussed algorithm fragment is presented below. The Reader should already be familiar with its first part from the previous implementation, i.e.:

figure
imshow(Lugr); hold on
rr_d_o=0;
rr_u_o=0;
r_pp=[];
rrd=[]; rru=[];
rrd_pol=[]; rrd_nr=[];
rrd_pam=[];
rru_pam=[];
for n=1:size(Lugr,2)
    Lnd=Lugr(:,n);
    Llab=bwlabel(Lnd>0.01);
    Lnr=1:length(Llab);
    rr_d=[];
    for io=1:max(Llab)
        Lnd_=Lnd;
        Lnd_(Llab~=io)=0;
        Lnrio=Lnr(Lnd_==max(Lnd_(:)));
        rr_d=[rr_d,Lnrio(1)];
    end
…

The second part, in turn, contains the actual solution of the described problem, i.e.:

…
pzc=10;
pzy=4;
rrd_pol(1:length(rr_d),n)=rr_d;
if n==1
    rrd_nr(1:length(rr_d),n)=(1:length(rr_d))';
else
    rrd_nr(1:length(rr_d),n)=0;
end
wu=[]; wd=[];
wuiu=[]; wdiu=[];
rrd(1:length(rr_d),n)=rr_d;
rr_dpp=rr_d;
for ni=(n-1):-1:(n-pzc)
    if ni>0
        rr_d=rrd(:,n);
        rr_d_o=rrd(:,ni);
        rrd_nr_iu=rrd_nr(:,ni);
        if (~isempty(rr_d))&(~isempty(rr_d_o))
            uu=ones([length(rr_d) 1])*rr_d_o';
            nrnr=ones([length(rr_d) 1])*rrd_nr_iu';
            dd=rr_d*ones([1 length(rr_d_o)]);
            ww=ones([size(dd-uu,1) 1])*min(abs(dd-uu))==abs(dd-uu);
            ww_=min(abs(dd-uu),[],2)*ones([1 size(dd-uu,2)])==abs(dd-uu);
            ww=ww_.*ww;
            ww(abs(dd-uu)>pzy)=0; ww(dd==0)=0;
            ww(uu==0)=0; ww(rr_d==0,:)=0; ww(:,rr_d_o==0)=0;
            wu_=ww.*uu; wu_(wu_==0)=[]; wu=[wu,wu_];
            wd_=ww.*dd; wd_(wd_==0)=[]; wd=[wd,wd_];
            wuiu_=ones(size(wu_))*(ni); wuiu=[wuiu,wuiu_];
            wdiu_=ones(size(wd_))*(n); wdiu=[wdiu,wdiu_];
            nrnr=sum(nrnr.*ww,2);
            nrnrw=sum(ww,2);
            niu=max(rrd_nr(:))+1;
            for gf=1:length(nrnr)
                if (nrnr(gf)==0)&&(nrnrw(gf)==1)
                    nrnr(gf)=niu;
                    wvv=ww(gf,:);
                    rrd_nr(wvv==1,ni)=niu;
                    niu=niu+1;
                end
            end
            rpnr=rrd_nr(:,n); rpnr=rpnr+nrnr; rrd_nr(:,n)=rpnr;
            rr_d(sum(ww,2)~=0)=0;
            rr_d_o(sum(ww,1)~=0)=0;
            rrd(1:length(rr_d),n)=rr_d;
            rrd(1:length(rr_d_o),ni)=rr_d_o;
        end
    end
end
rrd(1:length(rr_dpp),n)=rr_dpp;
for j=1:length(wu)
    line([wuiu(j) wdiu(j)],[wu(j) wd(j)],'LineWidth',2,'Color','r')
end
n
end

Fig. 4-106 shows the arrangement of individual jth w(j,n) lines on the input image LM. Fig. 4-107, in turn, shows other results of p(i,n) points grouping for parameters pzx=2, pzy=2, pzc=6 on other LUGR images.

Fig. 4-106. Image LM and its enlargement with marked groups of connected p(i,n) points for consecutive jth w(j,n) lines.

Fig. 4-106

Image LM and its enlargement with marked groups of connected p(i,n) points for consecutive jth w(j,n) lines.

Fig. 4-107. Example LUGR images with marked grouped p(i,n) points for parameters pzx=2, pzy=2, pzc=6 and its enlargement.

Fig. 4-107

Example LUGR images with marked grouped p(i,n) points for parameters pzx=2, pzy=2, pzc=6 and its enlargement.

Two characteristic elements may be noticed. The first of them is related to the existence of short lines, which are a disturbance (short being understood here as not longer than 10–20 points). The second characteristic element is the determination of the transition borders (looking in the sequence of rows occurrence – top-down) between a lighter and a darker area. This is caused by the asymmetric form of mask h (59). Hence a supplementary approach consists in performing the operations presented, starting from the relationship (59), for the suggested h but for angles θ from the range -80° to -100°, every 1°. The results obtained are presented below (Fig. 4-108).

Fig. 4-108. Image LM and its enlargement with marked groups of connected p(i,n) points for consecutive jth w(j,n) lines at h for θ angles from the -80° to -100° range.

Fig. 4-108

Image LM and its enlargement with marked groups of connected p(i,n) points for consecutive jth w(j,n) lines at h for θ angles from the -80° to -100° range.

Further on, denoting w(j,n) lines obtained for h rotated within θ angles from the range -80° to -100° as w1(j1,n) and w(j,n) lines obtained for h rotated within θ angles from the range 80° to 100° as w2(j2,n), the following operations were performed:

  • the locations of the last p(i,n) points of consecutive w1(j1,n) and w2(j2,n) lines have been checked,
  • the approximation by a second degree polynomial of the last points of w1(j1,n) and w2(j2,n) lines was carried out,
  • it has been checked whether the obtained next points extending the analysed line j1* connect with another line j1≠j1* (or similarly j2≠j2*).

These operations have been precisely described in the next section.

4.11.7. Line Correction

The determined w1(j1,n) and w2(j2,n) lines are shown as an example in Fig. 4-108. The lines correction consists in connecting them, provided that the extension of consecutive points of the approximated line coincides in a specific range with the beginning of the next one. The following assumptions were made in the process of connecting individual w(j,n) lines:

  • pkx – parameter responsible for the permissible range of lines connecting (analysing) on the ox axis,
  • pky – parameter responsible for the permissible range of lines connecting (analysing) on the oy axis,
  • pkc – parameter responsible for the range on the ox axis, in which the line end is approximated,
  • pko – parameter responsible for the size of the ox axis analysis window,
  • the process of lines connecting applies only to a line which ends and a line which starts – connecting of branches is not carried out,
  • only those lines are connected, which have a minimum of 90% of analysed points falling within the range ±pky with respect to the approximated line (Fig. 4-109),
  • lines connection consists in changing their labels – in the case of connecting e.g. w(1,n) with w(2,n) lines, the label is changed from '2' to '1'.

Fig. 4-109. Demonstrative lines correction diagram with marked algorithm parameters pkc, pkx, pky and pko.

Fig. 4-109

Demonstrative lines correction diagram with marked algorithm parameters pkc, pkx, pky and pko.

The presented methodology works pretty well for the tested image resolutions in the case when the approximation is carried out using a first or second degree polynomial and when the following parameter values are assumed: pkx=20, pky=4, pkc=10, pko=10. The example results obtained for the last two points (marked wa'), the last three points (marked wa'') and the last four points (marked wa''') are shown in Fig. 4-110.

Fig. 4-110. Demonstrative diagram of lines approximation results using order 1 polynomial for different numbers of end points.

Fig. 4-110

Demonstrative diagram of lines approximation results using order 1 polynomial for different numbers of end points.

Fig. 4-111. Image fragment before lines correction.

Fig. 4-111

Image fragment before lines correction.

Fig. 4-112. Image fragment after lines correction obtained for parameters pkx=20, pky=4, pkc=10, pko=10.

Fig. 4-112

Image fragment after lines correction obtained for parameters pkx=20, pky=4, pkc=10, pko=10.

The initial analysis of the approximation results shows a direct relationship between obtaining correct results of w(j,n) lines connecting and the number of points analysed at their ends. In particular, when connecting lines which – looking along the ox axis – overlap for the same values is allowed, the situation shown in Fig. 4-113 may occur.

Fig. 4-113. Result of connecting lines overlapping each other for a few pixels with regard to the ox axis.

Fig. 4-113

Result of connecting lines overlapping each other for a few pixels with regard to the ox axis.

As the implementation of this fragment in Matlab is trivial, we leave this part to be written by the Reader.
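
One possible starting point may be sketched as follows (a sketch only, under our own assumptions: w1, nw1 and w2, nw2 are hypothetical vectors holding the row and column coordinates of the ending and the starting line, respectively):

% Sketch of lines correction: the end of line w1 is approximated with a
% first degree polynomial over its last pkc columns and line w2 is
% attached if it starts within pkx columns and at least 90% of its
% points in the pko window fall within +/-pky of the extrapolation
pkx=20; pky=4; pkc=10; pko=10;
P=polyfit(nw1(end-pkc+1:end),w1(end-pkc+1:end),1);  % approximate the line end
if (nw2(1)-nw1(end))>0 && (nw2(1)-nw1(end))<=pkx    % w2 starts close enough on ox
    nn=nw2(nw2<=nw2(1)+pko-1);                      % verification window on ox
    yy=w2(nw2<=nw2(1)+pko-1);
    hits=abs(polyval(P,nn)-yy)<=pky;                % points close to the extension
    if mean(hits)>=0.9
        % connect: give line w2 the label of line w1
    end
end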

4.11.8. Layers Thickness Map and 3D Reconstruction

The analysis of an LM images sequence, and precisely the acquiring of the NFL, RPE and ONL layers, allows performing a 3D reconstruction and a layers thickness measurement. A designation of an image sequence with an upper index (i) has been adopted, where i = {1,2,3,…,k-1,k}, i.e. LM(1), LM(2), LM(3),…, LM(k-1), LM(k). For a sequence of 50 images the position of the NFL (Fig. 4-114), RPE (Fig. 4-115) and ONL (Fig. 4-116) layers was measured, as well as the ONL–RPE layer thickness (Fig. 4-117).
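
A minimal sketch of such a measurement, assuming that yONL and yRPE are hypothetical matrices of size (number of images) × (number of columns) holding the layers positions in pixels, may look as follows:

% Sketch of the thickness map and a simple 3D view of the layers
Lth=abs(yRPE-yONL);                     % ONL-RPE thickness map in pixels
figure; imagesc(Lth); colorbar          % thickness map
figure; surf(yRPE,'EdgeColor','none'); hold on
surf(yONL,'EdgeColor','none')           % both layers in one 3D view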

Fig. 4-114. NFL spatial position.

Fig. 4-114

NFL spatial position.

Fig. 4-115. RPE spatial position.

Fig. 4-115

RPE spatial position.

Fig. 4-116. ONL spatial position.

Fig. 4-116

ONL spatial position.

Fig. 4-117. ONL-RPE layer thickness.

Fig. 4-117

ONL-RPE layer thickness.

The 3D reconstruction performed based on the LM(i) images sequence is the key element crowning the results obtained from the suggested algorithm. The sequence of images, and more precisely the sequence of NFL(i)(n), RPE(i)(n) and ONL(i)(n) layers positions, provides the basis for the 3D reconstruction of a tomographic image. For an example sequence of 50 images and a single LM(i) image resolution of M×N = 256×512, a 3D image is obtained, composed of three layers NFL, RPE and ONL of 50×512 size. Results are shown in Fig. 4-118 for an example reconstruction of original images (without the processing described above) based on pixels brightness, and in Fig. 4-119 – the reconstruction performed using the algorithm described above, on the basis of NFL(i)(n), RPE(i)(n) and ONL(i)(n) information.

Fig. 4-118. Example of 3D reconstruction of layers NFL and ONL – green, RPE - red.

Fig. 4-118

Example of 3D reconstruction of layers NFL and ONL – green, RPE - red.

Fig. 4-119. Example of 3D reconstruction of layers NFL – blue, RPE – red and ONL – green.

Fig. 4-119

Example of 3D reconstruction of layers NFL – blue, RPE – red and ONL – green.

The layers presented in Fig. 4-119 obviously provide a possibility of automatic determination of the thickest or the thinnest places between any points.
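
A minimal sketch of such a determination, reusing the hypothetical Lth thickness map from the previous sketch:

% Sketch: location of the thickest and the thinnest place of the
% ONL-RPE layer on the Lth thickness map
[thmax,imax]=max(Lth(:)); [ib,nb]=ind2sub(size(Lth),imax);
[thmin,imin]=min(Lth(:)); [it,nt]=ind2sub(size(Lth),imin);
fprintf('thickest place: %g px, image %d, column %d\n',thmax,ib,nb);
fprintf('thinnest place: %g px, image %d, column %d\n',thmin,it,nt);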

4.11.9. Evaluation of Hierarchical Approach

The algorithm presented, after a minor time optimisation, detects the NFL, RPE and ONL layers within up to a few dozen milliseconds on a computer with a 2.5GHz Intel Core 2 Quad processor. The time was measured as an average over the analysis of 700 images, dividing individual images into blocks A (Fig. 4-81) of consecutive sizes 16×16, 8×8, 4×4, 2×2. This time may be reduced by modifying the number of approximation blocks, at the cost of increasing the layer position identification error – results are shown in the table below (Tab 4-1).

Tab 4-1. Percentage execution time of algorithm for NFL, RPE and ONL layer detection.

Tab 4-1

Percentage execution time of algorithm for NFL, RPE and ONL layer detection.

The specification of individual algorithm stages' analysis times presented in the table above clearly shows that the longest execution concerns the first stage of image preprocessing, where filtration with a median filter is of prevailing importance (in terms of execution time), as well as the last stage of precise determination of the RPE and ONL layers position. The precise RPE and ONL breakdown is related to the analysis, and mainly to the correction of RPE and ONL points position in all columns of the image at the most precise approximation (because of the small distance between RPE and ONL it is not possible to perform this breakdown in earlier approximations). So the reduction of computation times may occur only at the cost of increasing the error of layers thickness measurement. For example, for the analysis in the first approximation for A of 32×32 size and then for 16×16, gross errors are obtained, generated in the first stage and duplicated in the next ones. The greatest accuracy is obtained for approximations of A of 16×16 size, and then of 8×8, 4×4, 2×2 and 1×1; however, the computation time then nearly doubles.

4.12. Evaluation and Comparison of Suggested Approaches Results

The methods presented: classical, Canny, random [28] or hierarchical [27], give correct results at the detection (recognition) of the RPE, IS/OS, NFL or OPL layers on a tomographic eye image. Differences between the proposed methods become visible only when comparing their effectiveness in the analysis of the mentioned several hundred tomographic images. When comparing these methods it is necessary to consider the accuracy of layer recognition, the algorithm responses to pathologies and optic nerve heads, and the operating speed, in this case for a computer (P4 CPU 3GHz, 2GB RAM).

The following table, Tab 4-2, presents a cumulative comparison of the proposed algorithms, and Tab 4-3 a comparison of results obtained using the discussed algorithms, taking into account typical and critical fragments of individual algorithms operation.

Tab 4-2. Cumulative comparison of algorithms proposed.

Tab 4-2

Cumulative comparison of algorithms proposed.

Tab 4-3. Comparison of results obtained using algorithms discussed.

Tab 4-3

Comparison of results obtained using algorithms discussed.

The random method, described as an example in this monograph, gives correct results at contours determination (layers separation) both on OCT images and on others, for which classical methods of contours determination give no results or results that do not provide a continuous contour. The algorithm drawbacks include a high influence of noise on the results obtained. This results from the fact that a number of pixels of pretty high value, resulting from a disturbance, increases the probability of selecting in this place a starting point and hence a component contour. The second drawback is the computation time, which is the longer, the larger the number of selected points and/or the later the condition stopping the search for the next points oi,j+1 is fulfilled.

The specification of the hierarchical algorithm's individual stages' analysis times presented in the table above clearly shows that the longest execution concerns the first stage of image preprocessing, where filtration with a median filter is of prevailing importance (in terms of execution time), as well as the last stage of precise determination of the RPE and IS/OS layers position. The precise RPE and IS/OS breakdown is related to the analysis, and mainly to the correction of RPE and IS/OS points position in all columns of the image at the most precise approximation (because of the small distance between RPE and IS/OS it is not possible to perform this breakdown in earlier approximations). So the reduction of computation times may occur only at the cost of increasing the error of layers thickness measurement. For example, for the analysis in the first approximation for A of 32×32 size and then for 16×16, gross errors are obtained, generated in the first stage and duplicated in the next ones. The greatest accuracy is obtained for approximations of A of 16×16 size, and then of 8×8, 4×4, 2×2 and 1×1; however, the computation time then nearly doubles.

The 3D reconstruction performed based on the LM(i) images sequence is the key element crowning the results obtained from the suggested algorithm. The sequence of images, and more precisely the sequence of yNFL(i)(n), yRPE(i)(n) and yIS/OS(i)(n) layers positions, provides the basis for the 3D reconstruction of a tomographic image. For an example sequence of 50 images and a single LM(i) image resolution of M×N = 256×512, a 3D image is obtained, composed of three NFL, RPE and IS/OS layers of 50×512 size. Results are shown in Fig. 4-118 for an example reconstruction of original images (without the processing described above) based on pixels brightness, and in Fig. 4-119 – the reconstruction performed using the algorithm described above, based on yNFL(i)(n), yRPE(i)(n) and yIS/OS(i)(n) information.

Copyright © 2011 by Robert Koprowski, Zygmunt Wróbel.

This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported License
