This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported License

The second part of this monograph addresses the posterior eye segment, with special emphasis on automated methods for the detection of individual retinal layers. The optic nerve head and the degree of retinal detachment are also analysed fully automatically. The measurements performed make it possible not only to obtain quantitative data but also to determine thickness maps of individual layers automatically.

## 4.1. Introduction to the fundus of the eye analysis

The initial part of the analysis of the fundus of the eye is similar to the analysis of the anterior eye segment [5], [11], [12], [13]. This applies to the acquisition of the DICOM image and its loading into the Matlab workspace, as well as to reading the header and the patient and other data contained in it. The methods and tools intended for this purpose have been discussed in detail in the first section of this monograph. The methodology for the image analysis presented below assumes that the image has already been loaded into the Matlab workspace.

The input images L_{GRAY} were acquired, e.g., from an SOCT Copernicus optical tomograph with the following parameters: light source wavelength 840 nm, spectrum width 50 nm, axial (longitudinal) resolution 6 μm, transverse resolution 12-18 μm, tomogram window width 2 mm, measurement rate 25,000 A-scans per second, maximum scanning width 10 mm, maximum number of A-scans per B-scan 10,500. The images were saved as grey levels at a resolution of M×N = 722×928, with 8 bits per pixel.

The identification of the positions of individual layers situated between the inner limiting membrane (ILM) and the choriocapillaris (CC) – starting from the nerve fibre layer (NFL), through the ganglion cell layer (GCL), inner plexiform layer (IPL), inner nuclear layer (INL), outer plexiform layer (OPL), and the inner and outer segments of photoreceptors (IS/OS), and ending at the retinal pigment epithelium (RPE) and the CC – is shown in Fig. 4-1 and Fig. 4-2.

Fig. 4-3 shows the layers detected by the algorithm described in this monograph, i.e. NFL, ONL and RPE, superimposed on the L_{M} image. The position of these layers provides the basis for the further methodology described here.

Further considerations concern methods for the automated determination of the boundaries of the layers visible in Fig. 4-1, i.e. tractions, the internal retina boundary, the RNFL/GCL boundary, the IS/OS boundary, and the OS/RPE and RPE boundaries, preceded by an analysis of results obtained using known algorithms [1], [3], [17], [19], [22], [33], [36], [43].

## 4.2. Algorithm for Automated Analysis of Eye Layers in the Classical Method

The algorithm proposed by the authors, presented below, has a modular (block) structure, in which selected blocks can operate independently of each other (Fig. 4-39).

The block diagram presented in Fig. 4-39 divides the algorithm operation into five stages:

- Preprocessing – filtration with a median filter and normalisation.
- Determination of the RPE layer position and then, using a modified active contour method, of the ONL and IS boundaries.
- Determination of the position of the NFL internal retina boundary and then of the GCL areas (usually two).
- Correction of the obtained layers with regard to the analysis area, taking into account the quality of individual areas of the presented object.
- Determination of 'holes' (local brightness minima) based on a qualitative analysis of the image.

These stages will be the subject of considerations in the next sections.

### 4.2.1. Preprocessing

Preliminary image processing includes filtration with a median filter with a square mask, 21×21 in size, to eliminate noise and small artefacts introduced by the measuring system during image acquisition. The mask size was selected arbitrarily. In addition, the image was cropped at the bottom to correct erroneous instrument readings in the last lines of the image, i.e.:

[Lgray,map]=imread('D:\OCT\FOLDERS\2.OCT\SKAN7.bmp');
Lgray=Lgray(1:850,:);
Lgray=ind2gray(Lgray,map);
Lgray=double(Lgray)/255;
Lorg=Lgray;
Lmed=medfilt2(Lorg,[5 5]);

The second component consisted of normalisation from the range of minimum and maximum pixel brightness to a full range between 0 and 1, i.e.:

Lmed=mat2gray(Lmed); figure; imshow(Lmed)

The L_{GRAY} images converted in this way were analysed using available algorithms, which in this case – given the necessity to detect discontinuous line ranges – did not provide satisfactory results.

### 4.2.2. Detection of RPE Boundary

The RPE layer is the first and the simplest to determine automatically on an OCT image. It is clearly visible as the brightest area in each column. This property has been used in the first part of the algorithm.

The analysis of the L_{GRAY} images after preprocessing (filtration and normalisation, yielding L_{MED}) started with the analysis of the position of the maximum in consecutive columns. If m and n denote the rows and columns of the image matrix, then the new image is

$${\text{L}}_{\text{BIN\_RPE}}(\text{m},\text{n})=\left\{\begin{array}{ll}1 & \text{if } {\text{L}}_{\text{MED}}(\text{m},\text{n})>{\text{p}}_{\text{r}}\cdot \underset{\text{m}}{\max }\,{\text{L}}_{\text{MED}}(\text{m},\text{n})\\ 0 & \text{otherwise}\end{array}\right.$$

for n ∈ {1,2,3,…,N-1,N}

where pr – parameter of the decimal-to-binary conversion threshold, assumed as 0.9 (90%).

The L_{BIN_RPE} image contains the value '1' in places where pixels in a given column are brighter than 90% of the maximum brightness occurring in that column, and the value '0' elsewhere. The image obtained this way is shown below.

### Fig. 4-5 Sum of L_{BIN_RPE} image with weight 50% and L_{MED} with 50%; a) image with properly detected RPE area and b) image where the RPE area is discontinuous in ranges
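For the Reader working outside the Matlab environment, the per-column conversion described above can be sketched in Python as well (a minimal illustration; the function name `binarize_rpe` and the tiny test matrix are introduced here only for this sketch and are not part of the original implementation):

```python
def binarize_rpe(L_med, pr=0.9):
    """Per-column thresholding: a pixel becomes 1 when it is brighter
    than pr times the maximum brightness of its column (L_BIN_RPE)."""
    M, N = len(L_med), len(L_med[0])
    col_max = [max(L_med[m][n] for m in range(M)) for n in range(N)]
    return [[1 if L_med[m][n] > pr * col_max[n] else 0 for n in range(N)]
            for m in range(M)]

# tiny synthetic example: the bottom row holds each column's maximum
L = [[0.1, 0.2],
     [0.9, 0.3],
     [1.0, 1.0]]
print(binarize_rpe(L))   # only pixels above 90% of the column maximum survive
```

Note that, as in the Matlab code, the comparison is strict, so a pixel exactly at 90% of the column maximum is not marked.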

In the next stage, the position of the centre of the longest section in each column of the L_{BIN_RPE} image was calculated, obtaining y_{RPE}, i.e.:

where:

n∈ {1,2,3,…,N-1,N}

The obtained course of y_{RPE} and the source code are shown below:

x=(1:size(Lmed,2))';
yyy=(1:size(Lmed,1))';
yrpe=[];
Lbinrpe=zeros(size(Lmed));
for ik=1:size(Lmed,2)
    xx_best=[];
    Llabp=bwlabel(Lmed(:,ik)>(max(Lmed(:,ik))*0.9));
    Lbinrpe(:,ik)=Llabp;
    for tt=1:max(Llabp)
        xxl=yyy(Llabp==tt);
        xx_best=[xx_best; mean(xxl)];
    end
    if ~isempty(xx_best)
        yrpe(ik)=max(xx_best);
    else
        yrpe(ik)=0;
    end
end
figure; imshow(mat2gray(Lbinrpe*0.5+Lmed)); hold on; plot(yrpe,'r*-')

### Fig. 4-6 Sum of L_{BIN_RPE} image with weight 50% and L_{MED} with 50%, with the marked course of y_{RPE}

### Fig. 4-7 Sum of L_{BIN_RPE} image with weight 50% and L_{MED} with 50%, with the marked course of y_{RPES}

The course of the y_{RPE} function is further analysed for clusters using the k-means method, obtaining y_{RPES}^{(k)} for each cluster k. Then, for each pair y_{RPES}^{(k1)} and y_{RPES}^{(k2)} with k_{1}≠k_{2}, the course y_{RPES}^{(k1,k2)} is approximated by a 3^{rd} order polynomial. All the polynomial functions y_{RPES}^{(k1,k2)} determined for all possible cluster pairs (k_{1}, k_{2}) are shown in Fig. 4-8, and the appropriate part of the algorithm is given below:

yg=gradient(yrpe);
ygg=ones([1 length(yrpe)]);
ygg(abs(yg)>20)=0;
ygl=bwlabel(ygg);
figure; imshow(mat2gray(Lbinrpe*0.5+Lmed)); hold on;
palett=jet(max(ygl(:)));
for iiih=1:max(ygl(:))
    plot(x(ygl==iiih),yrpe(ygl==iiih),'Color',palett(iiih,:),'LineWidth',4);
end
pam_dl=[];
figure; imshow(mat2gray(Lbinrpe*0.5+Lmed)); hold on
for iiik=1:max(ygl(:))
    for iiikk=iiik:max(ygl(:))
        if iiik<=iiikk
            ygk=[yrpe(ygl==iiik),yrpe(ygl==iiikk)];
            xgk=[x(ygl==iiik); x(ygl==iiikk)];
        else
            ygk=[yrpe(ygl==iiikk),yrpe(ygl==iiik)];
            xgk=[x(ygl==iiikk); x(ygl==iiik)];
        end
        if length(ygk)>10
            P=polyfit(xgk',ygk,2);
            yrpes=round(polyval(P,x));
            plot(yrpes,'g*-')
            pam_dl=[pam_dl;[iiik iiikk sum(abs(yrpe-yrpes')<20)]];
        end
    end
end

### Fig. 4-9 Enlarged fragment of the image from Fig. 4-8

The number of points of y_{RPE} falling within the range of ±15 pixels (i.e. pr_{1}=15 and pr_{2}=15) of y_{RPES}^{(k1,k2)} is determined for each function.

Then the pair (k_{1},k_{2}) is determined for which this number of points achieves its maximum; the selected y_{RPES}^{(k1*,k2*)} is later on simply called the y_{RPEC} function. The implementation of the algorithm fragment described above is provided below:

pam_s=sortrows(pam_dl,-3);
if size(pam_s,1)==1
    ygk=[yrpe(ygl==pam_s(1,1))];
    xgk=[x(ygl==pam_s(1,1))];
else
    ygk=[yrpe(ygl==pam_s(1,1)),yrpe(ygl==pam_s(1,2))];
    xgk=[x(ygl==pam_s(1,1)); x(ygl==pam_s(1,2))];
end
P=polyfit(xgk',ygk,2);
yrpes=round(polyval(P,x));
plot(x,yrpes,'w*-');
yrpe=yrpe(:);
plot(x,yrpe,'m*-');

In further considerations, those points of y_{RPE} are also important which fall within a predetermined tolerance of y_{RPES}^{(k1*,k2*)}, i.e.:

dx=x;
dx(abs(yrpe-yrpes)>20)=[];
yrpe(abs(yrpe-yrpes)>20)=[];
dxl=bwlabel(diff(dx)<125);
pdxl=[];
for qw=1:max(dxl)
    pdxl=[pdxl;[qw,sum(dxl==qw)]];
end
pdxl(pdxl(:,2)<50,:)=[];
dxx=[]; dyy=[];
for wq=1:size(pdxl,1)
    dxx=[dxx; dx(dxl==pdxl(wq,1))];
    dyy=[dyy; yrpe(dxl==pdxl(wq,1))];
end
dx=dxx; yrpe=dyy;
plot(dx,yrpe,'c*-');
figure; imshow(Lgray); hold on
plot(dx,yrpe,'c*-');

The results obtained are presented in the following figures (Fig. 4-10, Fig. 4-11).

The y_{RPEC} values will further, in the next section, provide the basis to determine IS and ONL boundaries.

## 4.3. Detection of IS, ONL Boundaries

The boundaries of IS and ONL were determined on the basis of the y_{RPEC} course. In both cases the algorithms were very similar and for the most part applied the modified active contour method [29], [41]. This method was used to analyse the anterior eye segment in the first part of this monograph, where the function intended for its operation was denoted OCT_activ_cont. This operation could also be performed (with similar results) using other methods, e.g. convolution with the mask h presented below (Fig. 4-12), or filtration with a median filter followed by calculating differences between pixels situated on the oy axis, distant from each other by the number of mask rows.
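The convolution-based alternative mentioned above can be illustrated with a short Python sketch (a minimal illustration under the assumption that the mask, like h in Fig. 4-12, consists of rows of +1 over rows of -1; the function `edge_response` and its `half` parameter are hypothetical, introduced only here):

```python
def edge_response(img, half=2):
    """Vertical edge detector: for each pixel, the difference between the
    mean of `half` pixels above and `half` pixels below, per column.
    Equivalent to convolving each column with [+1 ... +1, -1 ... -1]/half."""
    M, N = len(img), len(img[0])
    out = [[0.0] * N for _ in range(M)]
    for n in range(N):
        for m in range(half, M - half):
            above = sum(img[m - k - 1][n] for k in range(half)) / half
            below = sum(img[m + k][n] for k in range(half)) / half
            out[m][n] = above - below
    return out

# a step edge: bright on top, dark below -> strong positive response at the step
img = [[1.0] * 3] * 4 + [[0.0] * 3] * 4
resp = edge_response(img)
print(resp[4][0])   # response at the first dark row
```

A bright-to-dark transition along the oy axis thus produces a positive response; reversing the mask polarity would detect dark-to-bright boundaries instead.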

The selectivity of the operation, in the sense of the accuracy with which individual layers are distinguished, depends on the selection of the parameters p_{yu} and p_{yd}. This is illustrated in Fig. 4-13, where p_{yu} and p_{yd} were varied between 2 and 20 for an artificial image created as follows:

L1=rand([201 200]);
xx=-1:0.01:1;
y=gauss(xx+0.5,0.2)+0.5*gauss(xx-0.1,0.05);
Ly=y'*ones([1 200]);
Ly=mat2gray(Ly);
Lw1=L1.*Ly;
L1=rand([201 200]);
y=gauss(xx,0.2)+0.5*gauss(xx-0.4,0.05);
Ly=y'*ones([1 200]);
Ly=mat2gray(Ly);
Lw2=L1.*Ly;
Lw=[Lw1,Lw2];
Lw(:,300:350)=Lw(:,300:350)*.5;
Lw(:,50:100)=Lw(:,50:100)*.2;
Lw=imrotate(Lw,5,'crop');
figure; imshow(Lw)

where the
`gauss` function has the following form:

function y = gauss(x,std)
y = exp(-x.^2/(2*std^2))/(std*sqrt(2*pi));

The change of the values of parameters p_{yu} and p_{yd} affects the selectivity of the algorithm. The remaining parameters, such as p_{u} or p_{d}, determine the search range along the vertical axis. Parameters p_{xl} and p_{xp} define the range on the ox axis from which the values Lu and Ld are calculated. They directly influence the behaviour of the algorithm in places where shadows occur. Fig. 4-14 shows the influence of the settings of parameters p_{xl} and p_{xp} on the results obtained.

Images have been obtained at the following implementation in Matlab:

x=1:size(Lw,2);
y=round([ones([1 size(Lw,2)/2])*size(Lw,1)/3 ones([1 size(Lw,2)/2])*size(Lw,1)/2]);
map=jet(70);
for pyud=1:4:70
    pud=50;
    pxud=2;
    pxlp=1;
    polaryzacja=-1;
    [yy,i]=OCT_activ_cont(Lw,x,y+20,pud,pyud,pxud,pxlp,polaryzacja);
    hold on
    plot(x,yy,'Color',map(pyud,:),'LineWidth',3)
    pause(0.001)
end

As can be seen from Fig. 4-14 and Fig. 4-15, small values of p_{xl} and p_{xp}, in the range of about 1÷10, result in large changes in the positions of consecutive values of the y_{IO} course on the oy axis. Values of p_{xl} and p_{xp} in the range of about 10÷70 'stabilise' the y_{IO} course, owing to which it becomes less sensitive to sudden changes of brightness (e.g. shadows) on the image.

The influence of parameters p_{xl}, p_{xp} and p_{yu}, p_{yd} can best be followed on the graph of the error δ_{IS}(p_{xl}=p_{xp}, p_{yu}=p_{yd}) defined as:

$${\delta }_{\text{IS}}=\frac{100\%}{\text{N}}\sum _{\text{n}=1}^{\text{N}}\frac{\left|{\text{y}}_{\text{IS}}(\text{n})-{\text{y}}_{\text{ISW}}(\text{n})\right|}{{\text{y}}_{\text{ISW}}(\text{n})}$$

where y_{ISW} – a model course of y_{IS}.
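The δ_{IS} error is simply the mean relative deviation of the detected course from the model course, expressed in percent, and can be computed directly; a minimal Python sketch (the flat model course of 119 pixels mirrors the value used in the Matlab test code later in this section; the function name is our own):

```python
def delta_error(y_is, y_isw):
    """Mean relative deviation of the detected course y_IS from the
    model course y_ISW, expressed in percent."""
    n = len(y_is)
    return sum(abs(w - y) / w for y, w in zip(y_is, y_isw)) / n * 100.0

y_model = [119.0] * 4                      # flat model course, for illustration
y_detected = [119.0, 119.0, 130.0, 108.0]  # two samples deviate by 11 pixels
print(round(delta_error(y_detected, y_model), 2))   # -> 4.62
```

By construction, large deviations of a few isolated points are averaged over all N columns, which is why the error is relatively insensitive to single outliers.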

In accordance with the graph presented in Fig. 4-16, the parameter p_{xl}=p_{xp} for p_{xud}=∞ has the largest influence on the value of the δ_{IS} error. Because of the two characteristic areas visible on the L_{GRAY} image (Fig. 4-16), the error course has a local maximum for p_{xl}=p_{xp}≅40. The course of the δ_{IS} error value for p_{xud}=1 (Fig. 4-17) is similar; the parameter p_{xud} had no significant impact on its value.

The graphs discussed were generated using the function:

L1=rand([201 200]);
xx=-1:0.01:1;
y=gauss(xx+0.5,0.2)+0.5*gauss(xx-0.1,0.05);
Ly=y'*ones([1 200]);
Ly=mat2gray(Ly);
Lw1=L1.*Ly;
Lw=Lw1;
Lw(:,50:100)=Lw(:,50:100)*.2;
figure; imshow(Lw)
x=1:size(Lw,2);
y=round([ones([1 size(Lw,2)/2])*size(Lw,1)/3 ones([1 size(Lw,2)/2])*size(Lw,1)/2]);
map=jet(70);
hold on
plot(x,y,'r','LineWidth',3)
d3_wy=[];
pud=50;
pxud=1;
polaryzacja=-1;
jj=1;
for pxlp=2:1:20
    ii=1;
    for pyud=1:2:70
        [yy,i]=OCT_activ_cont(Lw,x,y+20,pud,pyud,pxud,pxlp,polaryzacja);
        d3_wy(ii,jj)=sum(abs(119-yy)./119)/length(yy)*100;
        ii=ii+1;
        [ii,jj]
    end
    jj=jj+1;
end
[XX,YY]=meshgrid(2:1:20,1:2:70);
figure; mesh(XX,YY,d3_wy);
ylabel('p_{xl}=p_{xp}','FontSize',20)
xlabel('p_{yu}=p_{yd}','FontSize',20)
zlabel('\delta_{IS} [%]','FontSize',20)
colormap([0 0 0])
set(gca,'FontSize',15)

The sensitivity to Gaussian noise, which may appear on the image, is an entirely different feature of the algorithm discussed. To evaluate the quality of the proposed algorithm, Gaussian noise of variance σ varied between 0 and 0.9 was added to the L_{GRAY} image.

The graphs in Fig. 4-18 and Fig. 4-19 show changes of the δ_{IS} error for parameters p_{xl}=p_{xp} varied within the range 1÷70 and variance σ within the range 0÷0.9, for p_{xud}=2 pixels and p_{xud}=∞. For both graphs, when σ changes within the range 0÷0.3 and p_{xl}=p_{xp} within 50÷70 pixels, the δ_{IS} error does not exceed 5%. The dependence of the δ_{IS} error on p_{xud} is insignificant, mainly owing to its definition, in which large changes of isolated points of the y_{IS} course have no significant impact on the error. The nature of the changes of the δ_{IS} error shown in Fig. 4-16 – Fig. 4-19 with respect to changes of p_{xl}=p_{xp} within their full range depends mainly on the nature and arrangement of objects in the scene and therefore will not be discussed here. The form of the algorithm used to generate the above results is similar to the previous case.

## 4.4. Detection of NFL Boundary

The NFL boundary position was determined in two stages, of which the second – the correction of the positions of individual points – is the more complicated and the more laborious to analyse.

**The first stage** comprises a decimal-to-binary conversion of each column of the L_{MED} image according to the previously mentioned relationship, with the parameter pr assumed arbitrarily at around 0.1 (10%). Then, for each column n of the L_{BIN_NFL} image, the position of the first pixel of each cluster k_{n} of value '1' is calculated. Assuming further that each column n has K_{n} clusters, it is possible to write:

where:

and L_{ET_N} – the image formed as a result of labelling each cluster independently in each column of the L_{BIN_NFL} image, for k_{n}∈{1,2,3,…,K_{n}-1,K_{n}}.

Fig. 4-20 and Fig. 4-22 show L_{ET_N} images for artificial input image L_{MED} without the added noise (Fig. 4-21) and with added Gaussian noise of variance σ=0.2 (Fig. 4-23).

The image from Fig. 4-23 originated at the following implementation:

L1=rand([201 200]);
xx=-1:0.01:1;
y=gauss(xx+0.5,0.2)+0.5*gauss(xx-0.1,0.05);
Lmed=y'*ones([1 200]);
Lmed=mat2gray(Lmed);
Lmed(:,50:100)=Lmed(:,50:100)*.2;
Lmed=imnoise(Lmed,'gaussian',0.02);
Lmed=medfilt2(Lmed,[3 3]);
figure; imshow(Lmed); hold on
xyinfy=[]; xyinfdl=[];
for ik=1:size(Lmed,2)
    grL1=Lmed(:,ik)>(max(Lmed(:,ik))*0.1);
    lgrL1=bwlabel(grL1);
    for jju=1:max(lgrL1)
        xyinfdl(jju,ik)=sum(lgrL1==jju);
        cuu=1:length(lgrL1);
        cuu(lgrL1~=jju)=[];
        xyinfy(jju,ik)=cuu(1);
        plot(ik,cuu(1),'b*')
    end
end

As shown in Fig. 4-20 – Fig. 4-23, relationships (18) and (19) are very sensitive to noise and to small artefacts on the image, which give rise to additional erroneous points y_{NFL_P}. In practice, however, this problem is not too troublesome, because even with a proper distribution of the y_{NFL_P} points the determination of the NFL line is not an unambiguous and simple process, as illustrated by Fig. 4-24.

This figure was obtained from the following algorithm.

[Lgray,map]=imread('D:\OCT\FOLDERS\2.OCT\SKAN7.bmp');
Lgray=Lgray(1:850,:);
Lgray=ind2gray(Lgray,map);
Lgray=double(Lgray)/255;
Lorg=Lgray;
Lmed=medfilt2(Lorg,[5 5]);
Lmed=mat2gray(Lmed);
figure; imshow(Lmed)
grad_y_punkt=30;
figure; imshow(Lmed); hold on
[xNFL,yNFL,xyinfdl,xyinfy,ggtxnn,ggtynn,ggdlnn,xyinfdl_old,xyinfy_old]=OCT_NFL_line(Lmed,grad_y_punkt);
plot(xNFL,yNFL,'r','LineWidth',2)

where function
`OCT_NFL_line` is intended to analyse the course of NFL line and is described below.

**The second stage** of the NFL line determination is related to the analysis of the y_{NFL_P} points along the ox axis. For consecutive y_{NFL_P} points, the derivative along the ox axis was calculated, and then a cluster analysis was performed, obtaining k_{m} clusters and y_{NFL_D}, where for each k_{m}∈{1,2,3,…,K_{m}-1,K_{m}} the following condition is satisfied:

where p_{rd} is the threshold limiting the maximum value of the derivative for consecutive points on the ox axis. This threshold is directly responsible for the obtained number of clusters and thereby the number of sections analysed in further part of the algorithm.

Clusters containing too few elements (fewer than 20% of the largest cluster) are automatically cut off. The remaining clusters are analysed in terms of their arrangement on the image (coordinate m) and the number of pixels in a given cluster (y_{NFL_H}).

Thus, analysing the position of individual y_{NFL_S} points and the number of pixels in the cluster y_{NFL_H} for which they were determined, it is possible to create weights y_{W} for the analysed clusters (groups of points), i.e.:

where ɛ_{S} and ɛ_{P} are constants arbitrarily selected from the 0-1 range and

In the next stage, the cluster k_{m} with the largest weight, k_{m}*, is selected. It is later used as a start vector for the modified active contour method described earlier. This way the results presented in Fig. 4-26 are obtained.

Fig. 4-26 shows, in turquoise, the points of the cluster k_{m}* that is best with respect to the criterion set, and, in red, the y_{NFL} results obtained with the active contour method.

Taking into account the above analysis the final shape of
`OCT_NFL_line` function was formulated as follows:

function [xNFL,yNFL,xyinfdl,xyinfy,ggtxnn,ggtynn,ggdlnn,xyinfdl_old,xyinfy_old]=OCT_NFL_line(Lmed,grad_y_punkt)
xyinfy=[]; xyinfdl=[];
for ik=1:size(Lmed,2)
    grL1=Lmed(:,ik)>(max(Lmed(:,ik))*0.1);
    lgrL1=bwlabel(grL1);
    for jju=1:max(lgrL1)
        xyinfdl(jju,ik)=sum(lgrL1==jju);
        cuu=1:length(lgrL1);
        cuu(lgrL1~=jju)=[];
        xyinfy(jju,ik)=cuu(1);
        plot(ik,cuu(1),'b*')
    end
end
xyinfdl_old=xyinfdl;
xyinfy_old=xyinfy;
ggtxnn=[]; ggtynn=[]; ggdlnn=[];
while sum(sum(xyinfy(:,1:(end-1))))~=0
    ggtx=[]; ggty=[];
    for hvi=1:(size(xyinfy,2)-1)
        if sum(xyinfy(:,hvi))~=0
            break
        end
    end
    for hv=hvi:(size(xyinfy,2)-1)
        if (min(abs(xyinfy(1,hv)-xyinfy(:,hv+1)))<grad_y_punkt)&(xyinfy(1,hv)~=0)
            vff=1:size(xyinfy,1);
            vff(abs(xyinfy(1,hv)-xyinfy(:,hv+1))>=grad_y_punkt)=[];
            vff=vff(1);
            xypam=xyinfy(1,hv);
            vff__=1:size(xyinfy,1);
            vff__(vff)=[];
            xyinfy(1:end,hv+1)=[xyinfy(vff,hv+1); xyinfy(vff__,hv+1)];
            xyinfdl(1:end,hv+1)=[xyinfdl(vff,hv+1); xyinfdl(vff__,hv+1)];
            xyinfy(1:end,hv)=[xyinfy(2:end,hv); 0];
            xyinfdl(1:end,hv)=[xyinfdl(2:end,hv); 0];
            ggtx=[ggtx,hv];
            ggty=[ggty,xypam];
        else
            xyinfy(1:end,hv)=[xyinfy(2:end,hv); 0];
            xyinfdl(1:end,hv)=[xyinfdl(2:end,hv); 0];
            break
        end
    end
    if length(ggty)>10
        ggtxnn(size(ggtxnn,1)+1,1:length(ggtx))=ggtx;
        ggtynn(size(ggtynn,1)+1,1:length(ggty))=ggty;
        ggdlnn=[ggdlnn;[length(ggty) min(ggty)]];
    end
end
ggdlnn_leng=ggdlnn(:,1);
ggdlnn=[(1:size(ggdlnn,1))',ggdlnn];
ggdlnn(:,2)=ggdlnn(:,2)-min(ggdlnn(:,2));
ggdlnn(:,2)=ggdlnn(:,2)./max(ggdlnn(:,2));
ggdlnn_leng(ggdlnn(:,2)<(0.2),:)=[];
ggdlnn(ggdlnn(:,2)<(0.2),:)=[];
for bniewazne=1:(size(ggdlnn,1).^2)
    if size(ggdlnn,1)>=2
        usun_=zeros([1 size(ggdlnn,1)]);
        nr1=ggdlnn(1,1);
        x11=ggtxnn(nr1,:); y11=ggtynn(nr1,:);
        x11(y11==0)=[]; y11(y11==0)=[];
        for nr_=2:size(ggdlnn,1)
            nr2=ggdlnn(nr_,1);
            x22=ggtxnn(nr2,:); y22=ggtynn(nr2,:);
            x22(y22==0)=[]; y22(y22==0)=[];
            for iy=1:length(x11)
                xbn=1:length(x22);
                xbni=xbn(x22==x11(iy));
                if ~isempty(xbni)
                    if y11(iy)<y22(xbni(1))
                        usun_(nr_)=usun_(nr_)+1;
                    end
                end
            end
        end
        if sum(usun_)~=0
            ggdlnn(usun_>(ggdlnn_leng'*0.2),:)=[];
            ggdlnn_leng(usun_>(ggdlnn_leng'*0.2))=[];
        end
        ggdlnn=[ggdlnn(2:end,:); ggdlnn(1,:)];
        ggdlnn_leng=[ggdlnn_leng(2:end); ggdlnn_leng(1,:)];
    end
end
ggdlnn_s=sortrows(ggdlnn,-2);
if size(ggdlnn_s,1)==2
    xNFL1=ggtxnn(ggdlnn_s(1,1),:); yNFL1=ggtynn(ggdlnn_s(1,1),:);
    xNFL2=ggtxnn(ggdlnn_s(2,1),:); yNFL2=ggtynn(ggdlnn_s(2,1),:);
    xNFL1(xNFL1==0)=[]; yNFL1(yNFL1==0)=[];
    xNFL2(xNFL2==0)=[]; yNFL2(yNFL2==0)=[];
    yNFL1_poczg=yNFL1(1)+std(yNFL1);
    yNFL1_poczd=yNFL1(1)-std(yNFL1);
    yNFL2_poczg=yNFL2(1)+std(yNFL2);
    yNFL2_poczd=yNFL2(1)-std(yNFL2);
    if min(xNFL1)<min(xNFL2)
        if (abs(yNFL1(end)-yNFL2_poczd)<std(yNFL1))|(abs(yNFL1(end)-yNFL2_poczg)<std(yNFL1))
            xNFL=[xNFL1 xNFL2];
        else
            if length(yNFL1)>length(yNFL2)
                xNFL=[xNFL1];
            else
                xNFL=[xNFL2];
            end
        end
    else
        if (abs(yNFL2(end)-yNFL1_poczd)<std(yNFL2))|(abs(yNFL2(end)-yNFL1_poczg)<std(yNFL2))
            xNFL=[xNFL2 xNFL1];
        else
            if length(yNFL1)>length(yNFL2)
                xNFL=[xNFL1];
            else
                xNFL=[xNFL2];
            end
        end
    end
else
    xNFL=ggtxnn(ggdlnn_s(1,1),:);
    xNFL(xNFL==0)=[];
end
filtr_med=50;
[xNFL,yNFL]=OCT_NFL_line_end(xNFL,xyinfdl_old,xyinfy_old,grad_y_punkt,filtr_med);
przyci_po_obu_x_proc=0.2;
y_dd=abs(diff(yNFL));
y_dd_lab=bwlabel(y_dd<(grad_y_punkt)/2);
num_1=y_dd_lab(round(length(y_dd_lab)*przyci_po_obu_x_proc));
num_end=y_dd_lab(round(length(y_dd_lab)*(1-przyci_po_obu_x_proc)));
x_sek=1:length(y_dd_lab);
x_sek_1=x_sek(y_dd_lab==num_1); x_sek_1=x_sek_1(1);
x_sek_end=x_sek(y_dd_lab==num_end); x_sek_end=x_sek_end(end);
xNFL=xNFL(x_sek_1:x_sek_end);
yNFL=yNFL(x_sek_1:x_sek_end);

and the function
`OCT_NFL_line_end` intended for filtration of the left and right sides of the course:

function [xNFL,yNFL]=OCT_NFL_line_end(xNFL_old,xyinfdl,xyinfy,grad_y_punkt,filtr_med)
x_start=xNFL_old(round(end/2));
xNFL=[]; yNFL=[];
xyinfy(1,:)=medfilt2(xyinfy(1,:),[1 filtr_med]);
for hv=x_start:(size(xyinfy,2)-1)
    if (min(abs(xyinfy(1,hv)-xyinfy(:,hv+1)))<grad_y_punkt)&(xyinfy(1,hv)~=0)
        vff=1:size(xyinfy,1);
        vff(abs(xyinfy(1,hv)-xyinfy(:,hv+1))>=grad_y_punkt)=[];
        vff=vff(1);
        xypam=xyinfy(1,hv);
        vff__=1:size(xyinfy,1);
        vff__(vff)=[];
        xyinfy(1:end,hv+1)=[xyinfy(vff,hv+1); xyinfy(vff__,hv+1)];
        xyinfdl(1:end,hv+1)=[xyinfdl(vff,hv+1); xyinfdl(vff__,hv+1)];
        xyinfy(1:end,hv)=[xyinfy(2:end,hv); 0];
        xyinfdl(1:end,hv)=[xyinfdl(2:end,hv); 0];
        xNFL=[xNFL; hv];
        yNFL=[yNFL; xypam];
    else
        xyinfy(1:end,hv)=[xyinfy(2:end,hv); 0];
        xyinfdl(1:end,hv)=[xyinfdl(2:end,hv); 0];
        break
    end
end
for hv=(x_start-1):-1:2
    if (min(abs(xyinfy(1,hv)-xyinfy(:,hv-1)))<grad_y_punkt)&(xyinfy(1,hv)~=0)
        vff=1:size(xyinfy,1);
        vff(abs(xyinfy(1,hv)-xyinfy(:,hv-1))>=grad_y_punkt)=[];
        vff=vff(1);
        xypam=xyinfy(1,hv);
        vff__=1:size(xyinfy,1);
        vff__(vff)=[];
        xyinfy(1:end,hv-1)=[xyinfy(vff,hv-1); xyinfy(vff__,hv-1)];
        xyinfdl(1:end,hv-1)=[xyinfdl(vff,hv-1); xyinfdl(vff__,hv-1)];
        xyinfy(1:end,hv)=[xyinfy(2:end,hv); 0];
        xyinfdl(1:end,hv)=[xyinfdl(2:end,hv); 0];
        xNFL=[hv; xNFL];
        yNFL=[xypam; yNFL];
    else
        xyinfy(1:end,hv)=[xyinfy(2:end,hv); 0];
        xyinfdl(1:end,hv)=[xyinfdl(2:end,hv); 0];
        break
    end
end
xNFL=round(xNFL);
yNFL=round(yNFL);

Unfortunately, the method described does not provide the expected results in all analysed cases. The situation presented in Fig. 4-27, fortunately seldom occurring in practice, is an example. Such situations occur for actual images with heavy noise, large eye pathologies, or strongly visible shadows. Cases where even an OCT operator would find it difficult to state clearly where an individual layer starts and ends occur fairly seldom in practice.

## 4.5. Correction of Layers Range

The courses y_{IO}, y_{RPE} and y_{NFL} obtained at earlier stages will now be subject to a common analysis to eliminate additional disturbances and improve their quality. The y_{IO}, y_{RPE}, y_{NFL} courses must fulfil the following conditions resulting from the medical premises of the eye structure (the conditions are given in a Cartesian coordinate system):

- y_{RPE} < y_{IO} < y_{NFL} for each x,
- y_{IO} - y_{RPE} ≈ 0.1 mm – being the initial value starting the operation of the modified active contour method,
- y_{NFL} - y_{IO} ≈ from 0 to 1 mm; for some x it may even be that y_{IO} > y_{NFL} and/or y_{RPE} > y_{NFL}.

The implementation of this moderately simple correction of the layers' arrangement is left to the Reader.
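As a hint, the first of the conditions above can be checked, for instance, as follows (a minimal Python sketch; the helper `enforce_order` is hypothetical and uses the Cartesian convention of the list above; flagged positions would then be rejected or re-interpolated):

```python
def enforce_order(y_rpe, y_io, y_nfl):
    """Flag x positions where the medically required ordering
    y_RPE < y_IO < y_NFL (Cartesian coordinates) is violated."""
    valid = []
    for r, i, n in zip(y_rpe, y_io, y_nfl):
        valid.append(r < i < n)
    return valid

y_rpe = [10, 10, 12]
y_io  = [20,  9, 20]   # the middle sample violates y_RPE < y_IO
y_nfl = [30, 30, 15]   # the last sample violates y_IO < y_NFL
print(enforce_order(y_rpe, y_io, y_nfl))   # -> [True, False, False]
```

Note that in image coordinates, where the row index grows downwards, the inequalities reverse.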

## 4.6. Final Form of Algorithm

Based on the considerations carried out in the previous sections, the final form of the algorithm was formulated as follows:

[Lgray,map]=imread('D:\OCT\FOLDERS\2.OCT\SKAN7.bmp');
Lgray=Lgray(1:850,:);
Lgray=ind2gray(Lgray,map);
Lgray=double(Lgray)/255;
Lorg=Lgray;
Lmed=medfilt2(Lorg,[5 5]);
Lmed=mat2gray(Lmed);
[xRPE,yRPE,xRPEz,yRPEz]=OCT_global_line(Lmed);
grad_y_punkt=30;
[xNFL,yNFL,xyinfdl,xyinfy,ggtxnn,ggtynn,ggdlnn,xyinfdl_old,xyinfy_old]=OCT_NFL_line(Lmed,grad_y_punkt);
z_gd1=60; z_gd2=60;
z_sr1=16; z_sr2=16;
z_kat1=12; z_kat2=12;
z_us_xy1=12; z_us_xy2=12;
[yRPEd,ygRPEd]=OCT_activ_cont(Lmed,xRPE,yRPE+50,z_gd1,z_sr1,z_kat1,z_us_xy1,-1);
[yONL,ygONL]=OCT_activ_cont(Lmed,xRPE,yRPE-50,z_gd2,z_sr2,z_kat2,z_us_xy2,1);
figure; imshow(Lmed); hold on
plot(xRPE,yRPE,'-r*','LineWidth',2);
plot(xRPEz,yRPEz,'-g*','LineWidth',2);
plot(xNFL,yNFL,'b','LineWidth',2)
plot(xRPE,yONL,'y','LineWidth',2)
plot(xRPE,yRPEd,'m','LineWidth',2)

Consequently, the following results were obtained - Fig. 4-28 and Fig. 4-29.

In the source code presented, the previously described function
`OCT_activ_cont` has been used, together with
`OCT_global_line`, which has the following form:

function [x,yrpes,dxx,dyy]=OCT_global_line(Lmed)
x=(1:size(Lmed,2))';
yyy=(1:size(Lmed,1))';
yrpe=[];
Lbinrpe=zeros(size(Lmed));
for ik=1:size(Lmed,2)
    xx_best=[];
    Llabp=bwlabel(Lmed(:,ik)>(max(Lmed(:,ik))*0.9));
    Lbinrpe(:,ik)=Llabp;
    for tt=1:max(Llabp)
        xxl=yyy(Llabp==tt);
        xx_best=[xx_best; mean(xxl)];
    end
    if ~isempty(xx_best)
        yrpe(ik)=max(xx_best);
    else
        yrpe(ik)=0;
    end
end
figure; imshow(mat2gray(Lbinrpe*0.5+Lmed)); hold on; plot(yrpe,'r*-')
yg=gradient(yrpe);
ygg=ones([1 length(yrpe)]);
ygg(abs(yg)>20)=0;
ygl=bwlabel(ygg);
figure; imshow(mat2gray(Lbinrpe*0.5+Lmed)); hold on;
palett=jet(max(ygl(:)));
for iiih=1:max(ygl(:))
    plot(x(ygl==iiih),yrpe(ygl==iiih),'Color',palett(iiih,:),'LineWidth',4);
end
pam_dl=[];
figure; imshow(mat2gray(Lbinrpe*0.5+Lmed)); hold on
for iiik=1:max(ygl(:))
    for iiikk=iiik:max(ygl(:))
        if iiik<=iiikk
            ygk=[yrpe(ygl==iiik),yrpe(ygl==iiikk)];
            xgk=[x(ygl==iiik); x(ygl==iiikk)];
        else
            ygk=[yrpe(ygl==iiikk),yrpe(ygl==iiik)];
            xgk=[x(ygl==iiikk); x(ygl==iiik)];
        end
        if length(ygk)>10
            P=polyfit(xgk',ygk,2);
            yrpes=round(polyval(P,x));
            plot(yrpes,'g*-')
            pam_dl=[pam_dl;[iiik iiikk sum(abs(yrpe-yrpes')<20)]];
        end
    end
end
pam_s=sortrows(pam_dl,-3);
if size(pam_s,1)==1
    ygk=[yrpe(ygl==pam_s(1,1))];
    xgk=[x(ygl==pam_s(1,1))];
else
    ygk=[yrpe(ygl==pam_s(1,1)),yrpe(ygl==pam_s(1,2))];
    xgk=[x(ygl==pam_s(1,1)); x(ygl==pam_s(1,2))];
end
P=polyfit(xgk',ygk,2);
yrpes=round(polyval(P,x));
plot(x,yrpes,'w*-');
yrpe=yrpe(:);
plot(x,yrpe,'m*-');
dx=x;
dx(abs(yrpe-yrpes)>20)=[];
yrpe(abs(yrpe-yrpes)>20)=[];
dxl=bwlabel(diff(dx)<125);
pdxl=[];
for qw=1:max(dxl)
    pdxl=[pdxl;[qw,sum(dxl==qw)]];
end
pdxl(pdxl(:,2)<50,:)=[];
dxx=[]; dyy=[];
for wq=1:size(pdxl,1)
    dxx=[dxx; dx(dxl==pdxl(wq,1))];
    dyy=[dyy; yrpe(dxl==pdxl(wq,1))];
end

The result presented is affected mainly by the arguments of the
`OCT_activ_cont` function, which, in accordance with the description quoted, determine the type of layer recognised.

The algorithm presented forms a uniform whole related to the analysis of layers within the fundus of the eye on flat OCT images. The results obtained may be enhanced by an automated analysis of 'holes' on the image, presented below.

## 4.7. Determination of ‘Holes’ on the Image

To determine holes on the image, a labelling method of the binary image L_{BIN_IP} was applied (11), obtaining the image L_{ET} shown in Fig. 4-30.

Examples of the results obtained are shown in Fig. 4-31. Each object (cluster) k_{o} received a label, and the coordinates (m_{o}, n_{o}) of its centre of gravity were determined. In addition, its surface area P_{o} is calculated. The source code is provided below:

[Lgray,map]=imread('D:\OCT\FOLDERS\2.OCT\SKAN7.bmp');
Lgray=Lgray(1:850,:);
Lgray=ind2gray(Lgray,map);
Lgray=double(Lgray)/255;
Lorg=Lgray;
Lmed=medfilt2(Lorg,[5 5]);
Lmed=mat2gray(Lmed);
[xRPE,yRPE,xRPEz,yRPEz]=OCT_global_line(Lmed);
L11=filter2(ones(3),Lmed)/(3*3);
L12=imregionalmin(L11);
L13=~imopen(L12,ones(9));
[Lbin,L18]=OCT_areaa(L13,xRPE,yRPE);
Let=bwlabel(Lbin);
Let_=Let;
Let_(edge(double(L13))==1)=max(Let(:))+1;
figure; imshow(Let_,[]);
pall=jet(max(Let(:)));
colormap([pall; [1 1 1]]);
colorbar; hold on
[XX,YY]=meshgrid(1:size(Let,2),1:size(Let,1));
kmnp=[];
for ju=2:max(Let(:))
    Let4=Let==ju;
    Letx=Let4.*XX; Letx(Letx==0)=[];
    Lety=Let4.*YY; Lety(Lety==0)=[];
    text(median(Letx),median(Lety),mat2str(sum(Let4(:))),'FontSize',15,'Color',[1 1 1])
    kmnp=[kmnp; [ju median(Letx), median(Lety), sum(Let4(:))]];
end
kmnp

For diagnostic reasons, the analysed clusters (shown in Fig. 4-30) were narrowed down to those whose centre of gravity falls within the range between y_{RPE} and y_{NFL}.
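This narrowing can be sketched as follows (a Python illustration; the cluster tuple format and the helper `holes_in_retina` are our own assumptions, using image row coordinates, in which the NFL course lies above the RPE course):

```python
def holes_in_retina(clusters, y_rpe, y_nfl):
    """Keep only clusters whose centre of gravity (m_o, n_o) lies between
    the y_NFL and y_RPE courses at its column n_o (image coordinates,
    so NFL has the smaller row index)."""
    kept = []
    for label, m_o, n_o, area in clusters:
        if y_nfl[n_o] <= m_o <= y_rpe[n_o]:
            kept.append(label)
    return kept

# three hypothetical clusters: (label, centroid row, centroid column, area P_o)
clusters = [(1, 50, 0, 120), (2, 10, 1, 40), (3, 80, 2, 15)]
y_nfl = [30, 30, 30]    # NFL course (upper boundary)
y_rpe = [90, 90, 60]    # RPE course (lower boundary)
print(holes_in_retina(clusters, y_rpe, y_nfl))   # -> [1]
```

Only the first cluster lies between the two courses; the second sits above the NFL and the third below the RPE in its column.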

## 4.8. Assessment of Results Obtained Using the Algorithm Proposed

An example implementation of an algorithm intended for the analysis of layers occurring on an OCT image has been presented. This methodology has been applied to the analysis of around 500 cases, in which verification showed erroneously determined layers for 5% of the images. Examples of properly and improperly recognised layers are shown in Fig. 4-32 and Fig. 4-33.

The proposed algorithm was implemented in the Matlab environment and operates at a rate of one image per 15 s on a P4 CPU 3 GHz processor with 2 GB RAM. Additionally, an application in the C language was developed which, after time optimisation, analyses the same image on the same computer within 0.85 s.

The Reader implementing the above functions should note the delays introduced by the graphics card during image display. This concerns in particular the resultant images, for which results were presented in the form of graphs or points superimposed on a flat grey-level image.

## 4.9. Layers Recognition on a Tomographic Eye Image Based on Random Contour Analysis

### 4.9.1. Determination of Direction Field Image

As in [25] and [40], the input image L_{GRAY} is initially subject to filtration using a median filter with a mask h of size (M_{h}×N_{h}) = 3×3. The obtained image L_{M} is subject to the analysis presented in the next sections.

The first stage of the edge detection method used [14], [35], [41] consists of making a convolution of input image L_{M} of M_{M}×N_{M} resolution, i.e.

with Gaussian filter masks, e.g. of size 3×3 [14], [35], [41]. On this basis, the gradient matrix in both directions, necessary to determine the edges, has been determined in accordance with the classical dependence:

and in particular its normalised form, i.e. L_{G} = |L_{GX}| + |L_{GY}| rescaled to the range (0,1) (mat2gray in the source code below).

The image of the direction field L_{α} has been determined for each pair of pixels L_{GX}(m,n) and L_{GY}(m,n), and in general for the images L_{GX} and L_{GY}, i.e. L_{α}(m,n) = atan2(L_{GY}(m,n), L_{GX}(m,n)), expressed in degrees.

The implementation of the above relationships in Matlab looks as follows:

```matlab
Lm=zeros(100); Lm(10:30,10:20)=1; Lm(40:80,50:70)=1;
Lm=imnoise(Lm,'gaussian',0.2);
Lm=medfilt2(Lm,[3 3]);
Lm=mat2gray(Lm);
figure; imshow(Lm,'notruesize')
Nx1=5; Sigmax1=24; Nx2=5; Sigmax2=24; Theta1=pi/2;
Ny1=5; Sigmay1=24; Ny2=5; Sigmay2=24; Theta2=0;
alfa=0.15;
hx=OCT_NOISE_gauss(Nx1,Sigmax1,Nx2,Sigmax2,Theta1);
Lgx=conv2(Lm,hx,'same');
hy=OCT_NOISE_gauss(Ny1,Sigmay1,Ny2,Sigmay2,Theta2);
Lgy=conv2(Lm,hy,'same');
Lalp=atan2(Lgy,Lgx);
Lalp=Lalp*180/pi;
Lg=mat2gray(abs(Lgx)+abs(Lgy));
figure; imshow(Lg,[],'notruesize'); colormap('jet'); colorbar
figure; imshow(Lalp,[],'notruesize'); colormap('jet'); colorbar
```

where
`OCT_NOISE_gauss`

```matlab
function h = OCT_NOISE_gauss(n1,sigma1,n2,sigma2,theta)
r=[cos(theta) -sin(theta); sin(theta) cos(theta)];
for i = 1:n2
    for j = 1:n1
        u = r * [j-(n1+1)/2 i-(n2+1)/2]';
        h(i,j) = gauss(u(1),sigma1)*OCT_gauss(u(2),sigma2);
    end
end
h = h / sqrt(sum(sum(abs(h).*abs(h))));

function y = OCT_gauss(x, std)
y = -x * gauss(x, std)/std^2;

function y = gauss(x, std)
y = exp(-x.^2/(2*std^2))/(std*sqrt(2*pi));
```

As a result the images presented below are obtained.

### Fig. 4-34Artificial Lm image

### Fig. 4-35Artificial L_{G} image

### Fig. 4-36Artificial Lα image

Those images, L_{α} and L_{G}, are used in the further analysis, whose next step is the random selection of starting points.
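For Readers who want to experiment outside Matlab, the oriented Gaussian derivative mask generated by `OCT_NOISE_gauss` can be sketched in Python as follows. This is an illustrative re-implementation under names of our own choosing, not the author's code:

```python
import numpy as np

def gauss(x, std):
    # Gaussian function
    return np.exp(-x**2 / (2 * std**2)) / (std * np.sqrt(2 * np.pi))

def dgauss(x, std):
    # first derivative of the Gaussian along x
    return -x * gauss(x, std) / std**2

def oct_noise_gauss(n1, sigma1, n2, sigma2, theta):
    # rotated derivative-of-Gaussian mask, normalised to unit energy,
    # mirroring the structure of the Matlab OCT_NOISE_gauss function
    h = np.zeros((n2, n1))
    r = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    for i in range(1, n2 + 1):
        for j in range(1, n1 + 1):
            u = r @ np.array([j - (n1 + 1) / 2, i - (n2 + 1) / 2])
            h[i - 1, j - 1] = gauss(u[0], sigma1) * dgauss(u[1], sigma2)
    return h / np.sqrt(np.sum(np.abs(h)**2))
```

Convolving an image with two such masks (theta = π/2 and theta = 0, as in the listing above) yields the L_{GX} and L_{GY} gradient images.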

### 4.9.2. Starting Points Random Selection and Correction

Starting points, and – based on them – subsequent ones, will be used in consecutive stages of the algorithm to determine parts of layers contours. The initial positions of starting points were determined at random. Random values from the uniform range (0,1) were obtained for each point of an image matrix L_{o} of the same resolution as L_{M}, i.e. M×N. For the (random) image L_{o} created this way, a decimal to binary conversion is carried out with threshold p_{r}, which is the first of the matched (described later) parameters of the algorithm; the obtained binary matrix L_{u} is described by the relationship L_{u}(m,n) = 1 if L_{o}(m,n) ≤ p_{r}·L_{G}(m,n), and 0 otherwise.

In this case:

```matlab
figure; imshow(Lg,[],'notruesize'); hold on
pr=0.3;
Lrand=rand(size(Lg));
[n,m]=meshgrid(1:size(Lrand,2),1:size(Lrand,1));
n(Lrand>(Lg*pr))=[];
m(Lrand>(Lg*pr))=[];
plot(n,m,'r.');
```

The result obtained is presented in the following figure (Fig. 4-37).

Starting points o*_{i,j} (where index ‘i’ marks the consecutive starting point, while ‘j’ the subsequent points created on its basis) satisfy the condition L_{u}(m,n)=1 – that is, starting points are o*_{i,1}. Thus the selection of the threshold value p_{r} within the range (0,1) influences the number of starting points, which is the larger, the brighter the grey level (contour) in the L_{G} image. In the next stage the starting points’ positions are modified within a set area H of M_{H}×N_{H} size. The modification consists in correcting the position of points o*_{i,1} with coordinates (m*_{i,1}, n*_{i,1}) to new coordinates (m_{i,1}, n_{i,1}), where shifts within the range m_{i,1}= m*_{i,1}±(M_{H})/2 and n_{i,1}= n*_{i,1}±(N_{H})/2 are possible. The coordinates are changed to the position within ±(M_{H})/2 and ±(N_{H})/2 at which L_{G}(m*_{i,1}±(M_{H})/2, n*_{i,1}±(N_{H})/2) achieves its highest value, i.e.:

Then the correction of repeating points is carried out – points of the same coordinates are removed. The source code looks here as follows:

H=ones(5); [n,m]=OCT_NOISE_area(n,m,Lg,H); plot(n,m,‘g.’); hold on

where
`OCT_NOISE_area`

```matlab
function [n,m]=OCT_NOISE_area(n,m,Lg,H)
xn=[]; yn=[];
[xr,yr]=meshgrid(1:size(H,2),1:size(H,1));
for iw=1:length(n)
    ddx=size(H,2)/2; ddy=size(H,1)/2;
    xp=round(n(iw)-ddx); xk=round(n(iw)+ddx-1);
    yp=round(m(iw)-ddy); yk=round(m(iw)+ddy-1);
    if (xp<1) | (yp<1) | (xk>size(Lg,2)) | (yk>size(Lg,1))
        xn(iw)=n(iw); yn(iw)=m(iw);
    else
        Lff=Lg(yp:yk,xp:xk);
        xr_=xr; yr_=yr;
        xr_(Lff~=max(max(Lff)))=[];
        yr_(Lff~=max(max(Lff)))=[];
        xn(iw)=n(iw)+xr_(1)-ddx;
        yn(iw)=m(iw)+yr_(1)-ddy;
    end
end
n=round(xn); m=round(yn);
n(n<=0)=1; m(m<=0)=1;
n(n>size(Lg,2))=size(Lg,2);
m(m>size(Lg,1))=size(Lg,1);
```

The obtained results are presented in Fig. 4-38.
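The seeding rule (a uniform draw tested against p_{r}·L_{G}) and the window correction performed by `OCT_NOISE_area` can be summarised in a short Python sketch. The function names and the NumPy-based formulation below are ours, for illustration only:

```python
import numpy as np

def seed_points(Lg, pr, rng):
    # keep pixel (m, n) as a starting point when a uniform draw
    # falls below pr * L_G(m, n)  (brighter contours -> more seeds)
    mask = rng.random(Lg.shape) <= pr * Lg
    m, n = np.nonzero(mask)
    return list(zip(m, n))

def correct_point(Lg, m, n, mh=5, nh=5):
    # move (m, n) to the brightest pixel of L_G inside the M_H x N_H window,
    # clipped at the image borders
    y0, y1 = max(0, m - mh // 2), min(Lg.shape[0], m + mh // 2 + 1)
    x0, x1 = max(0, n - nh // 2), min(Lg.shape[1], n + nh // 2 + 1)
    win = Lg[y0:y1, x0:x1]
    dy, dx = np.unravel_index(np.argmax(win), win.shape)
    return y0 + dy, x0 + dx
```

After correction, duplicate coordinates would be removed exactly as in the Matlab listing.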

### 4.9.3. Iterative Determination of Contour Components

To determine layers on an OCT image, contour components – in the sense of contour parts subject to later modification and processing – have been determined in the following way. For each randomly selected point o*_{i,1} with coordinates (m*_{i,1}, n*_{i,1}), subsequently modified (in the sense of its position) to o_{i,1} with coordinates (m_{i,1}, n_{i,1}), an iterative process is carried out consisting in looking for consecutive points o_{i,2}, o_{i,3}, o_{i,4}, o_{i,5}, etc. and in the local modification of their positions (described in the previous section), starting from o_{i,1}, in accordance with the relationship:

A demonstrative illustration of the iterative process is shown in Fig. 4-39.

In the case of described iterative process of contour components determination it is necessary to introduce a number of limitations (next parameters), comprising:

- j_{MAX} – the maximum number of iterations – a limitation aimed at eliminating looping of the algorithm if points o_{i,j} of different positions are determined each time and the contour takes the shape of e.g. a spiral.
- Stopping the iterative process if it is detected that m_{i,j} = m_{i,j+1} and n_{i,j} = n_{i,j+1}. Such a situation happens most often if A_{i,j} is close to or higher than M_{H} or N_{H}. As in the case of starting points random selection and correction, also here it may occur that m_{i,j} = m_{i,j+1} and n_{i,j} = n_{i,j+1} after the correction.

- Stopping the iterative process if m_{i,j} > M_{M} or n_{i,j} > N_{M}, i.e. in cases when the indicated point o_{i,j} falls outside the image.
- Stopping the iterative process if |L_{α}(m_{i,j}, n_{i,j}) − L_{α}(m_{i,j+1}, n_{i,j+1})| > Δα, where Δα is the next set parameter, defining the acceptable contour curvature.
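The stopping criteria listed above can be gathered into a single predicate. The Python sketch below is our own illustrative formulation (function and parameter names are assumptions); note the angle difference is wrapped into (0°, 180°) before the curvature test:

```python
def should_stop(prev, curr, prev_alpha, curr_alpha, shape, j,
                j_max=100, delta_alpha=50.0):
    """Return True when the iterative contour walk must stop."""
    if j > j_max:                 # j_MAX guard against looping (e.g. spirals)
        return True
    if prev == curr:              # point did not move after position correction
        return True
    m, n = curr
    if not (0 <= m < shape[0] and 0 <= n < shape[1]):
        return True               # the point left the image
    d = abs(curr_alpha - prev_alpha)
    if d > 180:                   # wrap the angle difference in degrees
        d = 360 - d
    return d > delta_alpha        # acceptable curvature exceeded
```

The default j_max=100 and delta_alpha=50 mirror the values used in the Matlab listings of this section.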

At this stage consecutive contour components for set parameters are obtained. These parameters comprise:

- p_{r} – the threshold responsible for the number of starting points (29) – changed practically within the range 0-0.1,
- j_{MAX} – the maximum acceptable number of iterations – set arbitrarily at 100,
- M_{H}×N_{H} – the size of the correction area, a square area, changed within the range from M_{H}×N_{H}=5×5 to M_{H}×N_{H}=25×25,
- A_{i,j} – the amplitude, constant for individual i,j, set at A_{i,j}=M_{H},
- Δα – the acceptable maximum change of angle between consecutive contour points, set within the range 10-70°.

For the artificial image presented in Fig. 4-40 an iterative process of contours determination has been performed, assuming p_{r}=0.1, Δα=45°, M_{H×}N_{H}=5×5. The results obtained are presented in Fig. 4-40.

The source code of the iterative process of contour components determination is presented below:

```matlab
Lz=zeros(size(Lalp)); Lz2=zeros(size(Lalp));
A=5; delta_alph=50;
n_1=[]; m_1=[]; al_1=[];
for i=1:length(n)
    ns_=[]; ms_=[]; ks_=[];
    ns_(1)=[n(i)]; ms_(1)=[m(i)];
    ii=1;
    alp_1=Lalp(ms_(ii),ns_(ii));
    al_1(i,1)=[alp_1];
    kat_r=0;
    while kat_r<delta_alph
        alp_1=Lalp(ms_(end),ns_(end));
        n_p1=round(ns_(end)+A*cos((alp_1+90)*pi/180));
        m_p1=round(ms_(end)+A*sin((alp_1+90)*pi/180));
        if (n_p1<1)|(m_p1<1)|(n_p1>size(Lalp,2))|(m_p1>size(Lalp,1))
            break
        end
        [n_pp1,m_pp1]=OCT_NOISE_area(n_p1,m_p1,Lg,H);
        if sum(sum([round(m_pp1)==ms_',round(n_pp1)==ns_'],2)==2)>1
            disp('zabezpiecz')
            break
        end
        if ii>100
            [i, ii]
            break
        end
        ii=ii+1;
        [nss,mss]=OCT_NOISE_line([ns_(end),n_pp1],[ms_(end),m_pp1]);
        ns_=[ns_;round(nss')];
        ms_=[ms_;round(mss')];
        ks_(ii)=alp_1;
        kat_r=abs(alp_1-Lalp(ms_(end),ns_(end)));
        if kat_r>180; kat_r=360-kat_r; end
    end
    n_1(i,1:length(ns_))=ns_;
    m_1(i,1:length(ms_))=ms_;
    al_1(i,1:length(ks_))=ks_;
    for im=1:length(ns_)
        Lz(ms_(im),ns_(im))=Lz(ms_(im),ns_(im))+1;
    end
    plot(ns_,ms_,'g-*','LineWidth',3)
    pause(0.00000001)
end
figure; imshow(Lz,[],'notruesize'); colormap('jet'); colorbar
```

where
`OCT_NOISE_line` is a function intended for the generation of discrete points on the segment connecting the given points, i.e.:

```matlab
function [n_,m_]=OCT_NOISE_line(n,m)
if (abs(n(1)-n(2))==0) & (abs(m(1)-m(2))==0)
    n_=n; m_=m;
else
    if abs(n(1)-n(2))<abs(m(1)-m(2))
        if m(1)<m(2)
            m_=m(1):m(2);
        else
            m_=m(1):-1:m(2);
        end
        if n(1)~=n(2)
            n_=n(1):((n(2)-n(1))/(length(m_)-1)):n(2);
        else
            n_=ones(size(m_))*n(1);
        end
    else
        if n(1)<n(2)
            n_=n(1):n(2);
        else
            n_=n(1):-1:n(2);
        end
        if m(1)~=m(2)
            m_=m(1):((m(2)-m(1))/(length(n_)-1)):m(2);
        else
            m_=ones(size(n_))*m(1);
        end
    end
end
```

When analysing the results presented in Fig. 4-40 it should be noticed that the iterative process is stopped only when m_{i,j} = m_{i,j+1} and n_{i,j} = n_{i,j+1} (as mentioned before), that is only if points o_{i,j} and o_{i,j+1} have the same position. However, this condition does not apply to points o_{i,j} which have the same coordinates but for different ‘i’, i.e. points reached at a specific iteration from various starting points. Easing of this condition leads to the origination of overlapping contour components (Fig. 4-41), which will be analysed in the next sections.

### 4.9.4. Determination of Contours from Their Components

As presented in Fig. 4-41 in the previous section, the iterative process carried out may lead to overlapping of points o_{i,j} with the same coordinates (m_{i,j}, n_{i,j}) originated from various starting points. This property is used for the final determination of the layers contour on an OCT image. In the first stage the image L_{z} from Fig. 4-41 – built by incrementing L_{z}(m_{i,j}, n_{i,j}) by one for each contour point, for consecutive j=1,2,3,… and finally over all components – is subject to decimal to binary conversion:

where L_{ZB} is a binary image originated from the decimal to binary conversion of image L_{z} with threshold p_{b}. The selection of threshold p_{b} is a key element for the further analysis and correction of the generated contour. In a general case a situation may occur where, despite the relatively low value of threshold p_{r} assumed, a selected starting point o_{i,1} is situated outside the object’s edge. Then the next iterations may ‘connect’ it (in consecutive processes (32), (33)) with the remaining part. In such a case the process of removing protruding branches should be carried out – like branch cutting in skeletonisation. Here the situation is a bit easier – there are two possibilities of implementing this process: increasing the threshold value p_{b} or considering the brightness value L_{G}(m_{i,j}, n_{i,j}) – Fig. 4-42.

### 4.9.5. Setting the Threshold of Contour Components Sum Image

The selection of threshold p_{b} used to obtain the image L_{ZB} leads, for high values, to retaining only those contour components for which the largest number of points overlapped for various ‘i’ of the o_{i,j} points (Fig. 4-43, Fig. 4-44). On the other hand, contour discontinuities may then occur. Therefore the second mentioned method of obtaining the final form of the contour, which consists in considering the values L_{G}(m_{i,j}, n_{i,j}) for L_{z}(m_{i,j}, n_{i,j}) = 1 and higher, was selected.

Assuming that two non-overlapping points o_{1,j} and o_{2,j} have been randomly selected, such that m_{1,j} ≠ m_{2,j} or n_{1,j} ≠ n_{2,j}, the values L_{M}(m_{1,j}, n_{1,j}) and L_{M}(m_{2,j}, n_{2,j}) were determined for consecutive j – Fig. 4-45.

Then a maximum value was determined for each sequence of o_{i,j} points:

Then all o_{i,j} points were removed which satisfied the condition o_{i,j}<(O_{m}(i)·p_{j}), where p_{j} is the threshold (precisely, the percentage of O_{m}(i) below which points are removed). To prevent the introduction of discontinuities, only points at the beginning of the contour component are removed. The value was arbitrarily set to p_{j}=0.8. The obtained results are shown in Fig. 4-43 and Fig. 4-46. The example results shown in Fig. 4-46 were obtained for a real OCT image for p_{r}=0.02, Δα=80°, M_{H}×N_{H}=35×35, p_{b}=2, p_{j}=0.8. Correctly determined contour components are visible, as well as other contour fragments which, because of the form of relationship (34) and the limitation for O_{m}(i), have not been removed. On the other hand, the number and form of the available parameters allow considerable freedom in their selection so as to obtain the expected results. The final form of the algorithm was formulated on this basis.
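The trimming rule described above – dropping the dim beginning of each component while keeping its interior intact – can be sketched in Python. This is an illustrative helper under assumed names, mirroring the 0.8·maximum test used in the final Matlab listing:

```python
def trim_component(points, brightness, pj=0.8):
    # Remove points from the *beginning* of a contour component whose L_G
    # brightness is below pj times the component maximum O_m(i); stopping at
    # the first sufficiently bright point avoids creating gaps inside.
    om = max(brightness)
    k = 0
    while k < len(points) and brightness[k] < pj * om:
        k += 1
    return points[k:]
```

In the full algorithm this is applied per component, with `brightness` taken from L_{G} at the component's coordinates.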

```matlab
[Lgray,map]=imread('D:\OCT\FOLDERS\2.OCT\SKAN7.bmp');
Lgray=Lgray(1:850,:);
Lgray=ind2gray(Lgray,map);
Lgray=double(Lgray)/255;
Lorg=Lgray;
L=imresize(Lgray,0.5);
Lm=medfilt2(L,[3 3]);
Lm=mat2gray(Lm);
figure; imshow(Lm)
Nx1=5; Sigmax1=24; Nx2=5; Sigmax2=24; Theta1=pi/2;
Ny1=5; Sigmay1=24; Ny2=5; Sigmay2=24; Theta2=0;
alfa=0.15;
hx=OCT_NOISE_gauss(Nx1,Sigmax1,Nx2,Sigmax2,Theta1);
Lgx=conv2(Lm,hx,'same');
hy=OCT_NOISE_gauss(Ny1,Sigmay1,Ny2,Sigmay2,Theta2);
Lgy=conv2(Lm,hy,'same');
Lalp=atan2(Lgy,Lgx); Lalp=Lalp*180/pi;
Lg=mat2gray(abs(Lgx)+abs(Lgy));
figure; imshow(Lg,[],'notruesize'); colormap('jet'); colorbar
figure; imshow(Lalp,[],'notruesize'); colormap('jet'); colorbar
figure; imshow(Lg,[],'notruesize'); hold on
pr=0.05;
Lrand=rand(size(Lg));
[n,m]=meshgrid(1:size(Lrand,2),1:size(Lrand,1));
n(Lrand>(Lg*pr))=[];
m(Lrand>(Lg*pr))=[];
plot(n,m,'r.');
H=ones(5);
[n,m]=OCT_NOISE_area(n,m,Lg,H);
plot(n,m,'g.'); hold on
Lz=zeros(size(Lalp));
A=5; delta_alph=50;
n_1=[]; m_1=[]; al_1=[];
for i=1:length(n)
    ns_=[]; ms_=[]; ks_=[]; nma_=[];
    ns_(1)=[n(i)]; ms_(1)=[m(i)];
    ii=1;
    alp_1=Lalp(ms_(ii),ns_(ii));
    al_1(i,1)=[alp_1];
    kat_r=0;
    while kat_r<delta_alph
        alp_1=Lalp(ms_(end),ns_(end));
        n_p1=round(ns_(end)+A*cos((alp_1+90)*pi/180));
        m_p1=round(ms_(end)+A*sin((alp_1+90)*pi/180));
        if (n_p1<1)|(m_p1<1)|(n_p1>size(Lalp,2))|(m_p1>size(Lalp,1))
            break
        end
        [n_pp1,m_pp1]=OCT_NOISE_area(n_p1,m_p1,Lg,H);
        if sum(sum([round(m_pp1)==ms_',round(n_pp1)==ns_'],2)==2)>1
            disp('zabezpiecz')
            break
        end
        if ii>100
            [i, ii]
            break
        end
        ii=ii+1;
        [nss,mss]=OCT_NOISE_line([ns_(end),n_pp1],[ms_(end),m_pp1]);
        ns_=[ns_;round(nss')];
        ms_=[ms_;round(mss')];
        ks_(ii)=alp_1;
        kat_r=abs(alp_1-Lalp(ms_(end),ns_(end)));
        if kat_r>180; kat_r=360-kat_r; end
    end
    n_1(i,1:length(ns_))=ns_;
    m_1(i,1:length(ms_))=ms_;
    al_1(i,1:length(ks_))=ks_;
    for im=1:length(ns_)
        Lz(ms_(im),ns_(im))=Lz(ms_(im),ns_(im))+1;
        nma_(im)=Lg(ms_(im),ns_(im));
    end
    ns_s=ns_; ms_s=ms_;
    m_nma_=max(nma_(:));
    for bg=1:length(nma_)
        if nma_(bg)<(m_nma_*0.8)
            ns_s(1)=[]; ms_s(1)=[];
        else
            break
        end
    end
    plot(ns_s,ms_s,'r','LineWidth',3)
    pause(0.0000001)
end
```

In most cases obtaining the intended contour shape is possible for one fixed M_{H}×N_{H} value. However, it may turn out necessary to use a hierarchical approach, in which the M_{H}×N_{H} size is gradually reduced; this yields a higher precision of the proposed method and introduces a weight (hierarchy) of the importance of individual contours. Examples of results obtained for the algorithm given ultimately in this form are as follows.

### Fig. 4-47Image L_{G}

### Fig. 4-48Image L_{α}

### Fig. 4-49Image L_{z} with determined contours marked red

### Fig. 4-50Enlarged fragment of L_{z} image

### 4.9.6. Properties of the Algorithm Proposed

The algorithm created is presented in a block diagram – Fig. 4-51.

The assessment of the proposed algorithm’s properties (Fig. 4-51) was carried out by evaluating the error δ of contour determination for the changing parameters p_{r}, Δα, M_{H}×N_{H}, p_{b} and p_{j}, with p_{r}∈(0, 0.1) and M_{H}×N_{H}∈(3, 35). An artificial image of a rectangular object located centrally in the scene (Fig. 4-52) has been used in the assessment.

The error itself was defined as follows:

assuming that only one point, i.e. i=1, was randomly selected. The second part of the assessment concerns the points of discontinuity relative to the reference contour.

Fig. 4-53 shows the graph of changes of the error δ and of its minimum δ_{min} and maximum δ_{max} values vs. M_{M}×N_{M} changing between 3 and 35. The algorithm intended for the properties analysis comprises the already presented source code (as a fundamental part) supplemented with fragments related to the specific nature of the object (Fig. 4-42) and measurements of its properties.

```matlab
MN_w=[];
for MN=3:34
    L1=zeros(100); L1(40:80,10:70)=1;
    [xw,yw]=meshgrid(1:size(L1,2),1:size(L1,1));
    L111=xor(L1,imerode(L1,ones(3)));
    xw(L111==0)=[];
    yw(L111==0)=[];
    L2=imnoise(L1,'gaussian',0.2);
    L3=medfilt2(L2,[3 3]);
    L4=mat2gray(L3);
    Nx1=8; Sigmax1=MN; Nx2=8; Sigmax2=MN; Theta1=pi/2;
    Ny1=8; Sigmay1=MN; Ny2=8; Sigmay2=MN; Theta2=0;
    alfa=0.15;
    hx=OCT_NOISE_gauss(Nx1,Sigmax1,Nx2,Sigmax2,Theta1);
    Lgx=conv2(L4,hx,'same');
    hy=OCT_NOISE_gauss(Ny1,Sigmay1,Ny2,Sigmay2,Theta2);
    Lgy=conv2(L4,hy,'same');
    alp=atan2(Lgy,Lgx);
    Lalp=alp*180/pi;
    Lg=mat2gray(abs(Lgx)+abs(Lgy));
    figure; imshow(L4,'notruesize'); hold on
    Lrand=rand(size(Lg));
    [n,m]=meshgrid(1:size(Lrand,2),1:size(Lrand,1));
    n(Lrand>(Lg*0.02))=[];
    m(Lrand>(Lg*0.02))=[];
    plot(n,m,'b.');
    Lz=zeros(size(Lalp)); delta_alph=50;
    Lz2=zeros(size(Lalp));
    H=ones(5); A=5; z_kat=80;
    [n,m]=OCT_NOISE_area(n,m,Lg,H);
    plot(n,m,'g.'); hold on
    n_1=[]; m_1=[]; al_1=[];
    % ... (iterative determination of contour components, as previously) ...
    plot(xw,yw,'k*','LineWidth',3)
    nmabs_=[];
    for jjx=1:size(n_1,1)
        for jjy=1:size(n_1,2)
            if (m_1(jjx,jjy)+n_1(jjx,jjy))>0
                nmabs_(jjx,jjy)=Lg(m_1(jjx,jjy),n_1(jjx,jjy));
            end
        end
    end
    blad_=[];
    for cd=1:length(xw)
        blad_(cd)=min(min(abs(n_1-xw(cd))+abs(m_1-yw(cd))));
    end
    MN_w=[MN_w;[MN, sum(blad_)./length(blad_) min(blad_) max(blad_)]];
end
figure;
[AX1,H1,H2]=plotyy(MN_w(:,1),MN_w(:,2),MN_w(:,1),MN_w(:,4),'plot');
set(get(AX1(1),'Ylabel'),'String','\delta','FontSize',20,'Color','k')
set(get(AX1(2),'Ylabel'),'String','\delta_{min},\delta_{max}','FontSize',20,'Color','k')
set(H1,'LineStyle','-','Marker','s','LineWidth',2)
set(H2,'LineStyle','-','Marker','+')
set(AX1(2),'Ylim',[min(min(MN_w(:,3:4))),max(max(MN_w(:,3:4)))])
xlabel('M_MxN_M','FontSize',20)
grid on
hold on
[AX2,H1,H2]=plotyy(MN_w(:,1),MN_w(:,2),MN_w(:,1),MN_w(:,3),'plot');
set(H2,'LineStyle','-','Marker','v');
set(AX2(2),'Ylim',[min(min(MN_w(:,3:4))),max(max(MN_w(:,3:4)))]);
legend([AX1,AX2(2)],'\delta','\delta_{min}','\delta_{max}')
```

As can be seen (Fig. 4-53), the values of the δ error fall within the 0.5-0.7 range, which is small compared with the error originating during the algorithm operation for wide changes of the other parameters.

Fig. 4-54 shows the graph of changes of the error δ and of its minimum δ_{min} and maximum δ_{max} values vs. p_{r}. As results from (29), a change of the threshold p_{r} value is directly connected with the number of selected points. For p_{r}=0.02 and higher values the number of randomly selected points is so large that it may be assumed that, starting from this value, their number does not have a significant influence on the δ error value. Fig. 4-55 shows the graph of changes of the error δ and of its minimum δ_{min} and maximum δ_{max} values vs. M_{H}×N_{H}. Both the choice of the points position correction area M_{H}×N_{H} and of the amplitude A_{i,j}, which in practical applications is constant for various ‘i’ and ‘j’, is a key element affecting the error and thereby the precision of contour reconstruction. As may be seen from Fig. 4-55, the value of δ versus M_{H}×N_{H} is relatively large for A_{i,j}=const=9 (for variable ‘i’ and ‘j’), for which the computations were carried out. A strict relationship between changes of the error δ vs. M_{H}×N_{H} and A_{i,j} is visible in Fig. 4-56, and of the maximum value δ_{max} in Fig. 4-57. Based on this it is possible to determine the relationship between M_{H}=N_{H} and A_{i,j}, i.e. M_{H}=N_{H}≈1.4·A_{i,j} (in the graphs in Fig. 4-56 and Fig. 4-57, for the minimum error value, one may read e.g. M_{H}=N_{H}=35 at A_{i,j}=25).

From Fig. 4-56 and Fig. 4-57 it may be noticed that high error values occur for small M_{H}×N_{H} values and high A_{i,j}. This results from the fact that consecutive points o_{i,j+1} are separated from o_{i,j} by A_{i,j}, while their local position correction occurs within a small M_{H}×N_{H} range. At high A_{i,j} the rounding originating in the computation of the L_{α} value, formula (28), causes large deviations of the o_{i,j+1} points from the reference contour, which substantially affects the δ and δ_{max} errors. Verification of these parameters may be implemented similarly to the previous source code, with modifications in the appropriate places.

### 4.9.7. Assessment of Results Obtained from the Random Method

The method described gives correct results in contour determination (layers separation) both on OCT images and on others for which classical methods of contour determination give no results, or results that do not provide a continuous contour. The algorithm’s drawbacks include a high influence of noise on the results obtained. This results from relationship (29), where pixels of fairly high value caused by a disturbance increase the probability of selecting a starting point, and hence a contour component, in this place. The second drawback is the computation time, which grows with the number of selected points and depends on the reason for which the search for the next points o_{i,j+1} was stopped (the limitations specified earlier).

Fig. 4-58 below presents the enlarged results obtained for an example of OCT image.

The algorithm presented may be further modified and parametrised, e.g. through changing A_{i,j} for various ‘i’ and ‘j’ according to the criterion suggested, or by considering weights of individual o_{i,j} points and taking them into account in the iteration stopping condition, etc.

## 4.10. Layers Recognition on Tomographic Eye Image Based on Canny Edge Detection

### 4.10.1. Canny Filtration

The input image L_{GRAY} is initially filtered using a median filter with a mask h of M_{H}×N_{H}=13×13 size. The obtained L_{MED} image is subject to another filtration using a modified Canny filter, whose consecutive stages are presented in the next sections – as a reminder:

```matlab
[Lgray,map]=imread('D:\OCT\FOLDERS\2.OCT\SKAN7.bmp');
Lgray=Lgray(1:850,:);
Lgray=ind2gray(Lgray,map);
Lgray=double(Lgray)/255;
Lmed=medfilt2(Lgray,[13 13]);
Lmed=mat2gray(Lmed);
figure; imshow(Lmed)
```

The first stage of the edge detection method used [14], [35], [40], [41] consists of convolving the input image L_{MED} [6], i.e.:

with the following Gauss filters masks, e.g. of dimensions 3 × 3 (Fig. 4-59, Fig. 4-60):

The matrix of gradient in both directions necessary to determine the edges has been determined in accordance with a classical dependence:

and the threshold p_{xy}:

p_{xy} = ɛ·(I_{max} − I_{min}) + I_{min}

where ɛ is a coefficient selected within the range ɛ ∈ (0,1), and I_{max}, I_{min} are the maximum and minimum values of the gradient matrix.

A practical implementation of this, initial, phase of algorithm should not give rise to any difficulties:

```matlab
Nx1=13; Sigmax1=2; Nx2=13; Sigmax2=2; Theta1=pi/2;
Ny1=13; Sigmay1=4; Ny2=13; Sigmay2=4; Theta2=0;
epsilon=0.15;
hx=OCT_NOISE_gauss(Nx1,Sigmax1,Nx2,Sigmax2,Theta1);
Lgx=conv2(Lmed,hx,'same');
Lgx(Lgx<0)=0;
figure; imshow(Lgx,[])
hy=OCT_NOISE_gauss(Ny1,Sigmay1,Ny2,Sigmay2,Theta2);
Lgy=conv2(Lmed,hy,'same');
Lgy(Lgy<0)=0;
figure; imshow(Lgy,[])
Lgxy=sqrt(Lgx.*Lgx+Lgy.*Lgy);
figure; imshow(Lgxy)
I_max=max(max(Lgxy));
I_min=min(min(Lgxy));
pxy=epsilon*(I_max-I_min)+I_min;
Lgxym=max(Lgxy,pxy.*ones(size(Lgxy)));
figure; imshow(Lgxym,[])
```

The obtained images are shown below (Fig. 4-61 - Fig. 4-64).

### Fig. 4-62Image L_{GX}

### Fig. 4-63Image L_{GY}

For the final form of the formula for the matrix containing the image edges, i.e. L_{BIN_KR}, it is necessary to define L_{GXYM}, i.e.:

and (x_{i},y_{i}) and (x_{j},y_{j}) coordinates of i_{xy} and j_{xy} values, respectively, determined from the relationship

where angle α was determined for each pair of pixels L_{GX} and L_{GY}:

and then the i_{xy} and j_{xy} values, which assume levels of saturation according to values interpolated on the plane determined from the 3×3 neighbourhood L_{GXYM}(m±Δm, n±Δn), where Δm and Δn are equal to 1 (Fig. 4-65, Fig. 4-66).

Hence the output image of edges determined using the Canny method L_{BIN_KR} is equal to:

An example OCT image generated for ɛ = 0.15 is shown in Fig. 4-66, where for a better assessment of the results obtained the white pixels of the L_{BIN_KR} image have been superimposed. The source code of this part is given below:

```matlab
[M,N]=size(Lgxym);
Lkr=zeros(size(Lgxym));
for m=2:M-1
    for n=2:N-1
        if Lgxym(m,n) > pxy
            X=[-1,0,+1;-1,0,+1;-1,0,+1];
            Y=[-1,-1,-1;0,0,0;+1,+1,+1];
            Z=[Lgxym(m-1,n-1),Lgxym(m-1,n),Lgxym(m-1,n+1);
               Lgxym(m,n-1),  Lgxym(m,n),  Lgxym(m,n+1);
               Lgxym(m+1,n-1),Lgxym(m+1,n),Lgxym(m+1,n+1)];
            alp=atan2(Lgy(m,n),Lgx(m,n));
            ss=sin(alp); cc=cos(alp);
            XI=[cc,-cc]; YI=[ss,-ss];
            ZI=interp2(X,Y,Z,XI,YI);
            if Lgxym(m,n) >= ZI(1) & Lgxym(m,n) >= ZI(2)
                Lkr(m,n)=I_max;
            else
                Lkr(m,n)=I_min;
            end
        else
            Lkr(m,n)=I_min;
        end
    end
end
figure; imshow(Lkr,[]);
Lbin_kr=Lkr>0;
figure; imshow(Lbin_kr)
```

The results obtained are presented in Fig. 4-67, Fig. 4-68.

The L_{BIN_KR} image further on provides the basis for the next steps of the algorithm operation.

### 4.10.2. Features of Line Edge

For the L_{BIN_KR} image a labelling operation has been carried out, where each cluster (of values ‘1’) has its label e_{t} = 1, 2,…,E_{t}-1, E_{t}.

```matlab
Lind=bwlabel(Lbin_kr);
figure; imshow(Lind,[]); colormap('jet'); colorbar
```

Then for each label e_{t} a dilatation operation is performed with a rectangular structural element SE_{d} of 5×1 size, oriented according to the value of the angle α(m,n), with the origin of coordinates placed in its first row [26]. The obtained L_{IND} image in pseudocolours is shown in Fig. 4-69.

Fig. 4-70 shows weight values for consecutive (from among the initial ones) labels of the L_{IND} image (Fig. 4-69), i.e. binary images L_{et}, where P_{et} is the area of the object for label e_{t} and I_{et} is the average value of its grey level, i.e.:

The determined P_{et} and I_{et} values will later be used as features during the final analysis of edge lines. In the following source code these values have been written, in order, in the data variable:

```matlab
data=[]; xd=[]; xdpk=[]; yd=[]; ydpk=[];
Let_=zeros(size(Lind));
for et=1:max(Lind(:))
    Let=(Lind==et);
    [xx_,yy_]=meshgrid(1:size(Let,2),1:size(Let,1));
    xx_(Let==0)=[]; yy_(Let==0)=[];
    xd(et,1:length(xx_))=xx_;
    yd(et,1:length(yy_))=yy_;
    xdpk(et,1:2)=[xx_(1),xx_(end)];
    ydpk(et,1:2)=[yy_(1),yy_(end)];
    Let2=Let; Let3=Let;
    for i=8:(size(Let,1)-8)
        for j=8:(size(Let,2)-8)
            p=Let(i,j);
            if p>0
                alp=atan2(Lgy(i,j),Lgx(i,j));
                ss=sin(alp); cc=cos(alp);
                for k=1:7
                    Let2(round(i+k*ss),round(j+k*cc))=p;
                    Let3(round(i-k*ss),round(j-k*cc))=p;
                end
            end
        end
    end
    Let_((Let2+Let3)>0)=et;
    data(et,1)=et;
    data(et,2)=sum(sum(Let));
    Lmed_1=Let2.*Lmed; Lmed_1(Let2==0)=[];
    Lmed_2=Let3.*Lmed; Lmed_2(Let3==0)=[];
    Lmed_3=Let.*Lmed;  Lmed_3(Let==0)=[];
    data(et,4)=mean(Lmed_1)-mean(Lmed_2);
    data(et,3)=mean(Lmed_3);
end
figure; imshow(Let_,[]); colormap('jet'); colorbar
```

Matrices L_{et2} and L_{et3} have been used in the above source code, being the result of dilatation on one and the other side of the analysed pixel of the e_{t} area. In addition, the coordinates of the beginning and of the end of the analysed e_{t} area have been written in the variables xdpk and ydpk. This data will be necessary at the further stage of connecting individual contour fragments.

### 4.10.3. Contour Line Correction

Each solid edge line visible in the L_{et} image for labels e_{t}=1,2,…,E_{t}-1,E_{t} is transformed into vectors x_{et} and y_{et} of the points’ coordinates in a Cartesian coordinate system. The contour line correction method is applied to ‘elongate’ each edge in both directions. To this end, for the first two pairs of coordinates of the first edge, (x_{1}(1), y_{1}(1)) and (x_{1}(2), y_{1}(2)), as well as for the last two, (x_{1}(end-1), y_{1}(end-1)) and (x_{1}(end), y_{1}(end)), a straight line passing through those points is determined (end means the last element), in accordance with the demonstrative illustration below (Fig. 4-71):

Fig. 4-71 presents the idea of the contour correction method, where starting from the position of points (x_{1}(end-1), y_{1}(end-1)) and (x_{1}(end), y_{1}(end)) the straight line passing through them is determined with a slope β_{1}, i.e.:

and at the distance Δxy the position of a new point (x_{1,k}(1), y_{1,k}(1)) is determined for its various potential positions (within the angle range β_{1}(1)±α, every Δα). The selection of the right position of a contour point, obtained by adding consecutive points to the existing edge, is based on the analysis of the mean values of the e_{u1}(x_{u}, y_{u}, α, 1) and e_{d1}(x_{d}, y_{d}, α, 1) areas of M_{e}×N_{e} size. The difference ΔS is determined for each position of point (x_{1,k}(1), y_{1,k}(1)):

where:

x_{u}, y_{u} – coordinates of consecutive elements of matrix e_{u} and h_{u} situated atop relative to the analysed point (x_{1,k}(1), y_{1,k}(1)), for which x_{u}∈{1,2,…,N_{u}−1,N_{u}} and y_{u}∈{1,2,…,N_{u}−1,N_{u}}

x_{d}, y_{d} – coordinates of consecutive elements of matrix e_{d} and h_{d} situated at the bottom relative to the analysed point (x_{1,k}(1), y_{1,k}(1)), for which x_{d}∈{1,2,…, N_{d}−1,N_{d}} and y_{d}∈{1,2,…,N_{d}−1,N_{d}}

and h_{u} and h_{d} masks for M_{e}×N_{e} =3×2

### Fig. 4-72Mask h_{u} for M_{e}×N_{e} =3×2

### Fig. 4-73Mask h_{d} for M_{e}×N_{e} =3×2

The areas (matrices) e_{u} and e_{d} of M_{e}×N_{e} size are created based on the angles β and α, every Δα, in the following way:

where x_{u}∈{1,2,3,…,N_{e}-1,N_{e}} and x_{d}∈{1,2,3,…,N_{e}-1,N_{e}}, and β_{1}(1), in general β_{1}(v_{1}):

for v_{1}∈{2,3,… V_{1}-1,V_{1}}, V_{1} – a total number of points of contour correction implemented for line 1 of the contour.

The angle α*, for which there is the best fit of the analysed point (x_{1,k}(v_{1}), y_{1,k}(v_{1})), is the one for which ΔS(v_{1},α) reaches a maximum or minimum, depending on the position and brightness of the analysed object.

The number of consecutive points determined for increasing v_{1} must be limited. This bound is the minimum value ΔS(v_{1}, α*) limited by the threshold p_{r}.
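The ΔS criterion – the difference between the mean brightness of the strip below (e_{d}) and above (e_{u}) a candidate point, with the 1/distance weighting of the h_{u} and h_{d} masks – can be sketched in Python for a single candidate angle. This is our own simplified illustration; the full search over β±α and the Δxy step are omitted:

```python
import numpy as np

def delta_s(Lmed, x, y, beta, alpha_deg, me=3):
    # Sample me pixels on each side of (x, y), perpendicular to the local
    # direction beta rotated by alpha_deg; weight each sample by 1/k, as in
    # the h_u / h_d masks, and return mean(e_d) - mean(e_u).
    perp = beta + np.deg2rad(alpha_deg + 90)
    ss, cc = np.sin(perp), np.cos(perp)
    eu, ed = [], []
    for k in range(1, me + 1):
        for sign, acc in ((+1, eu), (-1, ed)):
            yy = int(round(y + sign * k * ss))
            xx = int(round(x + sign * k * cc))
            if 0 <= yy < Lmed.shape[0] and 0 <= xx < Lmed.shape[1]:
                acc.append(Lmed[yy, xx] / k)
    return float(np.mean(ed) - np.mean(eu))
```

In the full method, ΔS is evaluated for every candidate angle and the extremum selects the point's position.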

The suggested method of contour correction has very interesting properties. Parameters of this part of algorithm include:

- α - the angle, within which the best fit is sought with regard to the given criterion,
- Δα - accuracy, with which the best fit is sought,
- Δxy - the distance between the current and the next sought point of the active contour,
- M
_{e}- height of analysed area e_{u}and e_{d}, - N
_{e}- width of analysed area e_{u}and e_{d}.

The function constructed on this basis is presented below.

```matlab
function [x_out,y_out,wagi,iter]=OCT_COR_LINE(Lmed,x_in,y_in,udxy_,mene,alpha,iter_,pr,dxy)
wagi=[];
xi=x_in(end); yi=y_in(end);
beta=atan2((y_in(end)-y_in(end-1)),(x_in(end)-x_in(end-1)));
x_out=xi; y_out=yi;
for iter=1:iter_
    eu=[]; ed=[]; deltaS=[];
    for alpha_=-alpha:alpha
        for udxy=0:udxy_
            yi_=yi+udxy*sin(beta+alpha_*pi/180);
            xi_=xi+udxy*cos(beta+alpha_*pi/180);
            al_be=beta+(alpha_+90)*pi/180;
            ss=sin(al_be); cc=cos(al_be);
            for mene_=1:mene
                yy=round(yi_+mene_*ss); xx=round(xi_+mene_*cc);
                if (yy>1)&(yy<=size(Lmed,1))&(xx>1)&(xx<=size(Lmed,2))
                    eu(udxy+1,mene_)=Lmed(yy,xx)/mene_;
                else
                    eu(udxy+1,mene_)=0;
                end
            end
            for mene_=1:mene
                yy=round(yi_-mene_*ss); xx=round(xi_-mene_*cc);
                if (yy>1)&(yy<=size(Lmed,1))&(xx>1)&(xx<=size(Lmed,2))
                    ed(udxy+1,mene_)=Lmed(yy,xx)/mene_;
                else
                    ed(udxy+1,mene_)=1;
                end
            end
        end
        deltaS=[deltaS;[alpha_,mean(ed(:))-mean(eu(:))]];
    end
    deltaS=sortrows(deltaS,2);
    if deltaS(1,2)>pr
        break
    end
    wagi(iter)=deltaS(1,2);
    al_be_=beta+deltaS(1,1)*pi/180;
    yi=yi+dxy*sin(al_be_);
    xi=xi+dxy*cos(al_be_);
    beta=al_be_;
    xyxy=[x_out',y_out'];
    if sum(((round(xyxy(:,1))==round(xi)) + (round(xyxy(:,2))==round(yi)))==2)>=2
        break
    end
    x_out=[x_out,xi];
    y_out=[y_out,yi];
end
```

Fig. 4-74 - Fig. 4-77 below present the results obtained for an artificial image of a square, with the aforementioned parameters α, Δα, Δxy, M_{e}, N_{e} changed within the ranges α∈{1,2,3,…,20}, Δxy=N_{e}∈{1,2,3,…,20}, M_{e}∈{1,2,3,…,20}, for Δα=1 and p_{r}=-0.001. The number of iterations was also limited to 50.

### Fig. 4-75 Artificial image and fragment of contour correction action for α=40, Δα=1, Δxy=N_{e}=4, M_{e} changed within the range (1,20)

### Fig. 4-76 Artificial image and fragment of contour correction action for α=40, M_{e}=10, Δxy=N_{e}=10, Δα changed within the range (1,20)

The presented contour correction method has the following properties:

- α - angle defining the range sought in the sense of degree of object edge curvature,
- Δα - accuracy, with which the degree of edge curvature is sought,
- Δxy- distance between the current and next sought point affecting the extent of generalisation and approximation of intermediate values (placed between points),
- M_{e} - height of the analysed area, affecting the algorithm's capability to find objects of a higher level of detail,
- N_{e} - width of the analysed area, averaging the sought contour along edges.

The experiments and parameter measurements presented (Fig. 4-74 - Fig. 4-77) can easily be reproduced using a short source code:

```matlab
Lmed=zeros(300);
Lmed(200:250,100:250)=1;
Lmed=conv2(Lmed,ones(19))./sum(sum(ones(19)));
Lmed(220:end,:)=0;
figure; imshow(Lmed)
x_in=[100,101]; y_in=[200,200];
hold on; plot(x_in,y_in,'*g-')
map=jet(20);
udxy_=4; iter_=70; pr=-0.0001; dxy=4; alpha=45;
for mene=1:20
    [x_out,y_out,wagi,iter]=OCT_COR_LINE(Lmed,x_in,y_in,udxy_,mene,alpha,iter_,pr,dxy)
    hold on; plot(x_out,y_out,'*-','color',map(mene,:))
    axis([222 275 186 236])
    pause(0.05)
end
```

We encourage Readers to change the values of x_in, y_in, udxy_, mene, alpha, iter_, pr and dxy independently and to verify experimentally the influence of these parameters on the obtained result.

### 4.10.4. Final Analysis of Contour Line

The obtained individual edge lines e_{t} and the corresponding values I_{et} and P_{et} (average brightness and surface area) have been adjusted. First, those edges were removed which have
${I}_{et}<{p}_{r}\cdot \underset{et\in \{1,2,3,\dots ,Et\}}{\mathrm{max}}({I}_{et})$ and for which
${P}_{et}<{p}_{r}\cdot \underset{et\in \{1,2,3,\dots ,Et\}}{\mathrm{max}}({P}_{et})$, where the threshold p_{r} was arbitrarily taken as 0.2 (20%). For the remaining edges e_{k}, the adjustment was made by applying the active contour method at their ends. The values of the active contour parameters were taken as α=45, Δα=1, Δxy=1, M_{e}=11, N_{e}=11. For individual edges e_{k}, the iterations of the active contour method were interrupted when one of the following situations occurred:

- the acceptable number of iterations was exceeded – set arbitrarily at 1000,
- for that point the condition ΔS(v_{ek}, α*)<p_{s} has not been met, where p_{s} was set at -0.02,
- at least two points have the same coordinates – this prevents looping of the algorithm.
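The pruning step above can be sketched as follows (a Python illustration, not the authors' Matlab code; the reading adopted here removes an edge only when both its mean brightness I_{et} and its surface P_{et} fall below the respective thresholds):

```python
import numpy as np

def prune_edges(I_et, P_et, pr=0.2):
    """Keep edge indices unless BOTH the mean brightness and the surface
    fall below pr times their respective maxima over all edges."""
    I = np.asarray(I_et, dtype=float)
    P = np.asarray(P_et, dtype=float)
    keep = (I >= pr * I.max()) | (P >= pr * P.max())
    return np.flatnonzero(keep)
```

With pr=0.2, an edge survives if either of its two measures reaches 20% of the best edge.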

Results obtained for parameters determined this way are presented below (Fig. 4-78, Fig. 4-79).

As shown in the figures above (Fig. 4-78, Fig. 4-79), the suggested method correctly detects individual layers on an OCT eye image. Further stages planned as a continuation of this approach concern a deeper analysis of the algorithm in terms of parameter selection. The discussed algorithm fragment looks as follows:

```matlab
figure; imshow(Lmed,[]); hold on
hh=waitbar(0,'Please wait...');
for et=1:max(Lind(:))
    Let=(Lind==et);
    [x_in,y_in]=meshgrid(1:size(Let,2),1:size(Let,1));
    x_in(Let==0)=[];
    y_in(Let==0)=[];
    mene=15; udxy_=10; alpha=45; dxy=1; pr=-0.01;
    if length(x_in)>5
        [x_out,y_out,wagi,iter]=OCT_COR_LINE(Lmed,x_in,y_in,udxy_,mene,alpha,1275,pr,dxy);
        hold on; plot(x_out,y_out,'w.')
        pause(0.1)
    end
    waitbar(et/max(Lind(:)))
end
close(hh)
```

We encourage the Reader again to modify the parameters of the `OCT_COR_LINE` function, both to obtain proper results and to learn the function's capabilities. A few artefacts, resulting from improper selection of `OCT_COR_LINE` function parameters, are presented below.

### Fig. 4-80 Examples of artefacts resulting from improper selection of `OCT_COR_LINE` function parameters

The presented method, combining the Canny edge detection algorithm with a modified active contour algorithm, is applied to the detection of external limiting membranes in tomographic OCT eye images. The proposed method may also be used to segment images into contents other than those presented, provided that the values of the parameters mentioned are modified [23]. Despite the satisfactory results presented above, there remains a fairly large area for research related to modifying the presented algorithm in terms of operation-time optimisation. The time of analysis, in this as in many other image-analysis applications, is of crucial importance in practical use. In terms of functionality, implementation difficulty and speed of operation, this method may be classified as average.

## 4.11. Hierarchical Approach in the Analysis of Tomographic Eye Image

### 4.11.1. Image Decomposition

Images originating from a Copernicus tomograph, due to its specific nature of operation, are obtained in sequences of a few to a few dozen 2D images within approx. 1 s, which provide the basis for 3D reconstruction [42]. Because of their number, the analysis of a single 2D image should proceed within a time not exceeding 10-50 ms, so that the operator's waiting time for the result is not onerous (as can easily be calculated for the above values, for a few dozen images of the usual resolution M × N = 740 × 820 in a sequence, this time will then be shorter than 1 s).

At the image preprocessing stage the input image L_{GRAY} is initially filtered using a median filter with a mask h of size M_{h}×N_{h}=3×3 (in the final software version this mask may be set to M_{h}×N_{h}=5×5 to obtain better precision of algorithm operation for a certain specified group of images), i.e.:

```matlab
[Lgray,map]=imread(['D:\OCT\FOLDERS\2.OCT\SKAN7.bmp']);
Lgray=Lgray(1:850,:);
Lgray=ind2gray(Lgray,map);
Lgray=double(Lgray)/255;
Lm=medfilt2(Lgray,[3 3]);
Lm=mat2gray(Lm);
figure; imshow(Lm)
```

The image L_{M} obtained this way is then subjected to decomposition into an image of lower resolution and analysed in terms of layer detection.

As an assumption, unlike the algorithms presented in the previous sections, the algorithm described here should provide satisfactory results primarily from the point of view of the operation-speed criterion. Although the methods (algorithms) described feature high precision of computation, they are not fast enough (it is difficult to analyse a single 2D image on a PII 1.33 GHz processor in a time not exceeding 10 ms). Therefore a reduction of the L_{M} image resolution by approximately half was proposed, to a number of pixels in rows and columns that is a power of 2, i.e. M×N=256×512 (L_{M2}), followed by its decomposition to the image L_{D16} (where the symbol 'D' denotes decomposition and '16' the size of the block for which it was obtained), i.e.:

```matlab
d=16;
fun = @(x) median(x(:));
Ld16 = blkproc(Lm,[d d],fun);
```

Each pixel of the input image after decomposition has a value equal to a median of the area (block) of 16×16 size of the input image, acc. to Fig. 4-81.
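As a Python illustration (the book's own code uses Matlab's `blkproc`), the block-median decomposition can be sketched as follows; unlike `blkproc`, this sketch simply ignores ragged border blocks:

```python
import numpy as np

def block_median(img, d=16):
    """Decompose an image into d x d blocks, replacing each block by its
    median — a Python counterpart of blkproc with a median handle."""
    M, N = img.shape
    Md, Nd = M // d, N // d               # drop the ragged border blocks
    blocks = img[:Md * d, :Nd * d].reshape(Md, d, Nd, d)
    return np.median(blocks, axis=(1, 3))

img = np.zeros((64, 64))
img[32:, :] = 1.0                         # bright lower half
L_d16 = block_median(img, 16)             # 4 x 4 decomposed image
```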

An example of the resulting L_{D16} image is presented in Fig. 4-82. Image L_{D16} is then subject to determination of the position of the maximum-value pixel in each column, i.e.:

$${\text{L}}_{\text{DM}16}(\text{m},\text{n})=\{\begin{array}{ccc}1& \text{if}& {\text{L}}_{\text{D}16}(\text{m},\text{n})=\underset{\text{m}}{\mathrm{max}}({\text{L}}_{\text{D}16}(\text{m},\text{n}))\\ 0& \text{other}& \end{array}$$

where

- m – means a row numbered from one,
- n – means a column numbered from one.

The corresponding record in Matlab:

```matlab
Ldm16=Ld16==(ones([size(Ld16,1),1])*max(Ld16));
figure; imshow(Ldm16,'notruesize')
```

Using the described method of thresholding at the maximum value in columns, in 99 percent of cases only one maximum value per column is obtained (Fig. 4-83).

To determine precisely the position of NFL and RPE boundaries (Fig. 4-82) it turned out necessary to use one more L_{DB16} image, i.e.:

$${\text{L}}_{\text{DB}16}(\text{m},\text{n})=\{\begin{array}{ccc}1& \text{if}& ({\text{L}}_{\text{D}16}(\text{m}+1,\text{n})-{\text{L}}_{\text{D}16}(\text{m},\text{n}))>{\text{p}}_{\text{r}}\\ 0& \text{other}& \end{array}$$

for m∈(1,M-1), n∈(1,N), where p_{r} – the threshold assumed within the range (0, 0.2).

A record in Matlab looks as follows:

```matlab
Ldb16_=zeros(size(Ld16));
for n=1:size(Ld16,2)-1
    Ldb16_(1:end-1,n)=diff(Ld16(:,n));
end
pr=0.1;
Ldb16=Ldb16_>pr;
figure; imshow(Ldb16,'notruesize')
```

As a result, the coordinates of the NFL and RPE boundary points are obtained as those positions of '1' values in the L_{DB16} image for which y_{NFL}≤y_{RPE}, where y_{RPE} is obtained from the L_{DB16} image in the same way.
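A simplified Python sketch of this column-wise reading of L_{DB16} (the first thresholded transition taken as the NFL candidate and the last as the RPE candidate; the actual algorithm additionally consults L_{DM16}):

```python
import numpy as np

def nfl_rpe_candidates(Ld16, pr=0.1):
    """For each column of the decomposed image, threshold the vertical
    forward differences and take the first such row as the NFL candidate
    and the last as the RPE candidate (-1 marks an empty column)."""
    d = np.diff(Ld16, axis=0)          # vertical forward differences
    y_nfl, y_rpe = [], []
    for n in range(Ld16.shape[1]):
        cand = np.flatnonzero(d[:, n] > pr)
        if cand.size:
            y_nfl.append(int(cand[0]))
            y_rpe.append(int(cand[-1]))
        else:
            y_nfl.append(-1)
            y_rpe.append(-1)
    return y_nfl, y_rpe
```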

This method, with the p_{r} threshold selected at the level of 0.01, gives satisfactory results in around 70 percent of cases of uncomplicated images (i.e. images without a visible pathology). Unfortunately, for the other 30 percent of cases the selection of the p_{r} threshold within the adopted limits does not reduce the resulting errors (Fig. 4-84).

The correction of erroneous recognitions of the NFL and RPE layers at this level is important because, in the hierarchical approach presented below, such errors would otherwise be duplicated in the subsequent, more precise approximations.

### 4.11.2. Correction of Erroneous Recognitions

In the L_{DB16} image (Fig. 4-84) white pixels are visible in an excess number for most columns. The two largest objects, arranged along the column 'maxima', coincide entirely with the positions of the NFL and RPE limits. Based on that, and having carried out the above analysis for several hundred images, the following limitations were adopted:

- for coordinates y_{RPE} found on the L_{DM16} image there must be at the same time L_{DM16}(m,n)=1; in other cases this point is considered a disturbance or a point of the G_{w}(n) layer,
- if only one pixel of value '1' occurs on the images L_{DM16} and L_{DB16} at the same position, i.e. for the analysed n there is L_{DM16}(m,n) = L_{DB16}(m,n), the history is analysed for n>1 and it is checked whether |y_{NFL}(n-1)-y_{NFL}(n)| > |y_{RPE}(n-1)-y_{RPE}(n)|, i.e.:
  $$\text{Rp}(\text{n})=\{\begin{array}{ccc}\text{m}& \text{if}& \begin{array}{l}{\text{L}}_{\text{DB}16}(\text{m},\text{n})={\text{L}}_{\text{DM}16}(\text{m},\text{n})=1\wedge \hfill \\ \wedge \left|{\text{y}}_{\text{NFL}}(\text{n}-1)-{\text{y}}_{\text{NFL}}(\text{n})\right|>\left|{\text{y}}_{\text{RPE}}(\text{n}-1)-{\text{y}}_{\text{RPE}}(\text{n})\right|\hfill \end{array}\\ 0& \text{other}& \end{array}$$
  (58) for m∈(1,M), n∈(2,N),
- if |y_{NFL}(n-1)-y_{NFL}(n)| ≤ |y_{RPE}(n-1)-y_{RPE}(n)|, the condition y_{NFL}(n-1)-y_{NFL}(n)=±1 is checked (thereby giving up fluctuations against the history n-1 beyond the range ±1 of area A (Fig. 4-81)). If it holds, this point is the next y_{NFL}(n) point; in other cases the point is considered a disturbance. It is assumed that the lines coincide, y_{NFL}(n)=y_{RPE}(n), if y_{RPE}(n-1)-y_{RPE}(n)=±1 and only one pixel of value '1' occurs on the L_{DM16} image,
- in the case of occurrence in a specific column of more than 2 pixels, i.e. if $\underset{\text{m}}{\text{sum}}({\text{L}}_{\text{DB16}}(\text{m},\text{n}))>2$, a pair y_{NFL}(n), y_{RPE}(n) is matched (if one occurs) to y_{NFL}(n-1), y_{RPE}(n-1) so that y_{NFL}(n-1)-y_{NFL}(n)=±1 and y_{RPE}(n-1)-y_{RPE}(n)=±1 would hold. In this case it may happen that the lines y_{NFL}(n) and y_{RPE}(n) coincide. However, in the case of finding more than one solution, the one is adopted for which L_{D16}(y_{NFL}(n),n)+L_{D16}(y_{RPE}(n),n) assumes the maximum value (the maximum sum of weights in L_{D16} occurs).
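As an illustration of the last rule only, a Python sketch of choosing the (NFL, RPE) pair in an over-populated column; the preference for the pair closest to the previous column and the tie-breaking by the sum of L_{D16} weights follow the text, everything else is simplified:

```python
from itertools import combinations

def pick_pair(cands, weights, prev_nfl, prev_rpe):
    """Among all candidate-row pairs, prefer the pair closest to the
    previous column's (NFL, RPE) pair; break ties by the larger sum of
    pixel weights (a simplified sketch, not the full rule set)."""
    best, best_key = None, None
    for a, b in combinations(sorted(cands), 2):
        jump = abs(a - prev_nfl) + abs(b - prev_rpe)
        key = (jump, -(weights[a] + weights[b]))
        if best_key is None or key < best_key:
            best, best_key = (a, b), key
    return best
```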

For the above class of images the presented correction is effective in around 99% of cases. Despite the adopted limitations, the method gives erroneous results for the initial value n=1, and unfortunately these errors are then duplicated.

Unfortunately, the adopted, relatively rigid conditions on the acceptable difference |y_{NFL}(n-1)-y_{NFL}(n)| or |y_{RPE}(n-1)-y_{RPE}(n)| cause large errors for another class of tomographic images, on which a pathology occurs in any form (Fig. 4-86).

As may be seen in Fig. 4-85 and Fig. 4-86, problems occur not only for the initial n values, but also for the remaining points. The reason for erroneous recognition of layer positions lies in the difficulty of distinguishing the proper layers when three 'lines' – three points in a specific column – are discovered whose positions change within the acceptable range for individual n.

These errors cannot be eliminated at this stage of decomposition into 16×16 pixels areas (or 32×32 image resolution). They will be the subject of further considerations in the next sections.

The present form of the algorithm is somewhat extended compared with the description presented above, which results from the necessity to introduce numerous limitations and algorithm blocks. As the blocks mentioned are not technically related to the OCT image analysis, they will not be discussed here in detail. However, we encourage the Reader to follow this apparently complicated algorithm.

```matlab
pr=0.005;
[mss,nss,waga_p,L5,L6]=HIERARHICALL_STEP(Lm,fun,d,pr);
fg=figure; imshow(Lm); hold on
plot(nss'*d-d/2,mss'*d-d/2,'-*')
```

where function
`HIERARHICALL_STEP` is:

```matlab
function [ynfl_rpe,xnfl_rpe,waga_p,Ld16d,Ldb16z]=HIERARHICALL_STEP(Lm,fun,d,pr)
ynfl_rpe=[]; xnfl_rpe=[]; waga_p=[];
Ld16 = blkproc(Lm,[d d],fun);
fun2 = @(x) max(x(:));
Ld16__ = blkproc(Lm,[d d],fun2);
Ld16__=[Ld16__(2:end,:);Ld16__(end,:)];
Ldm16=Ld16__==ones([size(Ld16__,1),1])*max(Ld16__);
for n=1:size(Ld16,2)
    Ld16(:,n)=mat2gray(Ld16(:,n));
end
Ld16d=zeros(size(Ld16));
for n=1:size(Ld16,2)
    Ld16d(1:end-1,n)=diff(Ld16(:,n)).*Ld16(2:end,n);
end
Ldm16=zeros(size(Ld16d));
for n=1:size(Ld16d,2)
    Ldm16(1:end,n)=Ld16d(1:end,n)==max(Ld16d(1:end,n));
end
Ldb16=Ld16d>pr;
Ldb16=bwmorph(Ldb16,'clean');
figure; imshow(Ldb16,[],'notruesize'); hold on
Ldb16_lab=bwlabel(Ldb16);
Ldb16z=zeros(size(Ldb16_lab));
for et=1:max(Ldb16_lab(:))
    Ldb16i=(Ldb16_lab==et);
    if sum(sum(Ldb16i.*Ldm16))>0
        Ldb16z=Ldb16z|Ldb16i;
    end
end
Ldb16z=bwmorph(Ldb16z,'clean');
Ldb16_lab2=bwlabel(Ldb16);
L77=zeros(size(Ldb16z));
for iw=1:size(Ldb16z,2)
    L77(:,iw)=bwlabel(Ldb16z(:,iw));
end
if (max(L77(:))<2)&(max(Ldb16_lab2(:))==2)
    Ldb16z=Ldb16;
end
ynfl_rpe=[]; xnfl_rpe=[];
for iu=1:size(Ld16d,2)
    if sum(Ldb16z(:,iu))>0
        Ldb16z_lab=bwlabel(Ldb16z(:,iu)|Ldm16(:,iu));
        if max(Ldb16z_lab(:))<=2
            Ldb16z_nr=1:size(Ld16d,1);
            Ldb16z_nr(Ldb16z(:,iu)==0)=[];
            Ld16d_nr=1:size(Ld16d,1);
            Ld16d_nr(Ldb16(:,iu)==0)=[];
            if Ld16d_nr(1)==Ldb16z_nr(end)
                if size(ynfl_rpe,2)>0
                    if min(abs(ynfl_rpe(:,end)-Ldb16z_nr))<=2
                        if abs(ynfl_rpe(1,end)-Ld16d_nr(1))<abs(ynfl_rpe(2,end)-Ld16d_nr(1))
                            ynfl_rpe=[ynfl_rpe,[Ld16d_nr(1);ynfl_rpe(2,end)]];
                            xnfl_rpe=[xnfl_rpe,[iu;iu]];
                        else
                            ynfl_rpe=[ynfl_rpe,[Ld16d_nr(1);Ldb16z_nr(end)]];
                            xnfl_rpe=[xnfl_rpe,[iu;iu]];
                        end
                    end
                else
                    ynfl_rpe=[ynfl_rpe,[Ld16d_nr(1);Ldb16z_nr(end)]];
                    xnfl_rpe=[xnfl_rpe,[iu;iu]];
                end
            else
                ynfl_rpe=[ynfl_rpe,[Ld16d_nr(1);Ldb16z_nr(end)]];
                xnfl_rpe=[xnfl_rpe,[iu;iu]];
            end
        else
            et_Ldb16=[];
            for et=1:max(Ldb16z_lab)
                et_Ldb16=[et_Ldb16;[et,max((Ldb16z_lab==et).*Ld16__(:,iu))]];
            end
            et_Ldb16=sortrows(et_Ldb16,-2);
            if et_Ldb16(2,2)*8>et_Ldb16(1,2)
                if size(ynfl_rpe,2)>0
                    Ld16d_nr2=1:size(Ld16d,1);
                    Ld16d_nr2(Ldb16z_lab~=et_Ldb16(1,1))=[];
                    if abs(ynfl_rpe(2,end)-Ld16d_nr2)<abs(ynfl_rpe(1,end)-Ld16d_nr2)
                        et_Ldb16(et_Ldb16(:,1)>et_Ldb16(1,1),:)=[];
                        et_Ldb16=sortrows(et_Ldb16,-2);
                    else
                        et_Ldb16=sortrows(et_Ldb16,-2);
                    end
                end
            end
            et_Ldb16(3:end,:)=[];
            et_Ldb16=sortrows(et_Ldb16,1);
            Ldb16z_nr=1:size(Ld16d,1);
            if size(et_Ldb16,1)==1
                Ldb16z_nr(Ldb16z_lab~=et_Ldb16(1,1))=[];
            else
                Ldb16z_nr(Ldb16z_lab~=et_Ldb16(2,1))=[];
            end
            Ld16d_nr=1:size(Ld16d,1);
            Ld16d_nr(Ldb16z_lab~=et_Ldb16(1,1))=[];
            ynfl_rpe=[ynfl_rpe,[Ld16d_nr(1);Ldb16z_nr(end)]];
            xnfl_rpe=[xnfl_rpe,[iu;iu]];
        end
    end
end
```

### 4.11.3. Reducing the Decomposition Area

Increasing the accuracy, and thereby reducing the size of the A_{m,n} area (Fig. 4-81) – a block of the L_{M} image – is a relatively simple stage of tomographic image processing with particular focus on operating speed. It has been assumed that the A_{m,n} areas will be sequentially reduced by half in each iteration, down to 1×1 size. The reduction of the A_{m,n} area is equivalent to performing the next stage of approximating the NFL and RPE lines position.

The increasing of accuracy (precision) of NFL and RPE lines position determined in the previous iteration is connected with two stages:

- concentration of (m,n) coordinates in the sense of determining intermediate ((m,n) points situated exactly in the centre) values by means of linear interpolation method;
- change of concentrated points position so that they would better approximate the limits sought.

While the first stage is intuitive and results only in resampling, the second requires more precise clarification. The second stage consists in matching individual points to the sought layer. As the image is by definition decomposed along the ox axis, and pixel brightness in the analysed image corresponds to the median value of the original image in window A (Fig. 4-81), the modification of the RPE and NFL points position occurs only along the vertical axis. The analysis of individual RPE and NFL points is independent, in the sense of not depending on the position of the point n-1, as was the case in the previous section.
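The 'concentration' stage can be sketched in Python as midpoint insertion by linear interpolation (illustrative only; the book's `HIERARHICALL_DENSE` does this in Matlab):

```python
import numpy as np

def densify(xs, ys):
    """Insert a midpoint between every pair of consecutive boundary points
    by linear interpolation, doubling the sampling of the NFL/RPE course
    before each refinement step."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    x_new = np.sort(np.concatenate([xs, (xs[:-1] + xs[1:]) / 2]))
    return x_new, np.interp(x_new, xs, ys)
```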

Each RPE point, whether left from the previous iteration or newly created by interpolation, is matched in consecutive algorithm stages with increasingly high precision to the RPE layer. The position of point RPE(n) changes within the range ±p_{u} (Fig. 4-87), where the variation range does not depend on the scale of considerations (size of area A) and results strictly from the distance between NFL and RPE (Fig. 4-88). For blocks A of sizes 16×16 down to 1×1, p_{u} is constant and amounts to 2. This value has been assumed on the basis of the average distance between NFL and RPE, around 32 pixels, typical for the several hundred analysed L_{GRAY} images; after decomposition into blocks A of 16×16 size this corresponds to two pixels, that is p_{u}=2. The maximum on the L_{DM} image is sought within this ±2 range and the new position of the RPE or NFL point is placed there. Thus the course of RPE or NFL becomes closer to the actual course of the analysed layer.
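A minimal Python sketch of this per-point vertical matching (search the ±p_u neighbourhood of each point for the brightest pixel; the function name is hypothetical):

```python
import numpy as np

def refine_rows(img, cols, rows, pu=2):
    """Move each boundary point vertically to the brightest pixel within
    +/-pu rows of its current position; in the text the window is +/-2
    regardless of block size."""
    out = []
    for n, m in zip(cols, rows):
        lo = max(0, m - pu)
        hi = min(img.shape[0], m + pu + 1)
        out.append(lo + int(np.argmax(img[lo:hi, n])))
    return out
```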

The obtained results of matching are presented in Fig. 4-88. White colour shows the input RPE values, i.e. the input data for this stage of the algorithm, for decomposition into blocks A of size 16×16 (L_{DM16} and L_{DB16} images); red colour – the results of matching for blocks A of size 8×8 (L_{DM8} and L_{DB8} images); and green colour – the results of matching for blocks A of size 4×4 (L_{DM4} and L_{DB4} images). As may be seen from Fig. 4-88, with the next decompositions into consecutively smaller areas A, and thus an image of higher resolution, higher precision is obtained at the cost of time (because the number of analysed RPE and NFL points and their ±p_{u} neighbourhoods increases).

For areas A of 16×16 size this method has the advantage of a global approach to pixel brightness: there is no need to introduce at this stage additional actions aimed at distinguishing layers situated close to each other (not visible so far due to image resolution). At areas A of 4×4 size, however, other layers are already visible, which should be further properly analysed. At increased precision the ONL layer becomes visible, situated close to the RPE layer (Fig. 4-88). Thereby, in the area marked with a circle, there is a high fluctuation of the RPE layer position along the oy axis. Because of that, the next step of the algorithm has been developed, taking into account the separation into RPE and ONL layers at appropriately high resolution. In a practical implementation this fragment looks as follows:

```matlab
function [mss2,nss2]=HIERARHICALL_PREC(Lm,mss,nss,fun,d,z,pu)
mss=mss*z; nss=nss*z;
[mss,nss]=HIERARHICALL_DENSE(mss,nss);
Ld16 = blkproc(Lm,[d/z d/z],fun);
Ld16d=zeros(size(Ld16));
for n=1:size(Ld16,2)
    Ld16d(1:end-1,n)=diff(Ld16(:,n));
end
mss2=[]; nss2=[];
for m=1:size(mss,1)
    for n=1:size(mss,2)
        if mss(m,n)~=0
            ms2=mss(m,n); ns2=nss(m,n);
            m2=ms2+pu; m1=ms2-pu;
            if m1<=0; m1=1; end
            if m2>size(Ld16d,1); m2=size(Ld16d,1); end
            mm12=round(m1:m2);
            if ~isempty(mm12)
                Ld16dmm=Ld16d(mm12,ns2);
                mm12(Ld16dmm~=max(Ld16dmm))=[];
                if ~isempty(mm12)
                    mss2(m,n)=mm12(1);
                    nss2(m,n)=ns2;
                end
            end
        end
    end
end
```

where the function
`HIERARHICALL_DENSE`, designed to condense the number of points on the determined layers, has the following form:

```matlab
function [y_out,x_out]=HIERARHICALL_DENSE(y_in,x_in)
y_out=[0;0]; x_out=[0;0];
y_in(:,x_in(1,:)==0)=[];
x_in(:,x_in(1,:)==0)=[];
for i=1:(size(y_in,2)-1)
    m_1=y_in(1,i:i+1);
    n_12=x_in(1,i:i+1);
    m_2=y_in(2,i:i+1);
    x_out(1:2,1:end+length(n_12(1):n_12(2)))=[[x_out(1,:),n_12(1):n_12(2)];[x_out(2,:),n_12(1):n_12(2)]];
    x_out(:,end)=[];
    if (m_1(2)-m_1(1))~=0
        w1=m_1(1):(m_1(2)-m_1(1))/(length(n_12(1):n_12(2))-1):m_1(2);
    else
        w1=ones([1 length(n_12(1):n_12(2))])*m_1(1);
    end
    if (m_2(2)-m_2(1))~=0
        w2=m_2(1):(m_2(2)-m_2(1))/(length(n_12(1):n_12(2))-1):m_2(2);
    else
        w2=ones([1 length(n_12(1):n_12(2))])*m_2(1);
    end
    y_out(1:2,1:end+length(n_12(1):n_12(2)))=[y_out(1:2,:),[w1;w2]];
    y_out(:,end)=[];
end
y_out=y_out(:,2:end);
x_out=x_out(:,2:end);
```

Hence the function `HIERARHICALL_PREC` is designed to 'match' the layers position at any precision.

Both functions – `HIERARHICALL_PREC` and the nested `HIERARHICALL_DENSE` – will be used below in the next stages of approximating the detected layers to their proper positions.

```matlab
z=2; pu=2;
[mss,nss]=HIERARHICALL_PREC(Lm,mss,nss,fun,d,z,pu);
plot(nss'*d/z-d/z/2,mss'*d/z,'-r*')
z=4; pu=3;
[mss,nss]=HIERARHICALL_PREC(Lm,mss/2,nss/2,fun,d,z,pu);
plot(nss'*d/z-d/z/2,mss'*d/z,'-g*')
```

The obtained results are shown below in Fig. 4-89 and Fig. 4-90.

The results shown in Fig. 4-89 and Fig. 4-90 are not perfect. A visible minimum of the NFL layer results from the lack of filtration at the initial stage of the y_{NFL} course. Because of that, the function
`HIERARHICALL_MEDIAN` presented below has been suggested, intended to filter the course with a median filter.

```matlab
function [m_s,n_s]=HIERARHICALL_MEDIAN(mss,nss,Z)
for j=1:size(nss,1)
    for io=1:size(nss,2)
        p=io-round(Z/2);
        k=io+round(Z/2);
        if k>size(nss,2); k=size(nss,2); end
        if p<1; p=1; end
        m_s(j,io)=median(mss(j,p:k));
    end
end
n_s=nss;
```

The considerations presented above, related to the hierarchical approach, lead to the final version of the algorithm detecting the ONL, RPE and NFL layers.

```matlab
[Lgray,map]=imread(['D:\OCT\SOURCES\3.bmp']);
Lgray=ind2gray(Lgray,map);
Lgray=double(Lgray)/255;
Lorg=Lgray;
Lm=medfilt2(Lorg,[5 5]);
Lm=mat2gray(Lm);
szer_o=16;
Lm=[Lm(:,1)*ones([1 szer_o]),Lm,Lm(:,end)*ones([1 szer_o])];
fun = @(x) median(x(:));
[mss,nss,waga_p,L5,L6]=HIERARHICALL_STEP(Lm,fun,szer_o,0.03);
[mss,nss]=HIERARHICALL_PREC(Lm,mss,nss,fun,szer_o,2,2);
[mss,nss]=HIERARHICALL_PREC(Lm,mss/2,nss/2,fun,szer_o,4,3);
[yrpe_onl,xrpe_onl]=HIERARHICALL_MEDIAN(mss(1,:)*4,nss(1,:)*4,5);
[ynfl,xnfl,Lgr]=HIERARHICALL_PREC2(Lm,mss*4,nss*4,20,20);
xnfl(:,xnfl(1,:)==0)=[]; ynfl(:,ynfl(1,:)==0)=[];
xnfl(:,xnfl(2,:)==0)=[]; ynfl(:,ynfl(2,:)==0)=[];
[ynfl,xnfl]=HIERARHICALL_MEDIAN(ynfl,xnfl,5);
figure; imshow(Lm,'notruesize'); hold on
plot(xnfl',ynfl','LineWidth',2)
plot(xrpe_onl,yrpe_onl,'r','LineWidth',2)
```

where function
`HIERARHICALL_PREC2` looks as follows:

```matlab
function [mss2,nss2,Lgr]=HIERARHICALL_PREC2(Lm,mss,nss,pu,pu2)
[mss,nss]=HIERARHICALL_DENSE(mss,nss);
mss2=[]; nss2=[]; Lgr=[]; ngr=[];
for n_=1:size(mss,2)
    n=round(nss(2,n_));
    m1=round(mss(2,n_))-pu;
    m2=round(mss(2,n_))+pu;
    if m1<1; m1=1; end
    if m2>size(Lm,1); m2=size(Lm,1); end
    Lmn=Lm(m1:m2,n);
    Lmnr2=1:length(Lmn);
    Lmf=[Lmnr2',Lmn];
    Lmf=sortrows(Lmf,-2);
    Lmf(Lmf(:,2)<(0.9*Lmf(1,2)),:)=[];
    Lmf=sortrows(Lmf,-1);
    Lmnr2=Lmf(1,1);
    nss2=[nss2,n];
    mss2=[mss2,m1+Lmnr2(1)-1];
    m11=m1+Lmnr2(1)-1-pu2;
    m22=m1+Lmnr2(1)-1+pu2;
    if m11<1; m11=1; end
    if m22>size(Lm,1); m22=size(Lm,1); end
    if length(m11:m22)==(pu2*2+1)
        Lmn=Lm(m11:m22,n);
        Lgr=[Lgr,Lmn];
        ngr=[ngr,n_];
    end
end
Lgr=filter2(ones([3 3]),Lgr)/9;
for n=1:size(Lgr,2)
    po_=Lgr(:,n);
    P = polyfit(1:length(po_),po_',5);
    po = polyval(P,1:length(po_));
    dpo=diff(po);
    dpo(round(length(dpo)/2):end)=0;
    dnr=1:length(dpo);
    if max(dpo)>0.03
        dnr(dpo~=max(dpo))=[];
        dnr_=dnr;
        nss2(2,ngr(n))=nss2(1,ngr(n));
        mss2(2,ngr(n))=mss2(1,ngr(n))+dnr-pu2;
        for itt=(n+1):size(Lgr,2)
            po_=Lgr(:,itt);
            P = polyfit(1:length(po_),po_',4);
            po = polyval(P,1:length(po_));
            dpo2=diff(po);
            dnr1=dnr-3; dnr2=dnr+3;
            if dnr1<1; dnr1=1; end
            if dnr2>length(dpo2); dnr2=length(dpo2); end
            dpo2([1:dnr1,dnr2:end])=0;
            dnr2=1:length(dpo2);
            if max(dpo2)>0
                dnr2(dpo2~=max(dpo2))=[];
                dnr=dnr2(1);
                nss2(2,ngr(itt))=nss2(1,ngr(itt));
                mss2(2,ngr(itt))=mss2(1,ngr(itt))+dnr-pu2;
            end
        end
        dnr=dnr_;
        for itt=(n-1):-1:1
            po_=Lgr(:,itt);
            P = polyfit(1:length(po_),po_',4);
            po = polyval(P,1:length(po_));
            dpo2=diff(po);
            dnr1=dnr-4; dnr2=dnr+4;
            if dnr1<1; dnr1=1; end
            if dnr2>length(dpo2); dnr2=length(dpo2); end
            dpo2([1:dnr1,dnr2:end])=0;
            dnr2=1:length(dpo2);
            if max(dpo2)>0
                dnr2(dpo2~=max(dpo2))=[];
                dnr=dnr2(1);
                nss2(2,ngr(itt))=nss2(1,ngr(itt));
                mss2(2,ngr(itt))=mss2(1,ngr(itt))+dnr-pu2;
            end
        end
        break
    end
end
```

### 4.11.4. Analysis of ONL Layer

This analysis consists in separating the ONL line from the RPE line originating from the previously executed stages of the algorithm. The issue is facilitated by the fact that, on average, approx. 80-90% of pixels on each tomographic image have the maximum column value exactly at the RPE point (this property has already been used in the previous section). So the only problem is to detect the position of the ONL line. One possible approach is to detect the contour of the sought layer on the L_{IR} image. This image originates from the L_{M} image by widening the y_{RPE}(n) layer range along the oy axis within ±p_{I}=20 pixels. The L_{IR} image has been obtained with the number of columns consistent with the number of L_{M} image columns and with 2·p_{I}+1 rows. Fig. 4-93 shows the image L_{IR}=L_{M}(m-y_{RPE}(n),n) originating from the L_{M} image of Fig. 4-88.

The upper layer, visible in Fig. 4-93 as a fairly sharp contour, is the sought course of ONL. Unfortunately, because of a fairly high individual variation of the ONL layer position relative to RPE, the selected p_{I} range may in further stages of the algorithm be increased even twice (as will be described later). To determine consecutive points of the ONL layer position, the grey levels of individual columns of the L_{IR} image are interpolated with a 4^{th}-degree polynomial, obtaining this way L_{IRS}, whose changes of grey levels in individual columns are shown in Fig. 4-94. The position of point ONL(n) occurs in the place of the highest gradient within the range (RPE(n)-p_{I}) ÷ RPE(n) relative to the L_{MS} image, or 1 ÷ p_{I} relative to the L_{IRS} image.
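The per-column localisation just described can be sketched in Python (illustrative only; the book works in Matlab, and only the degree-4 polynomial smoothing and gradient maximum follow the text):

```python
import numpy as np

def onl_row(column, deg=4):
    """Smooth one L_IR column with a degree-4 polynomial fit and return
    the row index of the steepest positive gradient — taken here as the
    ONL(n) position within the widened band."""
    m = np.arange(len(column))
    coeffs = np.polyfit(m, column, deg)
    smooth = np.polyval(coeffs, m)
    grad = np.diff(smooth)
    return int(np.argmax(grad))
```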

As may be seen in Fig. 4-95 the method presented perfectly copes with detecting NFL, RPE and ONL layers marked in red, blue and green, respectively.

There is another solution of this problem – presented below.

### 4.11.5. Determination of the Area of Interest and Preprocessing

Having, for consecutive n-columns, the coordinates of points y_{NFL}(n) and y_{RPE}(n), the area of interest has been determined as the area satisfying the condition y_{NFL}(n)<y<y_{RPE}(n). An example of the area L_{GR}, originating from the L_{M} image presented in Fig. 4-3 after filtration with a median filter of the arbitrarily chosen 7×7 size, is shown in Fig. 4-96. The L_{GR2} image relates to a similar fragment of the L_{M} image, but before filtration.

The images presented in Fig. 4-96 and Fig. 4-97 originate from the following algorithm:

```matlab
yrpe_onl=round(yrpe_onl);
xrpe_onl=round(xrpe_onl);
[yrpe_onl,xrpe_onl]=HIERARHICALL_DENSE2(yrpe_onl,xrpe_onl);
ynfl=round(ynfl(1,:));
xnfl=round(xnfl(1,:));
Lgr=[]; Lgr2=[];
fun2 = @(x) median(x(:))*ones(size(x));
Lmf=blkproc(Lm,[3 3],[3 3],fun2);
m1n2=[];
for ix=1:length(yrpe_onl)
    m1=yrpe_onl(ix);
    n1=xrpe_onl(ix);
    xynfl=[ynfl',xnfl'];
    xynfl_=xynfl(xynfl(:,2)==n1,:);
    m1n2(ix,1:2)=[m1,0];
    if ~isempty(xynfl_)
        m2=xynfl_(1,1);
        n2=xynfl_(1,2);
        Lgr2(1:(m2-m1+1),ix)=Lm(m1:m2,n2);
        Lgr(1:(m2-m1+1),ix)=Lmf(m1:m2,n2);
        m1n2(ix,1:2)=[m1,n2];
    end
end
figure; imshow(Lgr);
figure; imshow(Lgr2);
```

where the function `HIERARHICALL_DENSE2` looks as follows:

```matlab
function [y_out,x_out]=HIERARHICALL_DENSE2(y_in,x_in)
y_out=[0]; x_out=[0];
y_in(:,x_in==0)=[];
x_in(:,x_in==0)=[];
for i=1:(length(y_in)-1)
    m_1=y_in(i:i+1);
    n_12=x_in(i:i+1);
    x_out(1:end+length(n_12(1):n_12(2)))=[x_out(:)',n_12(1):n_12(2)];
    x_out(:,end)=[];
    if (m_1(2)-m_1(1))~=0
        w1=m_1(1):(m_1(2)-m_1(1))/(length(n_12(1):n_12(2))-1):m_1(2);
    else
        w1=ones([1 length(n_12(1):n_12(2))])*m_1(1);
    end
    y_out(1:end+length(n_12(1):n_12(2)))=[y_out(:)',[w1]];
    y_out(end)=[];
end
y_out=y_out(2:end);
x_out=x_out(2:end);
```

The first stage of algorithm operation is sequential performance of convolution with mask h, i.e.:

for angles θ from the range 80° to 100°, every 1°.

where m – row, n – column, θ – angle of mask h rotation, M_{h},N_{h} – number of mask h rows and columns.

This fragment implementation is presented below:

```matlab
t=-4:1:4;
f=OCT_GAUSS(t,1);
f=f/max(f(:));
f=f*(4+1)-abs(2);
h=ones([9 1])*f;
h=imresize(h,[3 3],'bicubic');
h(:,round(size(h,2)/2):end)=max(h(:));
h=imresize([-2 -2 0 2 2],[15 5],'bicubic');
Lggr=zeros(size(Lgr));
Lphi=zeros(size(Lgr));
for phi=-100:10:-80
    h_=imrotate(h,phi,'bicubic');
    Lsgr=conv2(Lgr,h_,'same');
    Lpor=Lggr>Lsgr;
    Lphi=Lpor.*Lphi+(~Lpor)*phi;
    Lggr=max(Lggr,Lsgr);
end
figure; imshow([mat2gray(Lggr)]);
figure; imshow(Lggr,[0 0.5])
```

where
`OCT_GAUSS`:

```matlab
function y = OCT_GAUSS(x,std)
y = exp(-x.^2/(2*std^2))/(std*sqrt(2*pi));
```

The resultant images are shown in Fig. 4-98 and Fig. 4-99.

The range of θ angle values was selected because of the position of the sought layers, which in accordance with medical premises should be 'nearly' parallel, with small angular deviations. Any pathology featuring a significant angular change of the y_{NFL}(n) and y_{RPE}(n) layers will already have been corrected by the conversion to the L_{GR} image. The methodology of performing consecutive convolutions (60) for successively changing θ angle values, and then calculating the maximum over the consecutive resultant images (61), was described in detail in [25] and [40]. The resultant image L_{θ}, obtained on the basis of the code presented above and:

```matlab
figure; imshow(Lphi,[-100 -80]);
colormap('jet'); colorbar
```

is shown in Fig. 4-100.
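The maximum-over-orientations scheme just described can be sketched in a few lines of Python (the book's code is Matlab; the kernel construction below is a simplified stand-in for the mask h, not the book's filter):

```python
import numpy as np

def oriented_kernel(theta_deg, size=5):
    """A simple oriented step-edge kernel: pixels ahead of the line through
    the centre (along direction theta) get +1, pixels behind get -1."""
    c = (size - 1) / 2
    yy, xx = np.mgrid[0:size, 0:size] - c
    t = np.deg2rad(theta_deg)
    return np.sign(xx * np.cos(t) + yy * np.sin(t))

def max_oriented_response(img, thetas, size=5):
    """Convolve with each oriented kernel and keep, per pixel, the maximum
    response and the angle that produced it — the L_GGR / L_theta pair.
    Plain-numpy 'valid' correlation, adequate for a sketch."""
    best, best_t = None, None
    M, N = img.shape
    for th in thetas:
        k = oriented_kernel(th, size)
        resp = np.zeros((M - size + 1, N - size + 1))
        for i in range(resp.shape[0]):
            for j in range(resp.shape[1]):
                resp[i, j] = np.sum(img[i:i+size, j:j+size] * k)
        if best is None:
            best, best_t = resp, np.full(resp.shape, float(th))
        else:
            upd = resp > best
            best = np.where(upd, resp, best)
            best_t = np.where(upd, float(th), best_t)
    return best, best_t
```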

The division into individual layers consists here in tracking the changes of position of individual layer points for consecutive n-columns of the L_{GGR} image. This issue is not trivial, mainly due to difficulties in identifying (in the general case) the number of visible layers, due to the lack of their continuity, and also, very often, due to their decay caused by e.g. existing shadows [2], [4], [18]. These issues are illustrated by the graph of L_{GGR} image grey level changes presented in Fig. 4-101. The changes of grey levels are marked in red and green for consecutive columns of the L_{GGR} image (for the presented example, n=120 and n=121). The tracking consists here in suggesting a method of connecting individual peaks of the presented courses, which will be done in the next section.

### 4.11.6. Layers Points Analysis and Connecting

The localisation and determination of further layer positions, once the NFL, RPE and ONL layers are available, is one of the most difficult issues. The graph shown in Fig. 4-101 clearly confirms this presumption.

In the first stage it is necessary to find the positions of the maxima on the graph from Fig. 4-101. To this end the following operation was carried out:

$${\text{L}}_{\text{UGR}}(\text{m},\text{n})={\text{L}}_{\text{GGR}}(\text{m},\text{n})-{\text{L}}_{\text{SR}}(\text{m},\text{n})$$

where L_{SR} is the image resulting from filtering the L_{GGR} image with an averaging filter whose mask size of 9×9 was chosen experimentally, i.e.:

```matlab
Lugr=(Lggr-conv2(Lggr,ones(9),'same')/81);
```

This procedure cuts out the unevenness of lighting visible in the image in Fig. 4-96 and thereby in the graph from Fig. 4-101. The graph for the same range of rows and columns, i.e. n=120 and n=121 for m∈(5,35), of the L_{UGR} image is shown in Fig. 4-101.
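A plain-Python sketch of this illumination-flattening step (a box mean subtracted from the image; the zero padding at the borders mimics `conv2` with the 'same' option):

```python
import numpy as np

def highpass_box(img, k=9):
    """Subtract a k x k box-filtered (mean) image from the original,
    flattening slowly varying illumination before peak finding."""
    pad = k // 2
    P = np.pad(img, pad)                 # zero padding at the borders
    M, N = img.shape
    mean = np.zeros((M, N), dtype=float)
    for i in range(M):
        for j in range(N):
            mean[i, j] = P[i:i+k, j:j+k].sum() / (k * k)
    return img - mean
```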

The Matlab implementation of the described course of action looks as follows:

figure; plot(Lggr(:,121),'-g*'); grid on; hold on
plot(Lggr(:,121-1),'-r*'); hold off
ylabel('L_{GGR}(m,120), L_{GGR}(m,121)','FontSize',20)
xlabel('m','FontSize',20)
Lugr = Lggr - conv2(Lggr, ones(9), 'same')/81;
figure; plot(Lugr(:,121),'-g*'); grid on; hold on
plot(Lugr(:,121-1),'-r*'); hold off
ylabel('L_{UGR}(m,120), L_{UGR}(m,121)','FontSize',20)
xlabel('m','FontSize',20)

Points p(i,n) (where i is the index of a consecutive point in the n^{th} column) are shown in Fig. 4-102 on the L_{UGR} image. The position of the individual p(i,n) points for the L_{UGR} image was determined by finding consecutive maximum values in the binarised columns (the binarisation threshold was set close to zero, at 0.01). The source code responsible for this part is presented below:

figure; imshow(Lugr); hold on
for n=1:size(Lugr,2)
    Lnd=Lugr(:,n);
    Llab=bwlabel(Lnd>0.01);
    Lnr=1:length(Llab);
    for io=1:max(Llab)
        Lnd_=Lnd;
        Lnd_(Llab~=io)=0;
        Lnrio=Lnr(Lnd_==max(Lnd_(:)));
        plot(n,Lnrio(1),'.r')
    end
end

The generated image is shown below.

The following assumptions were made in the process of connecting individual p(i,n) points:

- p_{zx} – parameter responsible for the permissible range of points connecting (analysing) on the ox axis,
- p_{zy} – parameter responsible for the permissible range of points connecting (analysing) on the oy axis,
- p_{zc} – parameter responsible for the permissible range on the ox axis in which the optimum connection points are sought,
- each new point, if it does not fulfil the assumptions on the p_{zx} and p_{zy} distance, is assumed to be the first point of a new layer,
- each point may belong to only one line, which by definition excludes the possibility of dividing or connecting lines.

As an illustration the process of connecting for typical and extreme cases is shown below (Fig. 4-104).

Fig. 4-104 shows demonstrative diagrams of typical and extreme cases of connecting p(i,n) points into lines marked as w(j,n), where j is the line number and n the column. Fig. 4-104 a) shows a typical case where, of the two points p(1,1) and p(2,1), point p(1,1) was connected with p(1,3) because of the smaller distance on the oy axis. Fig. 4-104 b) shows the reverse, more difficult situation compared with Fig. 4-104 a), because points p(1,1) and p(2,1) are equidistant. In this case, because the points p(i,n) for each column are determined top-down, the connection is made between p(1,1) and p(1,3). Fig. 4-104 c) shows a situation similar to Fig. 4-104 b). In Fig. 4-104 d) the system of connections is visible for the case where there are discontinuities in the determination of the points comprised by individual layers; parameter p_{zc} is responsible for handling them. In the case of Fig. 4-104 e) an erroneous crossing of lines occurred. Points p(2,2), p(1,4) and p(2,6) were properly connected, while point p(1,2) was improperly connected with p(2,6). This results from the adopted principle of connecting with the nearest point combined with a too large range of the p_{zc} parameter, which in this case 'allowed' connecting p(1,2) and p(2,6). Fig. 4-104 f) is a typical example where the line formed from points p(2,2) and p(2,4) ends and a new line starts from point p(1,6). This example is interesting in that, if the parameters p_{zx}, p_{zy} and p_{zc} allowed it, one line created from points p(1,2), p(1,4) and p(1,6) should be obtained as a result, as well as a second line from p(2,2), p(2,4) and p(2,6). Obviously, having only such data (the coordinates of the points p(i,n)), it is not possible to determine which solution is the right one. The situations presented in Fig. 4-104 a), b) and c) have another significant feature: by definition they do not allow the analysed layers to be connected (Fig. 4-104 a), b)) or divided (Fig. 4-104 c)).

For parameters p_{zx}=2, p_{zy}=2, p_{zc}=6 and points p(i,n) of L_{UGR} image shown in Fig. 4-102 the following results were obtained - Fig. 4-105.

The implementation of the discussed algorithm fragment is presented below. The Reader is already familiar with the first part from the previous implementation, i.e.:

figure; imshow(Lugr); hold on
rr_d_o=0; rr_u_o=0; r_pp=[];
rrd=[]; rru=[]; rrd_pol=[]; rrd_nr=[];
rrd_pam=[]; rru_pam=[];
for n=1:size(Lugr,2)
    Lnd=Lugr(:,n);
    Llab=bwlabel(Lnd>0.01);
    Lnr=1:length(Llab);
    rr_d=[];
    for io=1:max(Llab)
        Lnd_=Lnd;
        Lnd_(Llab~=io)=0;
        Lnrio=Lnr(Lnd_==max(Lnd_(:)));
        rr_d=[rr_d,Lnrio(1)];
    end
…

The second part, in turn, contains the actual solution of the problem described, i.e.:

…
    pzc=10; pzy=4;
    rrd_pol(1:length(rr_d),n)=rr_d;
    if n==1
        rrd_nr(1:length(rr_d),n)=(1:length(rr_d))';
    else
        rrd_nr(1:length(rr_d),n)=0;
    end
    wu=[]; wd=[]; wuiu=[]; wdiu=[];
    rrd(1:length(rr_d),n)=rr_d;
    rr_dpp=rr_d;
    for ni=(n-1):-1:(n-pzc)
        if ni>0
            rr_d=rrd(:,n);
            rr_d_o=rrd(:,ni);
            rrd_nr_iu=rrd_nr(:,ni);
            if (~isempty(rr_d))&(~isempty(rr_d_o))
                uu=ones([length(rr_d) 1])*rr_d_o';
                nrnr=ones([length(rr_d) 1])*rrd_nr_iu';
                dd=rr_d*ones([1 length(rr_d_o)]);
                ww=ones([size(dd-uu,1) 1])*min(abs(dd-uu))==abs(dd-uu);
                ww_=min(abs(dd-uu),[],2)*ones([1 size(dd-uu,2)])==abs(dd-uu);
                ww=ww_.*ww;
                ww(abs(dd-uu)>pzy)=0;
                ww(dd==0)=0;
                ww(uu==0)=0;
                ww(rr_d==0,:)=0;
                ww(:,rr_d_o==0)=0;
                wu_=ww.*uu; wu_(wu_==0)=[]; wu=[wu,wu_];
                wd_=ww.*dd; wd_(wd_==0)=[]; wd=[wd,wd_];
                wuiu_=ones(size(wu_))*(ni); wuiu=[wuiu,wuiu_];
                wdiu_=ones(size(wd_))*(n); wdiu=[wdiu,wdiu_];
                nrnr=sum(nrnr.*ww,2);
                nrnrw=sum(ww,2);
                niu=max(rrd_nr(:))+1;
                for gf=1:length(nrnr)
                    if (nrnr(gf)==0)&&(nrnrw(gf)==1)
                        nrnr(gf)=niu;
                        wvv=ww(gf,:);
                        rrd_nr(wvv==1,ni)=niu;
                        niu=niu+1;
                    end
                end
                rpnr=rrd_nr(:,n);
                rpnr=rpnr+nrnr;
                rrd_nr(:,n)=rpnr;
                rr_d(sum(ww,2)~=0)=0;
                rr_d_o(sum(ww,1)~=0)=0;
                rrd(1:length(rr_d),n)=rr_d;
                rrd(1:length(rr_d_o),ni)=rr_d_o;
            end
        end
    end
    rrd(1:length(rr_dpp),n)=rr_dpp;
    for j=1:length(wu)
        line([wuiu(j) wdiu(j)],[wu(j) wd(j)],'LineWidth',2,'Color','r')
    end
    n
end

Fig. 4-106 shows the arrangement of the individual j^{th} lines w(j,n) on the input image L_{M}. Fig. 4-107, in turn, shows other results of grouping the points p(i,n) for parameters p_{zx}=2, p_{zy}=2, p_{zc}=6 on other L_{UGR} images.

Two characteristic elements may be noticed. The first is the existence of short lines, which constitute a disturbance ('short' is understood here as not longer than 10 to 20 points). The second characteristic element is that only the borders of the transition (looking in the sequence of rows, top-down) from a lighter to a darker area are determined. This is caused by the asymmetric form of the mask h (59). Hence a supplementary approach consists in performing the operations presented, starting from relationship (59), for the suggested h but for θ angles from the range -80° to -100° every 1°. The results obtained are presented below (Fig. 4-108).
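The supplementary pass over the mirrored angle range can be sketched as below. This is a minimal illustration only: h0 is a stand-in asymmetric mask (the actual mask h of relationship (59) should be substituted), the image Lgr is assumed to be in the workspace, and imrotate requires the Image Processing Toolbox.

```matlab
% Sketch: directional filtering of (60)-(61) repeated for the mirrored
% angle range. h0 is a placeholder for the asymmetric mask h of (59).
h0 = [-1 -1 -1; 0 0 0; 1 1 1];
Ltheta = -inf(size(Lgr));
for theta = -100:1:-80
    h = imrotate(h0, theta, 'bilinear', 'crop');   % mask rotated by theta
    Ltheta = max(Ltheta, conv2(Lgr, h, 'same'));   % per-pixel maximum (61)
end
figure; imshow(Ltheta, []); colormap('jet'); colorbar
```

The per-pixel maximum over the angle range keeps, at every point, the strongest response among all tested mask orientations, so layers of slightly different local slopes are all emphasised in the single resultant image.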

Further on, denoting the w(j,n) lines obtained for h rotated within θ angles from the range -80° to -100° as w_{1}(j_{1},n), and the w(j,n) lines obtained for h rotated within θ angles from the range 80° to 100° as w_{2}(j_{2},n), the following operations were performed:

- the locations of the last p(i,n) points of the consecutive w_{1}(j_{1},n) and w_{2}(j_{2},n) lines have been checked,
- the approximation of the last points of the w_{1}(j_{1},n) and w_{2}(j_{2},n) lines by a second degree polynomial was carried out,
- it has been checked whether the obtained next points, extending the analysed line j_{1}*, connect with another line j_{1}≠j_{1}* (or similarly j_{2}≠j_{2}*).

These operations have been precisely described in the next section.

### 4.11.7. Line Correction

The determined w_{1}(j_{1},n) and w_{2}(j_{2},n) lines are shown as an example in Fig. 4-108. The lines correction consists in connecting them, provided that the extension of the consecutive points of the approximated line coincides, within a specific range, with the beginning of the next one. The following assumptions were made in the process of connecting individual w(j,n) lines:

- p_{kx} – parameter responsible for the permissible range of lines connecting (analysing) on the ox axis,
- p_{ky} – parameter responsible for the permissible range of lines connecting (analysing) on the oy axis,
- p_{kc} – parameter responsible for the range on the ox axis in which the line end is approximated,
- p_{ko} – parameter responsible for the size of the ox axis analysis window,
- the process of lines connecting applies only to lines which end and start – connecting of branches is not carried out,
- only those lines are connected which have a minimum of 90% of the analysed points falling within the range ±p_{ky} with respect to the approximated line (Fig. 4-109),
- lines connection consists in changing their labels – e.g. in the case of connecting lines w(1,n) and w(2,n), the label is changed from '2' to '1'.

The presented methodology works well for the tested image resolutions when the approximation is carried out using a first or second degree polynomial and when the following parameter values are assumed: p_{kx}=20, p_{ky}=4, p_{kc}=10, p_{ko}=10. The example results obtained for the last two points (marked wa'), the last three points (marked wa'') and the last four points (marked wa''') are shown in Fig. 4-110.
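The end-of-line approximation and the 90% acceptance criterion described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the monograph's implementation: the vectors xe, ye (last points of the analysed line) and xs, ys (starting points of a candidate line) are hypothetical names introduced here for clarity.

```matlab
% Sketch: approximate the end of line w(j1,n) and test the connection.
% Assumes xe, ye hold the column/row coordinates of the analysed line end
% and xs, ys those of the candidate line start (hypothetical variables).
pkc = 10; pky = 4;                       % parameter values from the text
ke  = min(pkc, numel(xe));               % at most pkc last points
p   = polyfit(xe(end-ke+1:end), ye(end-ke+1:end), 2);  % 2nd degree fit
yp  = polyval(p, xs);                    % extrapolated row positions at xs
ok  = abs(yp - ys) <= pky;               % points inside the +/- pky band
if mean(ok) >= 0.9                       % the 90% criterion of Fig. 4-109
    % connect the lines: relabel the candidate line with label j1
end
```

A first degree polynomial is obtained by replacing the degree 2 with 1 in polyfit; as noted above, both degrees gave comparable results at the tested resolutions.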

### Fig. 4-111 Image fragment before lines correction

### Fig. 4-112 Image fragment after lines correction obtained for parameters p_{kx}=20, p_{ky}=4, p_{kc}=10, p_{ko}=10

From the results obtained in the initial analysis of the approximation, a direct relationship is visible between obtaining correct results of connecting the w(j,n) lines and the number of points analysed at their end. In particular, when connecting of lines which – looking along the ox axis – have the same values is allowed, the situation shown in Fig. 4-113 may occur.

As the implementation of this fragment in Matlab is trivial, we leave this part to be written by the Reader.
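One possible sketch of the check the Reader is asked to implement is given below. It assumes (hypothetically) that x1 and x2 are vectors of the column indices of the two candidate lines; two lines are allowed to connect only if their ranges on the ox axis do not overlap, which excludes the situation of Fig. 4-113.

```matlab
% Sketch: forbid connecting lines that share columns on the ox axis.
% x1, x2 - column indices of the two candidate lines (assumed variables).
overlap = max(x1) >= min(x2) && max(x2) >= min(x1);  % interval overlap test
if ~overlap
    % one line ends before the other starts - connection is allowed
end
```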

### 4.11.8. Layers Thickness Map and 3D Reconstruction

The analysis of the L_{M} images sequence, and precisely the acquiring of the NFL, RPE and ONL layers, allows performing a 3D reconstruction and layers thickness measurement. The designation of an image sequence with an upper index (i), where i ∈ {1, 2, 3, …, k-1, k}, has been adopted, i.e. L_{M}^{(1)}, L_{M}^{(2)}, L_{M}^{(3)}, …, L_{M}^{(k-1)}, L_{M}^{(k)}. For a sequence of 50 images the positions of the NFL (Fig. 4-114), RPE (Fig. 4-115) and ONL (Fig. 4-116) layers were measured, as well as the ONL-RPE layer thickness (Fig. 4-117).

The 3D reconstruction performed based on the L_{M}^{(i)} images sequence is the key element crowning the results obtained from the suggested algorithm. The sequence of images, and more precisely the sequence of the NFL^{(i)}(n), RPE^{(i)}(n) and ONL^{(i)}(n) layers positions, provides the basis for the 3D reconstruction of a tomographic image. For an example sequence of 50 images and a single image resolution L_{M}^{(i)} of M×N = 256×512, a 3D image is obtained composed of three layers NFL, RPE and ONL of 50×512 size. The results are shown in Fig. 4-118 for an example reconstruction of the original images (without the processing described above) based on pixel brightness, and in Fig. 4-119 for the reconstruction performed using the algorithm described above, on the basis of the NFL^{(i)}(n), RPE^{(i)}(n) and ONL^{(i)}(n) information.

An obvious consequence of the layers presented in Fig. 4-119 is the possibility of automatic determination of the thickest and the thinnest places between any points.
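Such a determination can be sketched as below. This is a minimal illustration under stated assumptions: yONL and yRPE are hypothetical k-by-N matrices in which row i holds the ONL^{(i)}(n) and RPE^{(i)}(n) positions (in pixels) for image L_{M}^{(i)}.

```matlab
% Sketch: ONL-RPE thickness map and its extrema for a k-image sequence.
% yONL, yRPE - k x N matrices of layer row positions (assumed variables).
Lthick = abs(yRPE - yONL);                   % thickness map, k x N [px]
[tmax, imax] = max(Lthick(:));               % thickest place
[tmin, imin] = min(Lthick(:));               % thinnest place
[iMax, nMax] = ind2sub(size(Lthick), imax);  % (image, column) of maximum
[iMin, nMin] = ind2sub(size(Lthick), imin);  % (image, column) of minimum
figure; mesh(Lthick)
xlabel('n'); ylabel('i'); zlabel('ONL-RPE thickness [px]')
```

Multiplying Lthick by the axial resolution per pixel would convert the map from pixels to micrometres.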

### 4.11.9. Evaluation of Hierarchical Approach

The presented algorithm, after a minor time optimisation, detects the NFL, RPE and ONL layers within a few dozen milliseconds on a computer with a 2.5 GHz Intel Core 2 Quad processor. The time was measured as an average over the analysis of 700 images, dividing individual images into blocks A (Fig. 4-81) of consecutive sizes 16×16, 8×8, 4×4, 2×2. This time may be reduced by modifying the number of approximation blocks, at the cost of increasing the layer position identification error – the results are shown in the table below (Tab 4-1).

The specification of the analysis times of individual algorithm stages presented in the table above clearly shows that the longest execution times belong to the first stage of image preprocessing, where filtration with a median filter prevails (in terms of execution time), and to the last stage of precise determination of the RPE and ONL layers position. The precise RPE-ONL breakdown requires the analysis, and mainly the correction, of the RPE and ONL points positions in all columns of the image at the most precise approximation (because of the small distance between RPE and ONL it is not possible to perform this breakdown at earlier approximations). Therefore the reduction of computation times is possible only at the cost of increasing the error of the layers thickness measurement. For example, for the analysis in the first approximation for A of 32×32 size and then for 16×16, gross errors are obtained, generated in the first stage and duplicated in the next ones. The greatest accuracy is obtained for approximations of A of 16×16 size, and then of 8×8, 4×4, 2×2 and 1×1; however, the computation time then nearly doubles.

## 4.12. Evaluation and Comparison of Suggested Approaches Results

The methods presented – classical, Canny, random [28] and hierarchical [27] – give correct results in the detection (recognition) of the RPE, IS/OS, NFL or OPL layers on a tomographic eye image. Differences between the proposed methods become visible only when comparing their effectiveness in the analysis of the mentioned several hundred tomographic images. When comparing these methods it is necessary to consider the accuracy of layer recognition, the algorithm responses to pathologies and to the optic nerve head, and the operating speed, in this case measured on a computer (P4 CPU 3 GHz, 2 GB RAM).

The following table, Tab 4-2, presents a cumulative comparison of the proposed algorithms, and Tab 4-3 a comparison of the results obtained using the discussed algorithms, taking into account typical and critical fragments of the individual algorithms' operation.

The random method, described in this monograph as an example, gives correct results in contours determination (layers separation) both on OCT images and on others for which classical methods of contour determination give no results, or results which do not provide a continuous contour. The drawbacks of the algorithm include a high influence of noise on the results obtained. This follows from the fact that pixels of relatively high value resulting from a disturbance increase the probability of selecting a starting point, and hence a component contour, at this place. The second drawback is the computation time, which grows with the number of selected points and/or with the condition at which searching for the next points o_{i,j+1} is stopped.

The specification of the analysis times of the individual stages of the hierarchical algorithm presented in the table above clearly shows that the longest execution times belong to the first stage of image preprocessing, where filtration with a median filter prevails (in terms of execution time), and to the last stage of precise determination of the RPE and IS/OS layers position. The precise RPE-IS/OS breakdown requires the analysis, and mainly the correction, of the RPE and IS/OS points positions in all columns of the image at the most precise approximation (because of the small distance between RPE and IS/OS it is not possible to perform this breakdown at earlier approximations). Therefore the reduction of computation times is possible only at the cost of increasing the error of the layers thickness measurement. For example, for the analysis in the first approximation for A of 32×32 size and then for 16×16, gross errors are obtained, generated in the first stage and duplicated in the next ones. The greatest accuracy is obtained for approximations of A of 16×16 size, and then of 8×8, 4×4, 2×2 and 1×1; however, the computation time then nearly doubles.

The 3D reconstruction performed based on the L_{M}^{(i)} images sequence is the key element crowning the results obtained from the suggested algorithm. The sequence of images, and more precisely the sequence of the y_{NFL}^{(i)}(n), y_{RPE}^{(i)}(n) and y_{IS/OS}^{(i)}(n) layers positions, provides the basis for the 3D reconstruction of a tomographic image. For an example sequence of 50 images and a single L_{M}^{(i)} image resolution of M×N = 256×512, a 3D image is obtained composed of three layers NFL, RPE and IS/OS of 50×512 size. Results are shown in Fig. 4-118 for an example reconstruction of the original images (without the processing described above) based on pixel brightness, and in Fig. 4-119 for the reconstruction performed using the algorithm described above, on the basis of the y_{NFL}^{(i)}(n), y_{RPE}^{(i)}(n) and y_{IS/OS}^{(i)}(n) information.


