NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Koprowski R, Wróbel Z. Image Processing in Optical Coherence Tomography: Using Matlab [Internet]. Katowice (Poland): University of Silesia; 2011.


Chapter 3. ANALYSIS OF ANTERIOR EYE SEGMENT

The first part of this monograph presents the issues related to the analysis of the anterior eye segment, in terms of the selection of algorithms analysing the filtration angle and the anterior chamber volume. These are among the fundamental issues not resolved so far in the applications available in modern tomographs: such calculations are either not possible at all or not fully automated. The algorithms presented below not only fully resolve the aforementioned problem but also indicate other possible ways to resolve it.


3.1. Introduction to Anterior Eye Segment Analysis

The filtration angle, i.e. the iridocorneal angle (Fig. 3-1, Fig. 3-2), is the structure responsible for the aqueous humour drainage from the eye's anterior chamber. Both a correct production of the aqueous humour by the ciliary epithelium and a correct rate of aqueous humour drainage through the filtration angle are conditions for a correct intraocular pressure. All anatomical anomalies, the angle narrowing and the angle closing, result in a more difficult drainage and in a pressure increase. The examination that allows determining the angle width is called gonioscopy. Based on the angle width, glaucoma is classified into open angle glaucoma and closed angle glaucoma [16], [18].

Fig. 3-1. A section of the anterior segment of an eye with marked positions of characteristic areas.


Fig. 3-2. An example of the image of the anterior segment of an eye.


The methods presented are not precisely defined, and each time the doctor must choose the measuring method used. As a consequence, the results obtained are not reliable and are difficult to verify and to compare, both against the standard and between doctors.

So far all the measurements mentioned have been performed manually, by indicating appropriate characteristic points. However, in cases of individual variation or pathology these methods differ in accuracy and repeatability of measurements, which results primarily from their nature and from the measured quantities. The AOD (Angle Opening Distance) method (Fig. 3-3.a) consists in the measurement of a distance, the TIA (Trabecular-Iris Angle) method (Fig. 3-3.b) in the measurement of an angle, and the TISA (Trabecular-Iris Space Area) method (Fig. 3-3.c) in the measurement of an area [20] (the methods are shown together in Fig. 3-4).

Fig. 3-3. Methods for the filtration angle measurement: a) AOD (Angle Opening Distance) method, b) TIA (Trabecular-Iris Angle) method, c) TISA (Trabecular-Iris-Space Area) method.


Fig. 3-4. Methods for the filtration angle measurement: AOD (Angle Opening Distance) method, TIA (Trabecular-Iris Angle) method, TISA (Trabecular-Iris-Space Area) method.


As can be seen from the measurement data presented (Fig. 3-3), the AOD method does not cope sufficiently well with pathological cases, which makes the results obtained unreliable in diagnostic terms. What is worse, using this method in accordance with its definition, a doctor consciously makes a fairly large error of the method (depending on the degree of pathology). Therefore an automatic method for the filtration angle measurement has been proposed, together with an original measurement method (based on the aforementioned AOD, TISA and TIA) free of the errors mentioned above. However, further considerations should be preceded by a review of the hitherto methods comprised by the software delivered with an OCT instrument.

3.2. Review of Hitherto Filtration Angle Analysis Methods

The hitherto filtration angle analysis methods may be easily assessed, because in every software package attached to a tomograph these are manual methods. An operator indicates reference points characteristic of the specific measurement method (Fig. 3-5). Partial automation of the angle analysis, by "dragging" the marked measuring line to the object contour, is rare. However, irrespective of whether the method is computer assisted or fully manual, the measurement is not automated and its result is affected by the precision of point indication by the operator. Hence these methods are not free of errors, both of the operator and of the measurement methodology itself. The error related to the lack of measurement repeatability is especially troublesome in statistical calculations.

Fig. 3-5. Fragments of commercial software [38], attached to OCT Visante instruments, operation [37].


Summing up, the software available now has the following deficiencies:

  • missing 3D reconstruction, and thereby the possibility to calculate the volume of selected parts of the anterior eye segment,
  • missing full automation,
  • calculations which could be carried out manually are possible only to a very limited extent,
  • large measurement errors, e.g. in the case of filtration angle measurement for pathological conditions.

Taking into account the aforementioned deficiencies, a dedicated profiled algorithm has been suggested, designed for automated analysis excluding any involvement of the operator. The description of the algorithm and of its parameters is preceded by sections on reading the files from OCT instruments and on the assessment of errors in manual measurements.

3.3. Verification of the Sensitivity of the Proposed Methods

This section is aimed at the analysis of the properties (mainly the sensitivity to parameter changes) of the methods specified in the previous section (AOD, TIA and TISA).

The need to evaluate and verify the precision of individual measurement methods in the presence of disturbances results from situations occurring when the coordinates of characteristic points (marked in red in Fig. 3-3) are indicated manually and inaccurately. The location of the points mentioned depends strictly on the measurement method chosen and on the operator's accuracy, and is forced by all types of software delivered by OCT vendors. The values of errors obtainable at manual point indication are the subject of these considerations. The final result, documented by error values, is the sensitivity of each analysed method (AOD, TIA, TISA) to the operator's error. On this basis, conclusions addressed to the operator are formulated in the summary, indicating which coordinate of a point indicated by the operator affects the final error of the filtration angle measurement, and in what way.

3.3.1. Methodology for Measuring Methods Sensitivity to Parameters Change

The verification of the sensitivity of the AOD, TIA and TISA methods to parameter changes (the operator's error) was carried out, as in the previous section, both taking and not taking into account the semi-automation implemented in commercial software. Semi-automatic marking of the points characteristic of individual methods relies on dragging the marked point to the edge, most often using the active contour method. However, in both variants the result is affected by the place indicated by the operator. Preliminary measurements have confirmed, depending on the operator, an error of point indication of around ±10 pixels, which at the resolution of 32 pixels/mm gives an error of around ±0.31 mm. For the sake of comparing the sensitivity of the AOD, TIA and TISA methods to parameter changes, the scope of analysis has been narrowed to two points, p1 and p2 (Fig. 3-6).
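The conversion quoted above can be checked directly:

±10 px / (32 px/mm) = ±0.3125 mm ≈ ±0.31 mm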

Fig. 3-6. Location of points pi indicated by the operator.


On this basis the following assumptions have been formulated for the studies carried out:

  • the range of characteristic points position variability: ±10 pixels,
  • the software verified in its semi-automatic version,
  • the analysis, for comparative reasons, narrowed to points p1 and p2,
  • the analysed image resolution: 32 pixels/mm.

The measurement error calculated for the AOD method – δAOD, for TIA – δTIA, and for TISA – δTISA – will be calculated as the difference between the measured and the correct value, expressed as a percentage of the nominal value, where the nominal value is understood as the correct (standard) value, i.e.:

δAOD = (dM − dW)/dW · 100 [%],   δTIA = (αM − αW)/αW · 100 [%],   δTISA = (sM − sW)/sW · 100 [%]   (1)

where:

dM, dW – measured and standard distance, respectively, defined as:

dM = √((y1 − y2)² + (x1 − x2)²)   and   dW = √((yW1 − yW2)² + (xW1 − xW2)²)   (2)

αM, αW – measured and standard angle, respectively,

sM, sW – measured and standard area, respectively.
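As a minimal illustration of formula (1) – with hypothetical measured and standard values, not taken from the book – the three errors may be computed in Matlab as follows:

```matlab
% Hypothetical example values (for illustration only)
dM=1.05; dW=1.00;            % measured and standard distance [mm]
aM=32;   aW=30;              % measured and standard angle [deg]
sM=0.26; sW=0.25;            % measured and standard area [mm^2]
dAOD =(dM-dW)/dW*100;        % delta_AOD  = 5   [%]
dTIA =(aM-aW)/aW*100;        % delta_TIA  ~ 6.7 [%]
dTISA=(sM-sW)/sW*100;        % delta_TISA ~ 4   [%]
```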

The sensitivity of a method to parameter changes will be understood as the change of the measured value caused by a change of one parameter (a coordinate of a point indicated by the operator), referred to the measured value and expressed as a percentage, i.e.:

SAOD(pi) = (∂dM/(dM·∂xi)) · 100 [%]   (3)

for small increments it is possible to write

SAOD(pi) ≈ (ΔdM/(dM·Δxi)) · 100 [%]   (4)

where:

xi – the x coordinate of point pi indicated by the operator, with i numbered in accordance with Fig. 3-6 (in accordance with the assumptions only two points, p1 and p2, are analysed).

Analogously for the other methods:

STIA(pi) ≈ (ΔαM/(αM·Δxi)) · 100 [%],   STISA(pi) ≈ (ΔsM/(sM·Δxi)) · 100 [%]   (5)
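Formula (4) can be evaluated by a finite-difference experiment. The sketch below (the point coordinates are hypothetical, chosen only for illustration) shifts p1 by one pixel along x and refers the resulting change of the AOD distance to the measured value:

```matlab
% Hypothetical coordinates of points p1 and p2 [pixels]
x1=100; y1=50; x2=120; y2=80;
dM =sqrt((y1-y2)^2+(x1-x2)^2);      % distance before the shift, formula (2)
dx =1;                              % 1-pixel shift of p1 along x
dM2=sqrt((y1-y2)^2+(x1+dx-x2)^2);   % distance after the shift
S_AOD=(dM2-dM)/dM/dx*100;           % sensitivity [%/pixel], formula (4)
```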

The calculations have been carried out for an artificial image shown in Fig. 3-7, which may be downloaded from the book site http://robert.frk.pl and which, after loading into the Matlab workspace, should be converted to sorted x, y coordinates, i.e.:

Fig. 3-7. Binary test image illustrating the filtration angle.


L1=imread('D:\OCT\reference.jpg');
L1=1-double(L1(:,:,1)>128);
figure; imshow(L1);
[xx,yy]=meshgrid(1:size(L1,2),1:size(L1,1));
yy(L1~=1)=[];
xx(L1~=1)=[];
xy=sortrows([xx',yy'],2);
podz=750;

xl=xy(1:podz,1);
yl=xy(1:podz,2);
xp=xy(podz:end,1);
yp=xy(podz:end,2);
figure; plot(xl,yl,'-r*'); hold on; grid on
plot(xp,yp,'-g*'); xlabel('x'); ylabel('y')

This code produces the coordinates (xl,yl) and (xp,yp) of the left and right hand side of the measured angle, respectively.

The next section will present the results obtained using this artificial image.

3.3.2. Methods Sensitivity to Parameters Change

Measurements were carried out by changing the position of points p1 and p2 in the x coordinate within xw ± 10 pixels, assuming automated dragging to the contour line along the y axis (Fig. 3-8). Example variability ranges of the measured values for the individual methods are shown in the following graphs (Fig. 3-9 - Fig. 3-11).

Fig. 3-8. Contour obtained from the image from Fig. 3-7 with marked range of points p1 and p2 fluctuation on x axis.


Fig. 3-9. Graph of δTIA error values changes vs. changes of p1 and p2 points position on the x axis within the range of xw±10 pixels.


Fig. 3-10. Graph of δAOD error values changes vs. changes of p1 and p2 points position on the x axis within the range of xw±10 pixels.


Fig. 3-11. Graph of δTISA error values changes vs. changes of p1 and p2 points position on the x axis within the range of xw±10 pixels.


For the given coordinates of the right and left hand side of the contour, the errors may be calculated as follows:

prz=30;
pam=[]; pam0=[];
xx=xp(end,:)-xp(:,:); yp(xx>(prz))=[]; yyyp=yp;
xx=xp(end,:)-xp(:,:); xp(xx>(prz))=[]; xxxp=xp;
xx=xl(1,:) - xl(:,:); yl(xx>(prz))=[]; yyyl=yl;
xx=xl(1,:) - xl(:,:); xl(xx>(prz))=[]; xxxl=xl;
po2p=round(length(xxxp)/2);
po2l=round(length(xxxl)/2);
pam=[];
for pol=1:length(xxxl)
    for pop=1:length(xxxp)
        xl=xy(1:podz,1);
        yl=xy(1:podz,2);
        xp=xy(podz:end,1);
        yp=xy(podz:end,2);
        xxl=xl(end):xxxl(pol);
        xxp=xp(1):xxxp(pop);
        Pl = polyfit([xl(end) xxxl(pol)],[yl(end) yyyl(pol)],1);
        Yl = polyval(Pl,xxl);
        Pp = polyfit([xp(1) xxxp(pop)],[yp(1) yyyp(pop)],1);
        Yp = polyval(Pp,xxp);
        plot(xxl,Yl)
        plot(xxp,Yp)
        katl=180+atan2([xl(end)-xxxl(pol)],[yl(end)-yyyl(pol)])*180/pi;
        katp=atan2([xxxp(pop)-xp(1)],[yyyp(pop)-yp(1)])*180/pi;
        pam(pol,pop)=katl-katp;
        if (pop==po2p)&&(pol==po2l)
            pam00=katl-katp;
        end
    end
end
pam=(pam-pam00)./pam00*100;
sl=round(size(pam,1));
sp=round(size(pam,2));
[xx,yy]=meshgrid((1:sp)./sp*30-15, (1:sl)./sl*30-15);
figure; mesh(xx,yy,pam);
xlabel('\Delta x (p_1) [pixel]','fontSize',20);
ylabel('\Delta x (p_2) [pixel]','fontSize',20);
zlabel('\delta _{TIA} [%]','fontSize',20)
axis([-15 15 -15 15 min(pam(:)) max(pam(:))])
colormap([0 0 0])

The results obtained for the three methods AOD, TIA and TISA – the error value and the sensitivity to the change of the position of points p1 and p2 – are shown in the table below.

Tab 3-1. Table of methods sensitivity to points positions change.


The table above and the graphs presented (Fig. 3-9 - Fig. 3-11) show the measurement error for the individual methods AOD, TIA and TISA when the positions of points p1 and p2 change in the x coordinate, assuming semi-automatic "dragging" to the correct y coordinate. At an incorrect indication of the position of points p1 and p2, the measurement error for the AOD and TISA methods affects the result with the sign opposite to that for the TIA method. When moving point p1 or p2 towards the filtration angle, the measured value is understated for the AOD and TISA methods and overstated for the TIA method.

As can be seen from the graphs presented and from the method sensitivities (Tab 3-1), the TISA method is the most sensitive to the operator's errors. The sensitivity value of around 0.55% for TISA results from the nature of the measurement, where very small changes of the position of points p1 and p2 have a significant impact on the calculated area. The AOD method is the least sensitive to the operator's error, because a change of the position of points p1 and p2 on a contour nearly parallel to the line whose length is calculated affects the result only slightly.

The results obtained admittedly show the advantage of the AOD method, in which a change of point positions by the operator affects the total error to the least extent, but only in the case of an ideal determination of the contour. Unfortunately, it turns out that in the case of disturbances, personal variability and other factors causing sudden local contour changes/fluctuations, the situation is slightly different (Fig. 3-12 - Fig. 3-15). The disturbances may be added as in the calculations of the previous section, i.e.:

Fig. 3-12. Contour obtained from the image from Fig. 3-7 after adding noise of uniform distribution on ±20 range with marked range of points p1 and p2 fluctuation on the x axis.


Fig. 3-15. Graph of δTISA error values changes vs. changes of p1 and p2 points position on the x axis within the range of xw±10 pixels.


xyrand=(rand(size(xy))-0.5)*40;   % uniform noise on the +/-20 range
xy=xy+xyrand;

The above measurements were repeated for the same contour (Fig. 3-7) after adding random disturbances of uniform distribution on the ±20 range, obtaining the contour shown in Fig. 3-12 and the error values δAOD, δTIA, δTISA shown in Fig. 3-13, Fig. 3-14 and Fig. 3-15. As seen from the graphs presented (Fig. 3-13, Fig. 3-14, Fig. 3-15), the error has a totally different distribution for the individual methods AOD, TIA and TISA than in the case of Fig. 3-9, Fig. 3-10 and Fig. 3-11. In the presence of disturbances, the lowest error value is achieved by the TISA method and the largest by the TIA method. Based on that, the following summary may be formulated.

Fig. 3-13. Graph of δTIA error values changes vs. changes of p1 and p2 points position on the x axis within the range of xw±10 pixels.


Fig. 3-14. Graph of δAOD error values changes vs. changes of p1 and p2 points position on the x axis within the range of xw±10 pixels.


3.3.3. Conclusions From the Sensitivity Analysis of the Methods

For the AOD, TIA and TISA methods of filtration angle measurement, and for an example contour analysed both as ideal (possibly approximated in the software attached to the tomograph) and with random disturbances, the conclusions presented in the following table may be drawn.

On the basis of Tab 3-2 the following premises may be drawn for the operator – the person indicating the measurement points manually (whether or not supported by the semi-automation implemented in the software delivered with the instrument):

Tab 3-2. Summary of method errors impact for AOD, TIA and TISA measurements.


  • the AOD method gives results burdened with the smallest error when the contour line is approximated; with precise manual indication of the measurement points on a non-approximated contour this method is the least accurate,
  • the TIA method, irrespective of the mode of operation (manual or assisted by the software delivered with the tomograph), shows average error values at the indication of measurement points,
  • the TISA method is burdened with the smallest error if the contour is not approximated and the operator indicates the measurement points very precisely.

Summing up, the AOD method is the best for a contour in which the filtration angle measured is approximated by lines, in other cases it is the TISA method.

3.4. The Results of Automatic Analysis of the Chamber Angle Obtained Using Well-Known Algorithms

The justification for the necessity to use a profiled algorithm in this case is the insufficient results obtained from other known algorithms intended for the detection of lines and/or areas in images:

  • the Hough transform enables detecting only lines of a predetermined shape in images,
  • the wavelet analysis method gives incorrect results where the objects are poorly visible and lines can overlap,
  • the methods for elongated objects analysis are also not applicable here, due to possibly large changes of the dimensions of the object itself and of its thickness, and due to its possible division into many parts.

Based on that, and also taking into consideration the medical premises presented below, a profiled algorithm for the analysis and processing of the anterior eye segment has been proposed.

3.5. Proposed Modifications to the Well-Known Method of Measuring

Fig. 3-16 again presents the anterior chamber for different degrees of pathology, with distances marked at various points for one selected method, i.e. AOD [31].

Fig. 3-16. The anterior chamber for different degrees of pathology with measured quantities for the AOD method and distance y marked with arrows.


As can be seen (Fig. 3-16) and as previously mentioned, the AOD method does not cope sufficiently well with pathological cases, which makes the results obtained unreliable in diagnostic terms. The new method proposed by the authors consists in continuous measurements via modified AOD, TIA and TISA methods. A continuous measurement is understood here as a series of measurements for a distance of 500 μm (Fig. 3-16) decreasing by 1 pixel. At a typical image resolution of 32 pixels/mm this gives on average around 16 measurements. Because of the resolution error, the measurements for a small number of pixels are burdened with a larger error. However, this does not affect the advantage of the proposed method over the commonly used ones. For the measurement method defined this way, its precision and sensitivity to disturbance have been verified. To this end the shape of the analysed contour, together with the x and y coordinates, has been preliminarily modelled, e.g. as follows.

figure
% green plot
x=[.1:0.1:4, -4:0.1:-0.1]; y=x.^2;
x(x<0)=x(x<0)*2; x(x<0)=fliplr(x(x<0));
x(x>0)=fliplr(x(x>0)); x=x-min(x); y=max(y)-y;
plot(x,y, '-gs'); grid on; hold on

% red plot
xs1=[sqrt(y)]; xs2=[sqrt(y)+8];
x=[flipud((8-(xs1*2)));(xs2)];
ys1=flipud(y); y=[ys1;(y)];
plot(x',y', '-r+'); grid on; hold on

% blue plot
x=[-4:0.1:0,.1:0.1:4];
y=x.^2; y(x<=0)=[]; x(x<=0)=[];
x(x>0)=fliplr(x(x>0)); x=x+8; y=max(y)-y;
y_=fliplr(y/max(y)*3*pi); x_=1*cos(y_)-1;
x__=0:(6/(length(x_)-1)):6;
x=[x,x_+8-x__]; y=[y,y_/3/pi*16];
plot(x,y,'-b+'); grid on; hold on
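Before measuring along the modelled curves, the measurement count quoted earlier can be cross-checked: a span of 500 μm = 0.5 mm at a resolution of 32 pixels/mm corresponds to

0.5 mm · 32 px/mm = 16 px

so decreasing the distance by 1 pixel per step yields around 16 measurements, as stated above.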

For each of these curves the filtration angle was calculated according to the individual AOD, TIA and TISA methods, i.e.:

xl=[]; xp=[];
TIA=[];
TISA=[];
AOD=[];
xr=8; yr=0;
for i=round(length(x)/2):-1:1
    line([x([i,length(x)-i+1])], [y([i,length(x)-i+1])], 'Color',[0 1 0])
    Pl = polyfit([xr x(i)],[yr y(i)],1);
    Pp = polyfit([xr x(length(x)-i+1)],[yr y(length(x)-i+1)],1);
    TIA=[TIA; [y(i) -atan(Pp(1)-Pl(1))*180/pi]];
    AOD=[AOD; [y(i) -(x(length(x)-i+1)-x(i))]];
    TISA=[TISA; [y(i) sum(AOD(:,2))]];
end

figure;
plot(AOD(:,1), AOD(:,2)./max(max([AOD(:,2)])), '-r+');
hold on; grid on
xlabel('y [pixel]');
ylabel('D (AOD), D (TISA), D (TIA), [\\]')
plot(TISA(:,1), TISA(:,2)./max(max([TISA(:,2)])), '-g+');
plot(TIA(:,1), TIA(:,2)./max(max([TIA(:,2)])), '-b+');
legend('AOD', 'TISA', 'TIA')

The results obtained for pathologies, for a diminishing distance y, for the images in Fig. 3-16 are shown in the following figures.

Fig. 3-17. Contours of the filtration angle measured for three examples of patients.


Fig. 3-18. Values of distance D measurements for the AOD method vs. y for different shapes of the filtration angle (Fig. 3-17).


Fig. 3-19. Values of area s measurements for the TISA method vs. y for different shapes of the filtration angle (Fig. 3-17).


The results obtained for an actual image case with the presence of noise (random uniform interference in the 0÷1 range) are presented in the following figures (Fig. 3-21 - Fig. 3-24).

Fig. 3-21. Contours of the filtration angle measured for three examples of patients, together with noise.


Fig. 3-24. Values of angle α measurements for the TIA method vs. y for different shapes of the filtration angle (Fig. 3-21).


Fig. 3-22. Values of distance D measurements for the AOD method vs. y for different shapes of the filtration angle (Fig. 3-21).


The disturbance was added randomly as follows:

x=x+rand(size(x))*2;
y=y+rand(size(y))*2;

The following conclusions may be drawn from the graphs presented above:

  • an increasing value of y (the place of the measurement) affects the results obtained from the TIA method to the least extent – Fig. 3-20 and Fig. 3-24,
  • the noise introduced to the contour of the measured image affects the measurement by the TISA method to the least extent,
  • when the place of measurement is moved, i.e. the value of y increases, the results of measurements for the TISA and AOD methods are overestimated, while for the TIA method they strictly depend on the shape of the measured contour (Fig. 3-20),
  • changes of the shape of the analysed image contour only slightly affect the results obtained from the TIA method,
  • the TISA method, stable in terms of the occurring noise (Fig. 3-23), has a drawback in the form of a nonlinear dependence of the measurement results on the place of measurement – the value of y. As results from Fig. 3-23, this nonlinearity causes sudden changes of the measured value for increasing values of y,
  • a drawback of the method used is that it requires full automation of the measurement, due to the large amount of time consumed by the individual calculations.
Fig. 3-20. Values of angle α measurements for the TIA method vs. y for different shapes of the filtration angle (Fig. 3-17).


Fig. 3-23. Values of area s measurements for the TISA method vs. y for different shapes of the filtration angle (Fig. 3-21).


Summing up, the AOD method is the best for a contour in which the filtration angle measured is approximated by lines, in other cases it is the TISA method.

Because obtaining the partial results is laborious, the proposed method requires a full automation of the measurement.

So it is already known which of the methods is most appropriate in terms of sensitivity to personal characteristics (degree of pathology); further on it is interesting to assess the sensitivity to the change of parameters set by the operator (the indication of characteristic points). These measurements are necessary to assess the precision obtained during manual measurement of parameters.

3.6. Algorithm for Automated Analysis of the Filtration Angle

The algorithm operates in two main directions:

  • automated calculation of the filtration angle,
  • automated determination of sclera layers for 3D reconstruction.

On the basis of the above medical premises [21], [30] and preliminary tests performed the following block diagram of the algorithm has been suggested (Fig. 3-25).

Fig. 3-25. Block diagram of the algorithm.


As mentioned in the introduction, the input image, of 256×1024 resolution and on average 0.0313 mm/pixel, is loaded in the DICOM format into the Matlab workspace. The source code may be divided into two parts: the readout of the file as a set of bytes, and the conversion to one of the image recording formats, including acquiring the necessary information from the header.

The readout of 3.dcm file was carried out in accordance with the information provided in the initial section, i.e.:

path_name='d:/OCT/SOURCES/3.DCM';
fid = fopen(path_name, 'r');
dataa = fread(fid, 'uint8');
fclose(fid);
[header_dicom,Ls]=OCT_head_read(dataa);

Further on, the algorithm comprises filtration of the Ls image using a median filter with a 7×7 mask, a resolution change to accelerate the calculations, and the analysis of individual columns [32].

Ls=medfilt2(Ls,[7 7]);
Ls=imresize(Ls,[256 512]);
figure; imshow(Ls,[]);
L2=Ls;

This analysis results in the calculation, for each column, of the binarisation threshold (the images are calibrated) set at 10% of the maximum brightness of the existing pixels (Fig. 3-26), i.e.:

Fig. 3-26. A binary image originated from the original image after the binarisation with a threshold of 90% of the maximum value.


przed=1;
L22=imclose(Ls>max(max(Ls))*0.10,ones([3 3]));

The binary values for each column are consecutively analysed considering the criterion of the largest object length. An example record retaining only objects larger than 100 pixels (i.e. deleting all smaller ones) looks as follows:

L2lab=bwlabel(L22);
L33=zeros(size(L22));
for ir=1:max(L2lab(:))
    L2_=L2lab==ir;
    if sum(L2_(:))>100
        L33=L33|L2_;
    end
end
figure; imshow(L33,[]);

Then – to eliminate small inclusions and separations of layers – a method of hole filling is implemented:

L22=bwfill(L33, 'holes');
figure; imshow(L22,[]);
L55=bwlabel(xor(L22,L33));
for ir=1:max(L55(:))
    L5_=L55==ir;
    if sum(L5_(:))<100
        L33=L33|L5_;
    end
end
L22=L33;
figure; imshow(L22,[]);

Obviously, in this case the function bwfill(…,'holes') by itself would fill all the holes, and not only those whose number of pixels (area) is smaller than 100 – hence the additional selection above.
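The same intent can be expressed more compactly; the sketch below is an alternative (not the book's code) assuming the Image Processing Toolbox, where imfill is the newer counterpart of bwfill and bwareaopen removes objects below a given area:

```matlab
% Sketch: fill only those holes of L33 whose area is below 100 pixels
holes = xor(imfill(L33,'holes'), L33);    % all holes of L33
small = holes & ~bwareaopen(holes,100);   % keep only holes with fewer than 100 pixels
L22   = L33 | small;                      % fill only the small holes
```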

The image prepared in this way is used to determine the sclera boundaries and to approximate the determined boundaries with a third degree polynomial (Fig. 3-30):

Fig. 3-30. The image of separated analysed area Lwys.


linie_12=[];
for i=1:size(L22,2)
    Lf=L22(:,i);
    Lff=bwlabel(Lf);
    if sum(Lff)>0
        clear Lnr
        for yt=1:max(Lff(:))
            Lffd=Lff==yt;
            if sum(Lffd(:))>10
                Lnr=[(1:length(Lffd))',Lffd];
                Lnr(Lnr(:,2)==0,:)=[];
                break
            end
        end
        if (exist('Lnr')>0)&&(~isempty(Lnr))
            linie_12=[linie_12; [i Lnr(1,1) Lnr(end,1)]];
        end
    end
end
hold on;
plot(linie_12(:,1),linie_12(:,2),'r*'); grid on
plot(linie_12(:,1),linie_12(:,3),'g*'); grid on

The next stage is the filtration using a median filter, i.e.:

linie_12(:,2)=medfilt2(linie_12(:,2),[5 1]);
linie_12(:,3)=medfilt2(linie_12(:,3),[5 1]);

The obtained (x,y) coordinate values in the variable linie_12 are analysed with regard to differences along the y axis exceeding a set threshold, e.g. 5 pixels (selected taking into account medical premises), i.e.:

x=linie_12(:,1);
y=linie_12(:,2);
ybw=bwlabel(abs([diff(y') 0])<5);

For each pair of coordinate sets obtained for all combinations of labels, the approximation by a third degree polynomial is performed.

rzad=3;
toler=10;
P=polyfit(x,y,rzad);
Y=polyval(P,x);
yyy=Y-y;
pamm=[0 0 sum(abs(yyy)<toler)/length(yyy)];
for ir=1:(max(ybw)-1)
    for irr=(ir+1):max(ybw)
        y_=[y(ybw==ir); y(ybw==irr)];
        x_=[x(ybw==ir); x(ybw==irr)];
        P=polyfit(x_,y_,rzad);
        Y=polyval(P,x);
        hold on; plot(x,Y,'-g*');
        yyy=Y-y;
        pamm=[pamm; [ir irr sum(abs(yyy)<toler)/length(yyy)]];
    end
end

Then the combination of pairs of coordinate sets is chosen for which the largest fraction of points falls within the set tolerance:

pamm_=sortrows(pamm,3); ir=pamm_(end,1); irr=pamm_(end,2);
if ir==0
    y_=y;
    x_=x;
    P=polyfit(x_,y_,rzad);
    Y=polyval(P,x);
    yyy=Y-y;
    y_=y(abs(yyy)<toler);
    x_=x(abs(yyy)<toler);
    P=polyfit(x_,y_,rzad);
    Y=polyval(P,x);
else
    y_=[y(ybw==ir); y(ybw==irr)];
    x_=[x(ybw==ir); x(ybw==irr)];
    P=polyfit(x_,y_,rzad);
    Y=polyval(P,x);
    yyy=Y-y;
    y_=y(abs(yyy)<toler);
    x_=x(abs(yyy)<toler);
    P=polyfit(x_,y_,rzad);
    Y=polyval(P,x);
end
plot(x,Y,'b*'); grid on

After the analysis of the iris and of the ciliary processes the analysis of iris endings is carried out, using the information originating within the sclera boundaries (Fig. 3-31).

Fig. 3-31. The diagram of sum of pixel brightness values for individual columns: (XXl,YYl) – red, (XXp,YYp) – green.

The contour of internal boundary is analysed in a similar way:

y_1=Y;
x=linie_12(:,1);
y=linie_12(:,3);
ybw=bwlabel(abs([diff(y') 0])<5);
rzad=3; toler=15;
P=polyfit(x,y,rzad); pamm=[];
Y=polyval(P,x);
yyy=Y-y;
if sum((Y(:)-y_1(:))<0)==0
    pamm=[0 0 sum(abs(yyy)<toler)/length(yyy)];
end
for ir=1:(max(ybw)-1)
    for irr=(ir+1):max(ybw)
        y_=[y(ybw==ir); y(ybw==irr)];
        x_=[x(ybw==ir); x(ybw==irr)];
        P=polyfit(x_,y_,rzad);
        Y=polyval(P,x);
        yyy=Y-y;
        if sum((Y(:)-y_1(:))<0)==0
            pamm=[pamm; [ir irr sum(abs(yyy)<toler)/length(yyy)]];
        end
    end
end
if size(pamm,1)>1
    pamm_=sortrows(pamm,3);
    ir=pamm_(end,1); irr=pamm_(end,2);
    if ir==0
        y_=y;
        x_=x;
    else
        y_=[y(ybw==ir); y(ybw==irr)];
        x_=[x(ybw==ir); x(ybw==irr)];
    end
    P=polyfit(x_,y_,rzad);
    Y=polyval(P,x);
    yyy=Y-y;
    y_=y(abs(yyy)<toler);
    x_=x(abs(yyy)<toler);
    P=polyfit(x_,y_,rzad);
    Y=polyval(P,x);
else
    x=[]; Y=[]; P=[];
end
plot(x,Y,'m*'); grid on
y_2=Y;

The results of the contour analysis are shown in Fig. 3-28: the input image with the approximated boundaries, where red corresponds to the approximation result marked blue and green to the approximation result marked white, respectively.

Fig. 3-28. The input image with detected boundaries of anterior eye segment marked red and green.

The next phases of algorithm operation consist in analysing the area situated under the contour marked red in Fig. 3-28. To this end it is necessary to draw a straight line normal to the tangent at each point of the contour. The algorithm performing such calculations is shown below, while the results are shown in Fig. 3-29 and Fig. 3-30.

Fig. 3-29. OCT image of the anterior eye part with marked analysis area (red and turquoise).

figure; imshow(L33,[]); hold on
sf=zeros([1 length(Y)]); sf_=zeros([1 length(Y)]);
pole_=zeros([1 length(Y)]);
pole_x=zeros([1 length(Y)]); pole_y=zeros([1 length(Y)]);
p_zn=0; zakres_=40;
Lwys=zeros([zakres_ length(Y)-1]);
Lwys_bin=zeros([zakres_ length(Y)-1]);
L_gridXX=[]; L_gridYY=[];
for nb=1:(length(Y)-1)
    PP=polyfit(x(nb:nb+1),Y(nb:nb+1),1);
    PP2(2)=x(nb)/PP(1)+Y(nb);
    PP2(1)=-1/PP(1);
    if Y(nb)>Y(nb+1)
        XX=x(nb):1:(x(nb)+zakres_);
    else
        XX=x(nb):-1:(x(nb)-zakres_);
    end
    YY=polyval(PP2,XX);
    if (max(YY)-min(YY))>(zakres_+1)
        YY=Y(nb):1:(Y(nb)+zakres_);
        PP3(1)=1/PP2(1);
        PP3(2)=-PP2(2)/PP2(1);
        XX=polyval(PP3,YY);
        plot(XX,YY,'r*'); grid on; hold on
    else
        plot(XX,YY,'c*'); grid on; hold on
    end
    XX(round(YY)>size(L2,1))=[]; YY(round(YY)>size(L2,1))=[];
    YY(round(XX)>size(L2,2))=[]; XX(round(XX)>size(L2,2))=[];
    for vc=1:length(XX)
        if (round(YY(vc))>0)&(round(XX(vc))>0)
            Lwys(vc,nb)=L2(round(YY(vc)),round(XX(vc)));
            Lwys_bin(vc,nb)=L22(round(YY(vc)),round(XX(vc)));
        end
    end
    L_gridXX(1:length(XX),nb)=XX;
    L_gridYY(1:length(YY),nb)=YY;
end
figure; imshow(Lwys,[]);

The image originated from marked area pixels is analysed in the next stage of algorithm operation. The area is divided into two equal parts and the filtration angle is analysed independently in each of them. This is the last common part of algorithm for both angles calculation.

Lwys_bin=imopen(Lwys_bin,ones(3));
Lss=sum(Lwys);
XX=1:length(Lss);
YY=Lss;
Lm=max(Lss(:));
XXl=XX(1:round(length(XX)/2));
XXp=XX(round(length(XX)/2):end);
YYl=YY(1:round(length(YY)/2));
YYp=YY(round(length(YY)/2):end);
YYlb=YYl>(Lm/4);
YYpb=YYp>(Lm/4);
nr_XXl=1:length(XXl);
nr_XXp=1:length(XXp);
XXl_max=nr_XXl(YYl==max(YYl));
XXp_max=nr_XXp(YYp==max(YYp));
figure
plot(XXl,YYl,'-r*'); hold on
plot(XXp,YYp,'-g*'); grid on
xlabel('XXl - red, XXp - green')
ylabel('YYl, YYp')

The obtained diagram of sum values calculated for individual columns is presented in Fig. 3-31.

Automated finding of the filtration angle vertex and determination of the correspondence between contour points (pixels forming the angle edges) is one of the more difficult parts of the algorithm. This analysis starts from automatically finding the place on the contour at which a normal to the tangent, zakres_=40 pixels long, comprises a ciliary process for the first time, i.e.:

YYlb_=bwlabel(YYlb);
pam_l=[];
for ty=1:max(YYlb_)
    YYt=YYl(YYlb_==ty);
    XXt=XXl(YYlb_==ty);
    pam_l=[pam_l; [ty sum(YYt) XXt(end)]];
end
if size(pam_l,1)>0
    pam_l=pam_l(YYlb_(XXl_max),:);
    plot(pam_l(1,3),Y(pam_l(1,3)),'rs','MarkerSize',10)
end

Further on, having the contour point mentioned, the fragment comprising the interesting measured filtration angle is analysed. For the filtration angle situated on the left-hand side of the image the algorithm has the form:

xy_g_l=[];
xy_d_l=[];
for vv=pam_l(1,3):-1:1
    pp=Lwys_bin(:,vv);
    ppl=bwlabel(pp);
    pam_lab=[];
    for jk=1:max(ppl)
        ppl_=ppl==jk;
        y_ppl=1:length(ppl_);
        y_ppl(ppl_==0)=[];
        pam_lab=[pam_lab;[vv jk y_ppl(1) sum(ppl_) y_ppl(end)]];
    end
    if (size(pam_lab,1)>1)&(pam_lab(1,3)~=1)
        pam_lab(1,:)=[]; pam_lab=sortrows(pam_lab,4);
        if linie_12(round(L_gridXX(1,vv)),3)<L_gridYY(pam_lab(end,3),vv)
            xy_g_l=[xy_g_l;[L_gridXX(1,vv) linie_12(round(L_gridXX(1,vv)),3)]];
            xy_d_l=[xy_d_l;[L_gridXX(pam_lab(end,3),vv) L_gridYY(pam_lab(end,3),vv)]];
        end
    end
    if (size(pam_lab,1)==1)&(pam_lab(1,3)~=1)
        if linie_12(round(L_gridXX(1,vv)),3)<L_gridYY(pam_lab(1,3),vv)
            xy_d_l=[xy_d_l;[L_gridXX(pam_lab(1,3),vv) L_gridYY(pam_lab(1,3),vv)]];
            xy_g_l=[xy_g_l;[L_gridXX(1,vv) linie_12(round(L_gridXX(1,vv)),3)]];
        end
    end
    if (size(pam_lab,1)==2)&(pam_lab(1,3)==1)
        if L_gridYY(pam_lab(1,5),vv)<L_gridYY(pam_lab(end,3),vv)
            xy_d_l=[xy_d_l;[L_gridXX(pam_lab(end,3),vv) L_gridYY(pam_lab(end,3),vv)]];
            xy_g_l=[xy_g_l;[L_gridXX(pam_lab(1,5),vv) L_gridYY(pam_lab(1,5),vv)]];
        end
        pam_lab(1,:)=[];
    end
    if (size(pam_lab,1)>2)&(pam_lab(1,3)==1)
        pam_lab(1,:)=[]; pam_lab=sortrows(pam_lab,4);
        if L_gridYY(pam_lab(1,5),vv)<L_gridYY(pam_lab(end,3),vv)
            xy_g_l=[xy_g_l;[L_gridXX(pam_lab(1,5),vv) L_gridYY(pam_lab(1,5),vv)]];
            xy_d_l=[xy_d_l;[L_gridXX(pam_lab(end,3),vv) L_gridYY(pam_lab(end,3),vv)]];
        end
    end
    if (size(pam_lab,1)==1)&(pam_lab(1,3)==1)
        pam_lab=pam_lab(1,1);
        break
    end
end
hold on; plot(pam_lab,Y(pam_lab),'k*'); hold on; grid on
if size(xy_g_l)>1
    plot(xy_g_l(:,1),xy_g_l(:,2),'-y*')
    plot(xy_d_l(:,1),xy_d_l(:,2),'-m*')
    for ib=1:size(xy_g_l,1)
        line([xy_g_l(ib,1) xy_d_l(ib,1)],[xy_g_l(ib,2) xy_d_l(ib,2)],'Color','y','LineWidth',1);
    end
end

The corresponding algorithm analysing the filtration angle situated on the right-hand side of the image is provided below.

YYpb_=bwlabel(YYpb);
pam_p=[];
for ty=1:max(YYpb_)
    YYt=YYp(YYpb_==ty);
    XXt=XXp(YYpb_==ty);
    pam_p=[pam_p; [ty sum(YYt) XXt(1)]];
end
if size(pam_p,1)>0
    pam_p=pam_p(YYpb_(XXp_max),:);
    plot(pam_p(end,3),Y(pam_p(end,3)),'rs','MarkerSize',10)
end
xy_g_p=[];
xy_d_p=[];
for vv=(pam_p(1,3)+1):size(Lwys_bin,2)
    pp=Lwys_bin(:,vv);
    ppl=bwlabel(pp);
    pam_lab=[];
    for jk=1:max(ppl)
        ppl_=ppl==jk;
        y_ppl=1:length(ppl_);
        y_ppl(ppl_==0)=[];
        pam_lab=[pam_lab;[vv jk y_ppl(1) sum(ppl_) y_ppl(end)]];
    end
    if (size(pam_lab,1)>1)&(pam_lab(1,3)~=1)
        pam_lab(1,:)=[]; pam_lab=sortrows(pam_lab,4);
        if linie_12(round(L_gridXX(1,vv)),3)<L_gridYY(pam_lab(end,3),vv)
            xy_g_p=[xy_g_p;[L_gridXX(1,vv) linie_12(round(L_gridXX(1,vv)),3)]];
            xy_d_p=[xy_d_p;[L_gridXX(pam_lab(end,3),vv) L_gridYY(pam_lab(end,3),vv)]];
        end
    end
    if (size(pam_lab,1)==1)&(pam_lab(1,3)~=1)
        if linie_12(round(L_gridXX(1,vv)),3)<L_gridYY(pam_lab(1,3),vv)
            xy_d_p=[xy_d_p;[L_gridXX(pam_lab(1,3),vv) L_gridYY(pam_lab(1,3),vv)]];
            xy_g_p=[xy_g_p;[L_gridXX(1,vv) linie_12(round(L_gridXX(1,vv)),3)]];
        end
    end
    if (size(pam_lab,1)==2)&(pam_lab(1,3)==1)
        if L_gridYY(pam_lab(1,5),vv)<L_gridYY(pam_lab(end,3),vv)
            xy_d_p=[xy_d_p;[L_gridXX(pam_lab(end,3),vv) L_gridYY(pam_lab(end,3),vv)]];
            xy_g_p=[xy_g_p;[L_gridXX(pam_lab(1,5),vv) L_gridYY(pam_lab(1,5),vv)]];
        end
        pam_lab(1,:)=[];
    end
    if (size(pam_lab,1)>2)&(pam_lab(1,3)==1)
        pam_lab(1,:)=[]; pam_lab=sortrows(pam_lab,4);
        if L_gridYY(pam_lab(1,5),vv)<L_gridYY(pam_lab(end,3),vv)
            xy_g_p=[xy_g_p;[L_gridXX(pam_lab(1,5),vv) L_gridYY(pam_lab(1,5),vv)]];
            xy_d_p=[xy_d_p;[L_gridXX(pam_lab(end,3),vv) L_gridYY(pam_lab(end,3),vv)]];
        end
    end
    if (size(pam_lab,1)==1)&(pam_lab(1,3)==1)
        pam_lab=pam_lab(1,1);
        break
    end
end
hold on; plot(pam_lab,Y(pam_lab),'k*'); hold on; grid on
if size(xy_g_p)>1
    plot(xy_g_p(:,1),xy_g_p(:,2),'-y*')
    plot(xy_d_p(:,1),xy_d_p(:,2),'-m*')
    for ib=1:size(xy_g_p,1)
        line([xy_g_p(ib,1) xy_d_p(ib,1)],[xy_g_p(ib,2) xy_d_p(ib,2)],'Color','y','LineWidth',1);
    end
end

Based on the presented algorithm fragment it is possible to automatically analyse the filtration angle determined by the yellow and turquoise lines (Fig. 3-32).

Fig. 3-32. The result of algorithm fragment automatically determining the walls contours (yellow and red) necessary to calculate the filtration angle – left-hand side.

The filtration angle calculated traditionally as the angle between tangents formed from the edge lines (yellow and red colour - Fig. 3-33) may be implemented as follows:

Fig. 3-33. The result of algorithm fragment automatically determining the walls contours (yellow and red) necessary to calculate the filtration angle – right-hand side.

PPgl=polyfit(xy_g_l(:,1),xy_g_l(:,2),1);
PPdl=polyfit(xy_d_l(:,1),xy_d_l(:,2),1);
PPgp=polyfit(xy_g_p(:,1),xy_g_p(:,2),1);
PPdp=polyfit(xy_d_p(:,1),xy_d_p(:,2),1);

x=1:size(L33,2);
y=polyval(PPgl,x); plot(x,y,'r*')
y=polyval(PPdl,x); plot(x,y,'g*')
y=polyval(PPgp,x); plot(x,y,'b*')
y=polyval(PPdp,x); plot(x,y,'y*')
al=atan(PPdl(1))*180/pi-atan(PPgl(1))*180/pi;
ap=atan(PPgp(1))*180/pi-atan(PPdp(1))*180/pi;
al=round(al*10)/10;
ap=round(ap*10)/10;
title(['Left - ',mat2str(al),'^o', '  Right - ',mat2str(ap),'^o'])

As a result, we obtain the filtration angle value calculated traditionally – these are values in al and ap variables.
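The conversion of the fitted line slopes into the angle value can be illustrated outside Matlab; the following Python sketch (the function name is illustrative) reproduces the same arithmetic as the al and ap computation above, including the rounding to 0.1°:

```python
import math

def angle_between(slope_upper, slope_lower):
    """Angle (in degrees) between two straight lines given by their slopes,
    computed as the difference of their inclination angles -- the same
    relation as used for al and ap in the Matlab code above."""
    a = math.degrees(math.atan(slope_lower)) - math.degrees(math.atan(slope_upper))
    return round(a * 10) / 10  # round to 0.1 degree, as in the Matlab code

# a line of slope 1 (45 deg) against a horizontal line (0 deg)
print(angle_between(0.0, 1.0))  # 45.0
```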

Fig. 3-35. Fragments selected from Fig. 3-34.

Based on the algorithm presented above for the inter-sclera analysis, it is possible to estimate the position of filtration angles and to calculate AOD, TISA and TIA values in approx. 3 s/image on a computer with a 64-bit operating system, Intel Core Quad CPU 2.5 GHz processor and 2 GB RAM (Fig. 3-36).

Fig. 3-36. Algorithm operation time for consecutive images calculated on a computer with a 64-bit operating system, Intel Core Quad CPU 2.5 GHz processor, 2GB RAM.

The AOD, TISA and TIA methods have the drawback of coping poorly with a large degree of pathology. Such situations occur in the case of a partial narrowing of the filtration angle. Therefore the analysis of distances between appropriate points, in accordance with Fig. 3-38, has been suggested, using the previous calculations.

Fig. 3-38. Result of operation of algorithm for automated filtration angle measurement.

dist_l=[];
for ib=1:size(xy_g_l,1)
    r_x=xy_g_l(ib,1) - xy_d_l(ib,1);
    r_y=xy_g_l(ib,2) - xy_d_l(ib,2);
    dist_l(ib,1:2)=[ib sqrt( (r_x).^2 + (r_y).^2 )];
end
dist_p=[];
for ib=1:size(xy_g_p,1)
    r_x=xy_g_p(ib,1) - xy_d_p(ib,1);
    r_y=xy_g_p(ib,2) - xy_d_p(ib,2);
    dist_p(ib,1:2)=[ib sqrt( (r_x).^2 + (r_y).^2 )];
end
figure; plot(dist_l(:,1), dist_l(:,2),'-r*'); hold on
plot(dist_p(:,1), dist_p(:,2),'-g*'); grid on

The graph obtained using the above fragment of the algorithm is shown below.

Fig. 3-37. Result of distance measurement at the filtration angle measurement.

The following images show examples of results obtained for other patients.

The results obtained for the authors' method show that it copes much better with large degrees of pathology, which is confirmed by the graph of distance changes shown in Fig. 3-39.

Fig. 3-39. Result of operation of algorithm for automated filtration angle measurement.

3.6.1. Advantages of the Algorithm Proposed

An automated analysis of the anterior eye segment allows obtaining reliable results in no more than 3.5 s. The assessment of the algorithm's sensitivity to parameter changes leads to the following observations:

  • the search area of the iridocorneal angle shows the largest dependence on the width of the iris searching area for pathological cases,
  • for 70,736 images, correct results were obtained in around 55,000 cases; this approximate number of properly measured cases results from the difficulty of assessing how the algorithm should properly respond,
  • the greatest measurement error, excluding the impact of method errors and sensitivity, occurred for the AOD and TIA methods,
  • on the basis of the experience gained in the measurement of the filtration angle, the authors' own measurement method has been suggested.

Summarising this section, it is necessary to emphasise that the presented Matlab source code does not exhaust the issue. It lacks both safeguards, e.g. related to the detection of the proper position of the filtration angles, and a preliminary analysis of the analysed object's position on the scene (results for path_name='d:/OCT/SOURCES/1.DCM' – Fig. 3-40 and Fig. 3-41).

Fig. 3-40. Result of erroneous operation of algorithm automatically determining the filtration angle (yellow).

Fig. 3-41. Enlarged fragment from Fig. 3-40.

Here we encourage Readers to create such simple safeguards in the algorithm on their own. Readers can also reduce the graph in Fig. 3-39 to useful values, i.e. parameters which allow a rough approximation of that graph. These situations apply to the cases presented in Fig. 3-42.

Fig. 3-42. Demonstrative figure showing filtration angles and problems with describing the results obtained using the algorithm presented.

What matters here is the description of pathological cases of the filtration angle, i.e. those for which there are local narrowings or a local closure of the angle (Fig. 3-42). The notation, which a Reader can propose, should consist of a few symbols (digits) automatically determined from the graph in Fig. 3-39. For example, the alphabet created may look as follows:

symbols:

  • / – increasing distance for consecutive id-s,
  • ^ – local minimum,
  • v – local maximum,
  • _ – invariable value of distance for a changing id;

numerical parameters:

  • angular value,
  • maximum, minimum or constant distance for defined id-s,
  • id range, in which a specific situation does not occur.

For example, the notation _80,100 /30 consists of two symbols, “_” and “/”, where according to the interpretation adopted the former stands for a narrowing (a slit) in the filtration angle of value dist = 80 in the range id = 100 um, and the latter for a typical angle of 30° (corresponding to the state in the second image in Fig. 3-42).
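A minimal parser of such a notation can be sketched as follows (a Python illustration under the assumption that each token consists of one symbol followed by comma-separated numerical parameters; the function name is hypothetical, not part of the algorithm above):

```python
def parse_notation(s):
    """Split a notation such as '_80,100 /30' into (symbol, numbers) pairs.
    Symbols: '/', '^', 'v', '_' ; the numbers are the parameters that follow."""
    tokens = []
    for part in s.split():
        symbol, rest = part[0], part[1:]
        numbers = [int(n) for n in rest.split(',')] if rest else []
        tokens.append((symbol, numbers))
    return tokens

print(parse_notation('_80,100 /30'))
# [('_', [80, 100]), ('/', [30])]
```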

3.7. Determination of Anterior Chamber Volume Based on a Series of Images

The analysis of the anterior chamber (Fig. 3-43) is based on the contours of boundaries determined in the previous section and presented in Fig. 3-27. They were determined on images acquired at preset angles according to Fig. 3-44.

Fig. 3-43. Anterior chamber position in eye cross-section.

Fig. 3-27. A binary image after the operation of holes filling with approximation lines marked green, and the best fit marked blue.

Fig. 3-44. Arrangement of individual eye scans.

Fig. 3-45. Contours of external boundary of sclera on scans A and B (Fig. 3-43) in a Cartesian coordinate system.

To simplify the notation, let us further assume that the function determining necessary contours and the filtration angle will be defined as:

[Ls,L22,xy_g_l,xy_d_l,xy_g_p,xy_d_p,linie_12]=OCT_angle_line(Ls);

where the OCT image is an input parameter and at the output we obtain, in accordance with the previous section:

  • xy_d_l – coordinates x and y of the filtration angle, left hand lower contour,
  • xy_g_l – coordinates x and y of the filtration angle, left hand upper contour,
  • xy_d_p – coordinates x and y of the filtration angle, right hand lower contour,
  • xy_g_p – coordinates x and y of the filtration angle, right hand upper contour,
  • linie_12 – contour lines.

To obtain the result presented in Fig. 3-34 using the function defined this way, it is necessary to write:

Fig. 3-34. The result of algorithm operation, where the inclination angle of straight lines relative to each other for the right and left filtration angle is the measured value.

path_name='d:/OCT/SOURCES/3.DCM';
fid = fopen(path_name, 'r');
dataa = fread(fid,'uint8');
fclose(fid);
[header_dicom,Ls]=OCT_head_read(dataa);
[Ls,L22,xy_g_l,xy_d_l,xy_g_p,xy_d_p,linie_12]=OCT_angle_line(Ls);
figure; imshow(Ls,[]); hold on
PPgl=polyfit(xy_g_l(:,1),xy_g_l(:,2),1);
PPdl=polyfit(xy_d_l(:,1),xy_d_l(:,2),1);
PPgp=polyfit(xy_g_p(:,1),xy_g_p(:,2),1);
PPdp=polyfit(xy_d_p(:,1),xy_d_p(:,2),1);
x=1:size(Ls,2);
y=polyval(PPgl,x); plot(x,y,'r*')
y=polyval(PPdl,x); plot(x,y,'g*')
y=polyval(PPgp,x); plot(x,y,'b*')
y=polyval(PPdp,x); plot(x,y,'y*')
al=atan(PPdl(1))*180/pi-atan(PPgl(1))*180/pi;
ap=atan(PPgp(1))*180/pi-atan(PPdp(1))*180/pi;
al=round(al*10)/10;
ap=round(ap*10)/10;
title(['Left - ',mat2str(al),'^o', '    Right - ',mat2str(ap),'^o'])

The basic difficulty in an attempt to calculate the volume of anterior eye chamber is a correct determination of sclera boundaries and selection of appropriate method for approximation of intermediate spaces (Fig. 3-44), existing between scans A-B, B-C, C-D, D-A.

A method consisting preliminarily in creating the contour of the internal and external sclera boundary, adjusted by the filtration angle boundaries, is presented below, i.e.:

linie_m_x=[flipud(xy_d_l(:,1))', xy_d_l(1,1):xy_d_p(1,1), xy_d_p(:,1)'];
linie_m_y=[flipud(xy_d_l(:,2))', linspace(xy_d_l(1,2),xy_d_p(1,2),length(xy_d_l(1,1):xy_d_p(1,1))), xy_d_p(:,2)'];
plot(linie_m_x,linie_m_y,'-c*')

The obtained boundary is shown in Fig. 3-46 and Fig. 3-47.

Fig. 3-46. Determined boundary after correction with the values of filtration angle boundary.

Fig. 3-47. Enlarged fragment from Fig. 3-46.

As visible in the presented images (Fig. 3-46 and Fig. 3-47), the contour marked with a turquoise line is not determined perfectly. The correction consists in using the method of a modified active contour (Fig. 3-48).

Fig. 3-48. Demonstrative figure showing the idea of straight line (red) dragging to the lens contour.

The method of modified active contour consists in maximisation of external – internal energy FZW. This energy is calculated as the difference between average values of pixels brightness inside and outside the declared area. In a general case the calculations start from the determination of characteristic points wi (w1, w2,…, wk-1, wk, wk+1,…,wK).

For each determined point wk a straight line is drawn, perpendicular to adjacent points and passing the point considered. For example, for point wk a straight line is drawn passing through it and perpendicular to the straight line connecting points wk-1, wk+1. In the next stage the outside and inside areas are defined and weights for individual pixels are determined. In the simplest case this is the average value of brightness at weights of individual pixels calculated as 1. Any shape of inside and outside area may be chosen, however, a rectangular area is most often used - Fig. 3-49.
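The external–internal energy for a rectangular analysis area can be sketched as follows (a Python illustration; boundary clipping, pixel weights and the search over displacements are omitted here, and the sign convention, inside minus outside, corresponds to one of the two possible polarisations):

```python
def delta_S(img, x, y, pyu, pyd, pxl, pxp):
    """Difference of mean brightness between the rectangular area below
    (Ld, pyd rows) and above (Lu, pyu rows) the contour point (x, y).
    img is a list of rows; clipping at the image border is omitted."""
    cols = range(x - pxl, x + pxp + 1)
    Lu = [img[r][c] for r in range(y - pyu, y) for c in cols]  # outside area
    Ld = [img[r][c] for r in range(y, y + pyd) for c in cols]  # inside area
    return sum(Ld) / len(Ld) - sum(Lu) / len(Lu)

# a dark upper half (0) over a bright lower half (1): maximal energy at the edge
img = [[0]*9]*4 + [[1]*9]*4
print(delta_S(img, x=4, y=4, pyu=2, pyd=2, pxl=2, pxp=2))  # 1.0
```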

Fig. 3-49. Demonstrative diagram of pixels arrangement in the analysis of operation of the modified active contour method and examples of analysis area.

If we assume, in accordance with the nomenclature from Fig. 3-49, Lu as an outside area, Ld as an inside area, and their dimensions (in the sense of the number of rows and columns, in the rectangular case) as pyd × (pxl+pxp+1) and pyu × (pxl+pxp+1), then the matrix of differences may be written as follows:

\Delta s=\begin{bmatrix}
\Delta S_{+4,1} & \Delta S_{+4,2} & \Delta S_{+4,3} & \Delta S_{+4,4}\\
\Delta S_{+3,1} & \Delta S_{+3,2} & \Delta S_{+3,3} & \Delta S_{+3,4}\\
\Delta S_{+2,1} & \Delta S_{+2,2} & \Delta S_{+2,3} & \Delta S_{+2,4}\\
\Delta S_{+1,1} & \Delta S_{+1,2} & \Delta S_{+1,3} & \Delta S_{+1,4}\\
\Delta S_{0,1} & \Delta S_{0,2} & \Delta S_{0,3} & \Delta S_{0,4}\\
\Delta S_{-1,1} & \Delta S_{-1,2} & \Delta S_{-1,3} & \Delta S_{-1,4}\\
\Delta S_{-2,1} & \Delta S_{-2,2} & \Delta S_{-2,3} & \Delta S_{-2,4}\\
\Delta S_{-3,1} & \Delta S_{-3,2} & \Delta S_{-3,3} & \Delta S_{-3,4}\\
\Delta S_{-4,1} & \Delta S_{-4,2} & \Delta S_{-4,3} & \Delta S_{-4,4}
\end{bmatrix}

where ΔS is the difference in average values of Lu and Ld areas, i.e.:

\Delta S=\frac{\sum_{x,y\in L_d}L(x,y)}{p_{yd}\,(p_{xl}+p_{xp})}-\frac{\sum_{x,y\in L_u}L(x,y)}{p_{yu}\,(p_{xl}+p_{xp})} \quad (6)
  • pyu – number of rows of Lu,
  • pyd – number of rows of Ld,
  • pu – range of movement of the Lu and Ld areas up,
  • pd – range of movement of the Lu and Ld areas down,
  • pxl – number of columns on the left of the analysed pixel,
  • pxp – number of columns on the right of the analysed pixel,
  • ply – distance on the oy axis from yRPEC,
  • pxud – permissible difference in position of neighbouring pixels on the oy axis.

In the next stage the elements of matrix ΔS are sorted separately for each column. The sorted matrix sort(ΔS) is the basis to determine the target new position of pixels. The analysis of searching for the best solution is close to a problem of path seeking with the criterion of maximising the difference in the average values and minimising the difference in adjacent pixels positions. For the latter, a coefficient pxud has been suggested, defined as a permissible difference in the position on the oy axis of pixels neighbouring in consecutive positions on the ox axis. Fig. 3-50 shows the selection of the optimum path for pxud=0 – red colour, pxud=1 – black colour and pxud=2 – blue colour. Let us assume that we consider the case for pxud=2.

Fig. 3-50. Demonstrative figure of the method for selecting the optimum path.

Starting from the point of coordinates (1,1) we obtain positions of the next pixels (-1,2), because |-1-1| <= pxud, then (-1,3), (-2,4), (-2,5) and (-4,6). While selecting the next points we should consider two elements: the permissible change of location on the oy axis defined by parameter pxud and the position of the largest values (the higher an element lies in the column, the better).
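The path selection traced above can be sketched, under simplifying assumptions, as a greedy column-by-column choice (a Python illustration; candidate ordering and tie handling differ in details from the Matlab function OCT_activ_cont below):

```python
def select_path(columns, pxud):
    """Greedy column-by-column path: start at the best (top) candidate of
    the first column; in every next column take the highest-ranked candidate
    whose position differs from the previous one by at most pxud."""
    path = [columns[0][0]]
    for column in columns[1:]:
        for cand in column:  # candidates are ordered best first
            if abs(cand - path[-1]) <= pxud:
                path.append(cand)
                break
        else:
            path.append(path[-1])  # no candidate close enough: keep position
    return path

# candidate positions per column, best first (hypothetical values
# chosen to reproduce the example path from the text, pxud=2):
cols = [[1], [-1, 3], [-1, 0], [-2, -1], [-2, 0], [-4, -2]]
print(select_path(cols, pxud=2))  # [1, -1, -1, -2, -2, -4]
```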

Reducing the value of pxud we obtain smaller differences on the oy axis between consecutive pixels, at a cost of an increased error of contour fit. Conversely, increasing the value of pxud we allow greater fluctuations of neighbouring pixels on the oy axis, obtaining this way a more precise representation of the contour. Looking at the matrix sort(ΔS) it is possible to notice a trend of finding the highest situated path for consecutive columns; this feature has been used in the practical implementation, i.e. in the function OCT_activ_cont:

function [yy,i]=OCT_activ_cont(L1,x,y,pud,pyud,pxud,pxlp,polaryzacja)
x=x(:);
y=y(:);
pam_grd=[];
pam_num=[];
if polaryzacja==1
for i=1:size(x,1)
    gr_gd=[];
    for j=-pud:pud
        wgp=(y(i)-pyud+j);
        wgk=(y(i)+j);
        kgp=(x(i)-pxlp);
        kgk=(x(i)+pxlp);
        wdp=(y(i)+j);
        wdk=(y(i)+pyud+j);
        kdp=(x(i)-pxlp);
        kdk=(x(i)+pxlp);
        if wgp<=0; wgp=1; end
        if wdp<=0; wdp=1; end
        if wgk>size(L1,1); wgk=size(L1,1); end
        if wdk>size(L1,1); wdk=size(L1,1); end
        if kgp<=0; kgp=1; end
        if kdp<=0; kdp=1; end
        if kgk>size(L1,2); kgk=size(L1,2); end
        if kdk>size(L1,2); kdk=size(L1,2); end
       Lu=L1(wgp:wgk,kgp:kgk);
       Ld=L1(wdp:wdk,kdp:kdk);
       gr_gd=[gr_gd;mean(Lu(:))-mean(Ld(:))];
    end
    pam_grd=[pam_grd,gr_gd];
    gr_num=[gr_gd,((y(i)-pud) : (y(i)+pud))'];
    gr_num=sortrows(gr_num,1);
    pam_num=[pam_num,gr_num(:,2)];
end


elseif polaryzacja==-1

for i=1:size(x,1)
    gr_gd=[];
    for j=-pud:pud
        wgp=(y(i)-pyud+j);
        wgk=(y(i)+j);
        kgp=(x(i)-pxlp);
        kgk=(x(i)+pxlp);
        wdp=(y(i)+j);
        wdk=(y(i)+pyud+j);
        kdp=(x(i)-pxlp);
        kdk=(x(i)+pxlp);
        if wgp<=0; wgp=1; end
        if wdp<=0; wdp=1; end
        if wgk>size(L1,1); wgk=size(L1,1); end
        if wdk>size(L1,1); wdk=size(L1,1); end
        if kgp<=0; kgp=1; end
        if kdp<=0; kdp=1; end
        if kgk>size(L1,2); kgk=size(L1,2); end
        if kdk>size(L1,2); kdk=size(L1,2); end
       Lu=L1(wgp:wgk,kgp:kgk);
       Ld=L1(wdp:wdk,kdp:kdk);
       gr_gd=[gr_gd;mean(Ld(:))-mean(Lu(:))];
    end
    pam_grd=[pam_grd,gr_gd];
    gr_num=[gr_gd,((y(i)-pud) : (y(i)+pud))'];
    gr_num=sortrows(gr_num,1);
    pam_num=[pam_num,gr_num(:,2)];
end

else
    disp('polarisation ?')
end
i_hh=[];
for hh=1:7
i=ones([1 size(pam_num,2)]);
i(1)=hh;
j=1;
while (j+1)<size(pam_num,2)
    if abs(pam_num(i(j),j)-pam_num(i(j+1),j+1))<pxud
            j=j+1;
    else
            if i(j+1)<size(pam_num,1)
                i(j+1)=i(j+1)+1;
            else
                i(j+1)=i(j);
                j=j+1;
            end
    end
end
i_hh=[i_hh;i];
end
[d_,smiec]=find(sum(i_hh,2)==min(sum(i_hh,2)));
i=i_hh(d_(1),:);

yy=y;
for i__=1:length(i)
    yy(i__)=pam_num(i(i__),i__);
end

The input arguments of the function are:

  • L1 – input image,
  • x – position of input points on the ox axis,
  • y – position of input points on the oy axis,
  • pud – range of searching for a new contour point position on the oy axis,
  • pyud – number of rows of the Lu and Ld areas,
  • pxud – permissible difference in the position on the oy axis of neighbouring contour points,
  • pxlp – number of columns of the Lu and Ld areas on each side of the analysed point,
  • polaryzacja – parameter responsible for a feature of the searched contour: “1” stands for a white object against a dark background, while “-1” – the opposite situation.

As a result, new coordinates on the oy axis are obtained. The presented implementation of the modified active contour has many limitations and built-in assumptions, for instance that the contour searched for is situated horizontally.

However, the function presented has very interesting properties depending on the parameters adopted. These properties will be the subject of further considerations in one of the next sections. Using the function as follows:

pud=10;
pyud=10;
pxud=2;
pxlp=10;
polaryzacja=1;
[yy,i]=OCT_activ_cont(mat2gray(Ls),linie_m_x,linie_m_y,pud,pyud,pxud,pxlp,polaryzacja);
plot(linie_m_x,yy,'-w*')
linie_12(:,2)=medfilt2(linie_12(:,2),[15 1]);
linie_12(:,3)=medfilt2(linie_12(:,3),[15 1]);
linie_mm_x=[flipud(xy_g_l(:,1)); linie_12((linie_12(:,1)>xy_g_l(1,1)) & (linie_12(:,1)<xy_g_p(1,1)),1); xy_g_p(:,1)];
linie_mm_y=[flipud(xy_g_l(:,2)); linie_12((linie_12(:,1)>xy_g_l(1,1)) & (linie_12(:,1)<xy_g_p(1,1)),3); xy_g_p(:,2)];
plot(linie_mm_x,linie_mm_y,'-w*')

We obtain the results presented in Fig. 3-51: the boundaries determined using the function OCT_activ_cont are marked white.

Fig. 3-51. Determined boundaries marked white, using the function OCT_activ_cont.

Fig. 3-52 shows determined boundaries, marked white, obtained using the function OCT_activ_cont and connected corresponding points are marked with turquoise lines.

Fig. 3-52. Determined boundaries marked white, using the function OCT_activ_cont.

To prepare the final fragment of the algorithm for anterior chamber volume calculation it is necessary to connect the lines and to allocate correspondence to individual points, which is a simple procedure, i.e.:

if length(linie_m_x)>=length(linie_mm_x)
    for ht=1:length(linie_mm_x)
        linie_r=sortrows([abs(linie_m_x'-linie_mm_x(ht)),linie_m_x',linie_m_y']);
        line([linie_r(1,2) linie_mm_x(ht)],[linie_r(1,3) linie_mm_y(ht)]);
    end
else
    for ht=1:length(linie_m_x)
        linie_r=sortrows([abs(linie_mm_x-linie_m_x(ht)),linie_mm_x,linie_mm_y]);
        line([linie_r(1,2) linie_m_x(ht)],[linie_r(1,3) linie_m_y(ht)]);
    end
end

The above stage of inside boundaries determination on a single OCT image is a compact whole, which has been located in the function OCT_edge_inside returning values of boundaries contour line coordinates, i.e.:

[linie_m_x1,linie_m_y1,linie_121,linie_mm_x1,linie_mm_y1]=OCT_edge_inside(Ls);

The last stage of the algorithm presented consists in calculating the anterior chamber volume on the basis of the reconstruction presented. There are many practical methods used in such calculations.

The first group of methods is based on the definition of the volume of a solid of revolution, formed as a result of revolving the function f(x) around the ox axis, and uses the formula for the volume V:

V = π ∫_{x1}^{x2} (f(x))^2 dx
7

In this case there is a difficulty in defining the analytical shape of the function f(x). On the other hand, the accuracy obtained using this method is very high.
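
The integral in formula (7) can be approximated numerically when f(x) is known only at sample points. A minimal sketch, not the book's code; the profile f below is purely illustrative:

```matlab
% Discrete counterpart of V = pi * integral of f(x)^2 (formula 7),
% evaluated with trapezoidal integration over a sampled profile.
x = linspace(0, 10, 1001);
f = sqrt(max(0, 25 - (x-5).^2));  % illustrative profile: semicircle of radius 5
V = pi * trapz(x, f.^2);          % revolving it around ox yields a sphere
% V approximates 4/3*pi*5^3, the exact volume of a sphere of radius 5
```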

The second method consists in the calculation of the average volume V, calculated from the solids of revolution of the contour for each image. This method features lower accuracy; however, the results are obtained quickly.

The third group of methods is based on the calculation of a sum of binary image pixels over the image sequence. This method is accurate and fast, but only when the data already exist as discrete structures – 3D matrices. Unfortunately, in this case the necessary conversion to 3D matrices causes an unnecessary increase in the algorithm's computational complexity.
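
If the chamber were already available as a binary 3D matrix, this method reduces to a single sum. A hedged sketch, where the matrix B and the test ball are assumptions, not the book's data:

```matlab
% Volume in voxels of a binary 3D structure: count the nonzero elements.
[xx, yy, zz] = meshgrid(1:64, 1:64, 1:64);
B = (xx-32).^2 + (yy-32).^2 + (zz-32).^2 <= 20^2;  % synthetic test ball, radius 20
V = sum(B(:));   % close to 4/3*pi*20^3 for this synthetic example
```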

The fourth method consists of two stages:

  • digitisation of the calculated 3D contour of the anterior chamber,
  • summing up spherical sectors formed from consecutive points of the wall, i.e.:
    S = (Δα·π/360 − sin(Δα)/2) · r^2
    8

which after simple transformations for the constant value Δα = 3.6° gives:

S = 0.2527 · r^2
9

The fifth method (the one practically implemented) consists in summing up the areas of unit triangles formed by the vertices (x1,y1,z1), (x2,y2,z2), (x3,y3,z3), (x4,y4,z4). By definition, x1=x2=x3=x4 has been ensured for consecutive iterations, for which there is a unit increment of the value on the ox axis.

The Pole (Area) variable contains the result of summing up the areas of the calculated triangles located in the oyz plane for x values featuring unit increments. The basic relationship for the area of a triangle has been used here, i.e.

S = (1/2) · |det([y1 z1 1; y2 z2 1; y3 z3 1])|
10
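
Formula (10) can be checked against MATLAB's polyarea on any triangle; the vertices below are illustrative only:

```matlab
% Area of a triangle in the oyz plane: half the absolute determinant.
y = [0 4 4]; z = [0 0 3];                              % a 3-4-5 triangle
S_det  = 0.5*abs(det([y(1) z(1) 1; y(2) z(2) 1; y(3) z(3) 1]));
S_poly = polyarea(y, z);                               % both equal 6
```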

A demonstrative figure is shown below, presenting the methodology adopted in the algorithm, designed to calculate the anterior chamber volume based on the sum of partial areas.

Fig. 3-53. Contour lines, based on which vertices of triangles analysed have been formed.


Fig. 3-54. Arrangement of triangle vertices (x1,y1,z1), (x2,y2,z2), (x3,y3,z3), (x4,y4,z4).


To perform a practical implementation of one of the methods for anterior chamber volume calculation, first it is necessary to apply the function OCT_edge_inside, returning the values of the boundary contour lines, to two OCT images made at an angle of 90° to each other:

path_name='d:/OCT/SOURCES/3.DCM';
fid = fopen(path_name, 'r');
dataa = fread(fid,'uint8');
fclose(fid);
[header_dicom,Ls1]=OCT_head_read(dataa);
[linie_m_x1,linie_m_y1,linie_121,linie_mm_x1,linie_mm_y1]=OCT_edge_inside(Ls1);
% second image (made at an angle of 90 degrees to the first one)
path_name='d:/OCT/SOURCES/3.DCM';
fid = fopen(path_name, 'r');
dataa = fread(fid,'uint8');
fclose(fid);
[header_dicom,Ls2]=OCT_head_read(dataa);
[linie_m_x2,linie_m_y2,linie_122,linie_mm_x2,linie_mm_y2]=OCT_edge_inside(Ls2);

In the results obtained it is necessary to modify the order of occurrence of the points' coordinates, i.e.:

xa1=[linie_mm_x1(end:-1:1)];
xa2=[linie_mm_x2(end:-1:1)];
za1=[linie_mm_y1(end:-1:1)];
za2=[linie_mm_y2(end:-1:1)];

Then we assume that the apparatus axis is in the middle of the image and correct the coordinates, i.e.:

mm1=median(xa1);
mm2=median(xa2);
xa1=xa1-median(xa1);
xa2=xa2-median(xa2);
ya1=zeros(size(za1));
ya2=zeros(size(za2));

In the next stage, as both images are situated against each other at an angle of 90°, an appropriate correction is applied, i.e.:

[THETAa,RHOa,Za] = cart2sph(xa1,ya1,za1);
THETAa=THETAa+90*pi/180;
[xa1,ya1,za1] = sph2cart(THETAa,RHOa,Za);

The division, necessary to carry out further steps of the algorithm, into the left and right part looks as follows:

xa1_a=xa1(ya1<=0);
xa1_b=xa1(ya1>0);
ya1_a=ya1(ya1<=0);
ya1_b=ya1(ya1>0);
za1_a=za1(ya1<=0);
za1_b=za1(ya1>0);
xa2_a=xa2(xa2<=0);
xa2_b=xa2(xa2>0);
ya2_a=ya2(xa2<=0);
ya2_b=ya2(xa2>0);
za2_a=za2(xa2<=0);
za2_b=za2(xa2>0);
figure
plot3(xa1_a,ya1_a,za1_a,'-r*'); grid on; hold on
plot3(xa1_b,ya1_b,za1_b,'-g*');
plot3(xa2_a,ya2_a,za2_a,'-b*');
plot3(xa2_b,ya2_b,za2_b,'-m*');
xlabel('x','FontSize',20); ylabel('y','FontSize',20);
zlabel('z','FontSize',20)

The result obtained is presented in Fig. 3-55.

Fig. 3-55. Determined boundaries xa1_a,ya1_a,za1_a; xa1_b,ya1_b,za1_b; xa2_a,ya2_a,za2_a; xa2_b,ya2_b,za2_b, marked in colours.


For further calculations it turns out to be necessary to unify the number of elements in each of the 4 edges visible in Fig. 3-55, as follows:

s_m=max([length(xa1_a), length(xa1_b), length(xa2_a),
length(xa2_b)]);
xa1_aa=[]; xa1_bb=[];
ya1_aa=[]; ya1_bb=[];
za1_aa=[]; za1_bb=[];
xa2_aa=[]; xa2_bb=[];
ya2_aa=[]; ya2_bb=[];
za2_aa=[]; za2_bb=[];
for it=1:s_m
    % ceil (rather than round) keeps the indices within 1..length(...)
    xa1_aa(it)=xa1_a(ceil((length(xa1_a)/s_m)*it));
    xa1_bb(it)=xa1_b(ceil((length(xa1_b)/s_m)*it));
    ya1_aa(it)=ya1_a(ceil((length(ya1_a)/s_m)*it));
    ya1_bb(it)=ya1_b(ceil((length(ya1_b)/s_m)*it));
    za1_aa(it)=za1_a(ceil((length(za1_a)/s_m)*it));
    za1_bb(it)=za1_b(ceil((length(za1_b)/s_m)*it));

    xa2_aa(it)=xa2_a(ceil((length(xa2_a)/s_m)*it));
    xa2_bb(it)=xa2_b(ceil((length(xa2_b)/s_m)*it));
    ya2_aa(it)=ya2_a(ceil((length(ya2_a)/s_m)*it));
    ya2_bb(it)=ya2_b(ceil((length(ya2_b)/s_m)*it));
    za2_aa(it)=za2_a(ceil((length(za2_a)/s_m)*it));
    za2_bb(it)=za2_b(ceil((length(za2_b)/s_m)*it));
end
plot3(xa1_aa,ya1_aa,za1_aa,'-w*'); grid on; hold on
plot3(xa1_bb,ya1_bb,za1_bb,'-w*');
plot3(xa2_aa,ya2_aa,za2_aa,'-w*');
plot3(xa2_bb,ya2_bb,za2_bb,'-w*');

The spline function is the basis for missing points reconstruction. The function spline returns the piecewise polynomial form of the cubic spline interpolant. For the values xa1_aa, ya1_aa, za1_aa, etc., unified in such a way, the following notation using the spline function has been introduced:

xc=[]; yc=[]; zc=[];
pam_p=[];
for i=1:s_m
    xi = pi*[0:.5:2];
    % the first and the fifth column coincide, which closes the contour
    xyzi = [xa1_aa(end-i+1), xa2_aa(end-i+1), xa1_bb(i), xa2_bb(i), xa1_aa(end-i+1);
            ya1_aa(end-i+1), ya2_aa(end-i+1), ya1_bb(i), ya2_bb(i), ya1_aa(end-i+1);
            za1_aa(end-i+1), za2_aa(end-i+1), za1_bb(i), za2_bb(i), za1_aa(end-i+1)];
    pp = spline(xi,xyzi);
    xyz_ = ppval(pp, linspace(0,2*pi,101));
    plot3(xyz_(1,:),xyz_(2,:),xyz_(3,:),'-*b')
    xc=[xc,xyz_(1,:)'];
    yc=[yc,xyz_(2,:)'];
    zc=[zc,xyz_(3,:)'];
end
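
The closed-contour trick used above can be seen on a minimal standalone example; the points below are assumptions for illustration only:

```matlab
% spline accepts vector-valued data: a 2 x 5 matrix of points sampled at
% parameters pi*[0:.5:2]; repeating the first point closes the curve.
t   = pi*[0:.5:2];
pts = [1 0 -1 0 1;      % x coordinates of four contour points (closed)
       0 1 0 -1 0];     % y coordinates
pp  = spline(t, pts);
c   = ppval(pp, linspace(0, 2*pi, 101));  % 2 x 101 points on a smooth loop
```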

The result obtained is presented in Fig. 3-56.

Fig. 3-56. Figure after reconstruction.


After calculations using the spline function, the next stage comprises the calculation of the anterior chamber volume, i.e.:

xc=round(xc); yc=round(yc); zc=round(zc);
Objetosc=0;                 % Objetosc - the volume, counted in pixels
min_x=min(min(xc))-1;
for iuu=1:(size(xc,2)-1)
    for iu=1:49
        xq=[]; yq=[]; zq=[];

        xq1=linspace(xc(iu,iuu),xc(end-iu,iuu),abs(xc(iu,iuu)-xc(end-iu,iuu)));
        xq(1,(xc(iu,iuu)-min_x):(xc(iu,iuu)-min_x+length(xq1)-1))=xq1;
        yq1=linspace(yc(iu,iuu),yc(end-iu,iuu),abs(xc(iu,iuu)-xc(end-iu,iuu)));
        yq(1,(xc(iu,iuu)-min_x):(xc(iu,iuu)-min_x+length(yq1)-1))=yq1;
        zq1=linspace(zc(iu,iuu),zc(end-iu,iuu),abs(xc(iu,iuu)-xc(end-iu,iuu)));
        zq(1,(xc(iu,iuu)-min_x):(xc(iu,iuu)-min_x+length(zq1)-1))=zq1;

        xq2=linspace(xc(iu,iuu+1),xc(end-iu,iuu+1),abs(xc(iu,iuu+1)-xc(end-iu,iuu+1)));
        xq(2,(xc(iu,iuu+1)-min_x):(xc(iu,iuu+1)-min_x+length(xq2)-1))=xq2;
        yq2=linspace(yc(iu,iuu+1),yc(end-iu,iuu+1),abs(xc(iu,iuu+1)-xc(end-iu,iuu+1)));
        yq(2,(xc(iu,iuu+1)-min_x):(xc(iu,iuu+1)-min_x+length(yq2)-1))=yq2;
        zq2=linspace(zc(iu,iuu+1),zc(end-iu,iuu+1),abs(xc(iu,iuu+1)-xc(end-iu,iuu+1)));
        zq(2,(xc(iu,iuu+1)-min_x):(xc(iu,iuu+1)-min_x+length(zq2)-1))=zq2;

        xq3=linspace(xc(iu+1,iuu),xc(end-iu+1,iuu),abs(xc(iu+1,iuu)-xc(end-iu+1,iuu)));
        xq(3,(xc(iu+1,iuu)-min_x):(xc(iu+1,iuu)-min_x+length(xq3)-1))=xq3;
        yq3=linspace(yc(iu+1,iuu),yc(end-iu+1,iuu),abs(xc(iu+1,iuu)-xc(end-iu+1,iuu)));
        yq(3,(xc(iu+1,iuu)-min_x):(xc(iu+1,iuu)-min_x+length(yq3)-1))=yq3;
        zq3=linspace(zc(iu+1,iuu),zc(end-iu+1,iuu),abs(xc(iu+1,iuu)-xc(end-iu+1,iuu)));
        zq(3,(xc(iu+1,iuu)-min_x):(xc(iu+1,iuu)-min_x+length(zq3)-1))=zq3;

        xq4=linspace(xc(iu+1,iuu+1),xc(end-iu+1,iuu+1),abs(xc(iu+1,iuu+1)-xc(end-iu+1,iuu+1)));
        xq(4,(xc(iu+1,iuu+1)-min_x):(xc(iu+1,iuu+1)-min_x+length(xq4)-1))=xq4;
        yq4=linspace(yc(iu+1,iuu+1),yc(end-iu+1,iuu+1),abs(xc(iu+1,iuu+1)-xc(end-iu+1,iuu+1)));
        yq(4,(xc(iu+1,iuu+1)-min_x):(xc(iu+1,iuu+1)-min_x+length(yq4)-1))=yq4;
        zq4=linspace(zc(iu+1,iuu+1),zc(end-iu+1,iuu+1),abs(xc(iu+1,iuu+1)-xc(end-iu+1,iuu+1)));
        zq(4,(xc(iu+1,iuu+1)-min_x):(xc(iu+1,iuu+1)-min_x+length(zq4)-1))=zq4;

        plot3(xq1,yq1,zq1,'r*');
        for tu=1:size(xq,2)
            if sum(xq(:,tu)~=0)==4
                Objetosc=Objetosc+0.5*...
                    abs(det([yq(1,tu) zq(1,tu) 1; yq(2,tu) zq(2,tu) 1; yq(3,tu) zq(3,tu) 1]))...
                    +0.5*...
                    abs(det([yq(2,tu) zq(2,tu) 1; yq(3,tu) zq(3,tu) 1; yq(4,tu) zq(4,tu) 1]));
            end
        end
    end
end
Objetosc

We obtain the result for the anterior chamber volume, expressed in pixels (Fig. 3-57):

Fig. 3-57. Anterior chamber with calculated volume marked red.


Objetosc =

  2.7584e+006

Fig. 3-58 presents the outside envelope of the measured volume, i.e.:

Fig. 3-58. Anterior chamber with calculated volume marked in a form of blue envelope.


figure
fd=surf(xc,yc,zc,'FaceColor',[0 0 1],...
    'EdgeColor','none',...
    'FaceLighting','phong');
daspect([5 5 1])
view(-50,30)
camlight left
set(fd,'FaceAlpha',.5)
hold on
plot3(xa1_aa,ya1_aa,za1_aa,'-w*'); grid on; hold on
plot3(xa1_bb,ya1_bb,za1_bb,'-w*');
plot3(xa2_aa,ya2_aa,za2_aa,'-w*');
plot3(xa2_bb,ya2_bb,za2_bb,'-w*');
axis equal
xlabel('x','FontSize',20); ylabel('y','FontSize',20);
zlabel('z','FontSize',20)

To confirm visually the correctness of the calculations and of the automation, it is possible to overlap the component images Ls1 and Ls2 on the envelope formed (Fig. 3-59), i.e.:

Fig. 3-59. Anterior chamber with calculated volume together with component flat images.


Ls1=imresize(Ls1,[256 512]);
Ls2=imresize(Ls2,[256 512]);
[XX,YY]=meshgrid(1:size(Ls1,2),1:size(Ls1,1));
Ls1=mat2gray(Ls1);
Ls1=uint8(round(histeq(Ls1)*255));
Ls1=cat(3,Ls1,Ls1,Ls1);
surface(ones(size(XX)),XX-mm1/(size(Ls1,1)/256),YY,(Ls1),...
    'FaceColor','texturemap',...
    'EdgeColor','none',...
    'CDataMapping','direct')
[XX,YY]=meshgrid(1:size(Ls2,2),1:size(Ls2,1));
Ls2=mat2gray(Ls2);
Ls2=uint8(round(histeq(Ls2)*255));
Ls2=cat(3,Ls2,Ls2,Ls2);
surface(XX-mm2/(size(Ls2,1)/256),ones(size(XX)),YY,(Ls2),...
    'FaceColor','texturemap',...
    'EdgeColor','none',...
    'CDataMapping','direct')

The algorithm presented has several drawbacks, and we encourage Readers to remove them. These drawbacks include:

  • the calculated volume is understated,
  • the shape of the top surface is not considered at all,
  • the calculated volume is expressed only in pixels – it should be converted to an appropriate unit of volume by reading the distance per pixel from the file header.

Summarising, the algorithm calculates the anterior chamber volume in a fully automated way. The computation time on a PC-class computer with the Windows Vista operating system, an Intel Core Quad Q9300 2.5 GHz processor and 8 GB RAM amounts to approx. 2 s.

Copyright © 2011 by Robert Koprowski, Zygmunt Wróbel.

This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported License

Bookshelf ID: NBK97176