

Edge-Preserving Noise Filtering Based on Adaptive Windowing

WOO-JIN SONG AND WILLIAM A. PEARLMAN

Abstract - A new adaptation procedure is introduced for determining in real time the extent of the analysis window in point estimation of signals corrupted by additive noise. In the tasks of restoring a noisy one-dimensional test signal and a two-dimensional noisy image, mean, median, and MMSE filters are compared with fixed and adaptive window implementations. The visual results of the signal and image restorations show convincingly the superior preservation of edge and detail and suppression of noise for the filters with adaptive windows.

Manuscript received February 4, 1987; revised September 23, 1987 and November 25, 1987.

The major portion of this work was performed at the Electrical, Computer and Systems Engineering Department, Rensselaer Polytechnic Institute, Troy, NY 12180 and supported by Rome Air Development Center, Griffiss Air Force Base, NY 13341 under Contract F30602-K-0151. This paper was recommended by Associate Editor H. Gharavi.

Woo-Jin Song is with Polaroid Corporation, Cambridge, MA 02139. William A. Pearlman is with the Electrical, Computer and Systems Engineering Department, Rensselaer Polytechnic Institute, Troy, NY 12180-3590. IEEE Log Number 8821687.

I. INTRODUCTION

Estimating a signal waveform of interest embedded in random noise is a classical problem in signal processing. Earlier techniques used for noise filtering were global filters based on the assumption of a stationary signal model. Noise filtering is a particularly difficult task when the original signal contains sharp edges and impulses of short duration. In image processing, for instance, it is very important to preserve these discontinuities while filtering out noise, since the human visual system is very sensitive to edge information [1]. Many adaptive schemes have been proposed recently to overcome this problem. Most of these techniques are based on local statistical operations over a fixed running window to extract useful information or obtain the estimate.

The simplest method of smoothing is taking the sample mean or median from a running window. Although the running mean filter provides excellent noise suppression over slowly varying signals, it smears edges. In median filtering, proposed by Tukey [2], the filtered output at each point is the sample median value inside a running window centered at the filtering point. The running median has been applied in many areas of digital signal processing, including speech processing [3] and image processing [4], [5]. One useful property of median filters is their ability to preserve signal edges while filtering out impulses. Thus median filtering can be used to remove impulsive noise while preserving signal edges. Unfortunately, it is well known that median filtering is not effective in suppressing additive white Gaussian noise [6]. Furthermore, realistic image signals may sometimes contain impulse-like structures.

An attempt to combine the edge-preserving feature of median filtering and the noise-suppressing property of mean filtering is a class of edge-preserving filters called trimmed-mean filters [7]-[9], where a small portion of the data inside a running window is removed and the remainder is averaged to avoid smearing of edges. The trimmed-mean filter of [8], [9] uses the median obtained from a small inner window to remove outliers which deviate more than three times the noise standard deviation from the median value. However, if no edges or outliers are present in the signal or within the window, the trimmed-mean filter is in effect reduced to the ordinary running mean filter, because all the data inside the large window are used in computing the mean. In the context of schemes utilizing more than first-order observation statistics within the sample window, Lee [10], [11] used both the sample mean and variance in a fixed window to give priority weighting to the pixel values to be estimated and hence suppress the effect of an edge at the window boundary. An iterative smoothing scheme by Nagao and Matsuyama [12] employs a rotating mask about a pixel point and replaces the input pixel value with the sample mean value within the mask at the orientation giving the minimum sample variance.
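One plausible code rendering of the trimmed-mean scheme just described is sketched below (hypothetical names, not the exact procedure of [8], [9]; the defaults correspond to the 3-sample inner and 9-sample outer windows used later in Section III, and sigma is the noise standard deviation):

```python
import numpy as np

def trimmed_mean_point(y, k, sigma, inner=1, outer=4):
    """Trimmed mean at index k: take the median of the small inner window,
    discard outer-window samples deviating from it by more than 3*sigma,
    and average what remains."""
    med = np.median(y[k - inner : k + inner + 1])        # inner-window median
    window = np.asarray(y[k - outer : k + outer + 1], dtype=float)
    kept = window[np.abs(window - med) <= 3.0 * sigma]   # remove outliers
    return kept.mean() if kept.size else float(med)
```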

For the type of filtering schemes utilizing a running mean or median as a smoother, the use of a large window produces a loss of fine details in the signal. On the other hand, a small window implies insufficient noise suppression. Therefore, as far as noise filtering is concerned, choosing an adequate window size is very important, because the degree of smoothing is directly proportional to the size of the window. Yet little work has been done on the problem of determining the desirable window size for smooth signals. In this paper we depart from previous work with fixed windows and introduce a new adaptive windowing scheme, where the window size varies for each filtering point. The window is expanded or contracted according to the computed value of the signal activity index compared to an adaptive threshold. In this way, the size of the window is adjusted automatically so that edges are on its perimeter and are excluded from the calculation of filtering parameters. Using this adaptive windowing algorithm, different point filtering operations are compared in order to show the noise smoothing and edge-preserving characteristics of the scheme.

The organization of this paper is as follows. In Section II, we describe the adaptive procedure for changing the size of the running window and discuss the application of the adaptive algorithm with different estimators for the cases of additive white noise, impulse noise, and signal-dependent noise. Experimental results on a simulated one-dimensional feature waveform in additive noise are presented in Section III to show the performance of the proposed filters and to compare them with their fixed-window counterparts. In Section IV, we present the two-dimensional version of the technique and the result of its application to a noisy image.

II. ADAPTIVE WINDOWING

Let us consider a one-dimensional signal sequence {x(k)}. Then the noisy data sequence {y(k)} is represented by

$$y(k) = x(k) + n(k), \qquad k = 1, 2, 3, \ldots \qquad (1)$$

where n(k) is uncorrelated additive white noise of zero mean and variance σ². We assume that the signal sequence {x(k)} is nonstationary in general and contains sharp edges and pulses.

If a globally nonstationary sequence can be considered to be locally stationary and ergodic over a small neighborhood, the sample statistics required for noise filtering can be obtained from a window in which the signal values have the same statistical characteristics. To achieve an accurate statistical operation, the size of the window should be determined such that the window falls on the correct neighborhood. If we use a relatively large window, the window may include data values from different regions, and smearing may take place over edges and information-bearing regions. On the other hand, the use of a small window can reduce the loss of fine details, but the lack of samples in generating the sample statistics may result in insufficient noise suppression over flat regions. Hence we should use a small window for busy signals and a large window for flat or slowly varying signals. However, there exists no systematic rule to determine the window size for a given signal, although there have been many adaptive techniques based on the running window scheme. Even though one can determine a fixed window size, a window size desirable for one part of a signal could be inappropriate for other parts of the signal. This is why many noise filtering methods based on the running window are still conceived as ad hoc methods which should not be used blindly. One way to overcome this problem is to use windows of various sizes for different regions in a signal. In this section, we will introduce an adaptive method in which the size of the window varies according to signal activity as the window centered at a filtering point slides along the noisy data. This means that the filters will be able to detect edges and modify the window, depending on the characteristics of the signal.

The sample mean of the noisy data at time k is given by

$$\bar{y}_k = \frac{1}{L_k}\sum_{j=k-N_k}^{k+N_k} y(j) \qquad (2)$$

where L_k is the length of the window at k and N_k is an integer which determines the neighborhood around k. N_k is related to L_k by

$$L_k = 2N_k + 1. \qquad (3)$$

The mean of the squared data at k, R_k, is also obtained from the same window as

$$R_k = \frac{1}{L_k}\sum_{j=k-N_k}^{k+N_k} y^2(j). \qquad (4)$$

Then the sample variance V_k is given by

$$V_k = R_k - \bar{y}_k^2. \qquad (5)$$

One of the most important statistical parameters representing the variational characteristics of a random variable is the variance. Thus a good measure of signal activity in the noisy observation would be the sample variance of the signal. Since the zero-mean noise is uncorrelated with the signal, the signal variance is equivalent to the difference between V_k and the noise variance. Hence we define the signal activity index S_k by

$$S_k = \max[V_k - \sigma^2,\, 0] \qquad (6)$$

where σ² denotes the noise variance. The activity index reflects the degree of local roughness of the signal within the window. Thus if S_k is sufficiently large, the window size should be reduced to suppress the blurring neighborhood effect. On the other hand, if S_k is found to be small compared to a threshold, the window size will be increased to provide sufficient noise suppression. Fortunately, the distortion due to large windows over flat regions is not serious, because those regions carry relatively small amounts of information. The window sizes are sequentially determined according to signal activity as follows.
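As an illustration, the window statistics of (2)-(5) and the activity index of (6) can be computed as in the sketch below (Python; the helper name is hypothetical, and the window is assumed to lie entirely inside the data):

```python
import numpy as np

def activity_index(y, k, N_k, noise_var):
    """Signal activity index S_k of (6) from the window statistics (2)-(5).

    y         : 1-D array of noisy samples
    k         : current filtering index (0-based)
    N_k       : window half-width, so L_k = 2*N_k + 1
    noise_var : noise variance sigma^2, assumed known or pre-estimated
    """
    window = np.asarray(y[k - N_k : k + N_k + 1], dtype=float)
    y_bar = window.mean()              # sample mean, eq. (2)
    R_k = np.mean(window ** 2)         # mean of the squared data, eq. (4)
    V_k = R_k - y_bar ** 2             # sample variance, eq. (5)
    return max(V_k - noise_var, 0.0)   # activity index, eq. (6)
```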

At the initial filtering point k = 1, the window contains one data sample. That is,

$$N_1 = 0. \qquad (7)$$

Then for k > 1, the next window-length will be either incremented or decremented depending on whether the present signal activity index is less than or greater than the present threshold. That is,

$$N_{k+1} = \begin{cases} N_k + 1, & \text{if } S_k < T_k \\ \max[N_k - 1,\, 0], & \text{if } S_k > T_k \end{cases} \qquad (8)$$

where T_k denotes the threshold at time k. Obviously, the smoothness of the filtered output depends on the threshold value T_k. As T_k becomes large, the window-length tends to increase and the output becomes smoother, and vice versa. The threshold at time k, T_k, is formulated as

$$T_k = \eta\,\frac{\sigma^2}{L_k} \qquad (9)$$

where η is a constant which controls the smoothness of the output. Note that if the noise variance increases, the thresholds become high, the windows are more likely to be incremented than decremented, and the smoothing action of the filter becomes more effective. On the other hand, if σ² = 0, no noise is present and the output is the input itself. The threshold is also set to be inversely proportional to the current window-length because a change in the value of a signal sample inside the window is reflected in the signal activity index by an order of 1/L_k. It should be noted that this relation makes the window-length converge to a certain range. That is, if the present window contains too many samples, the resulting lowered threshold tends to reduce the next window-length.


The constant η of the threshold in (9) can be considered as a design parameter which controls the smoothness of the filtered output. Repeated experiments show that integer values from 4 to 10 make little difference in the result.

When a discontinuity point is detected at the forward end of a window and the lengths of the following windows are successively decreased, the windows will retain the discontinuity point as the filtering point approaches it. If, however, a succeeding window is to be incremented, it may contain two aberrant samples with its leading end touching an edge. In order to guarantee that the on-coming windows contain only one edge sample, the following provision is added to the previous algorithm: if the present window-length has been incremented but the corresponding signal activity index is found to be greater than the threshold, and thus the next window-length is to be decremented, then the present window-length is adjusted and the activity index is recomputed before proceeding to the next filtering point. That is, if and only if N_{k-1} < N_k and S_k > T_k, the present window-length N_k is decremented by one and the corresponding activity index S_k is recomputed from the adjusted window-length to determine the next window-length N_{k+1}.
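Putting these pieces together, the sequential window-length adaptation of (7)-(9), including the decrement-and-recompute provision just described, might look like the following sketch (hypothetical names; windows are simply clipped at the sequence ends rather than handled with the barrier rule described next, and the tie case S_k = T_k is resolved toward shrinking):

```python
import numpy as np

def adaptive_half_widths(y, noise_var, eta=6.0):
    """Sketch of the sequential window-length adaptation of Section II.

    y         : 1-D array of noisy samples
    noise_var : noise variance sigma^2
    eta       : smoothing constant of the threshold (9)

    Returns one half-width N_k per sample (window length L_k = 2*N_k + 1).
    """
    y = np.asarray(y, dtype=float)
    n = len(y)

    def stats(k, N_k):
        """Activity index S_k of (6) and the actual window length used."""
        lo, hi = max(k - N_k, 0), min(k + N_k, n - 1)
        w = y[lo:hi + 1]
        V = np.mean(w ** 2) - w.mean() ** 2            # eqs. (2)-(5)
        return max(V - noise_var, 0.0), hi - lo + 1

    half_widths = []
    N, prev_N = 0, 0                                   # N_1 = 0, eq. (7)
    for k in range(n):
        S, L = stats(k, N)
        T = eta * noise_var / L                        # threshold, eq. (9)
        # Provision: if the window was just incremented but now exceeds the
        # threshold, shrink it and recompute before deciding the next length.
        if prev_N < N and S > T:
            N = max(N - 1, 0)
            S, L = stats(k, N)
            T = eta * noise_var / L
        half_widths.append(N)                          # window used at k
        prev_N = N
        N = N + 1 if S < T else max(N - 1, 0)          # update rule, eq. (8)
    return half_widths
```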

Endpoints of the data sequence are treated as barriers for the windowing algorithm. In such cases, the current data point is not at the center of the window, since a window edge is not allowed to extend beyond the data endpoint.

Finally, it should be noted that the noise variance σ² can be estimated and updated from a very large window over a flat part of the noisy signal.

A. Adaptive Mean Filtering

Once the length of the window at the present point is determined, the statistical operation intended for estimating the signal value at the point can be done inside the window. But we will exclude the front-end point of the sliding window in the estimating procedure, since that sample is likely to be aberrant. Therefore the output of the adaptive mean filter, which is simply the sample mean, is obtained such that

$$\hat{x}(k) = \begin{cases} y(k), & \text{if } N_k = 0 \\ \dfrac{1}{L_k - 1}\Big[\displaystyle\sum_{j=k-N_k}^{k+N_k} y(j) - y(k+N_k)\Big], & \text{otherwise.} \end{cases} \qquad (10)$$
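A minimal sketch of (10), assuming the half-width N_k for the point has already been determined (the helper name is hypothetical):

```python
import numpy as np

def adaptive_mean_output(y, k, N_k):
    """Adaptive mean estimate of eq. (10): average the window at k but
    exclude the front-end sample y(k + N_k), which may be aberrant."""
    if N_k == 0:
        return float(y[k])
    window = np.asarray(y[k - N_k : k + N_k + 1], dtype=float)
    return (window.sum() - window[-1]) / (len(window) - 1)
```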

B. Adaptive Minimum Mean Square Error (MMSE) Filtering

At a given time k, the noisy signal y(k) can be treated as a random variable which is the sum of two random variables, x(k) and n(k). If we denote the mean of the signal random variable x(k) at time k by m and the variance by v, then the linear minimum mean-squared error (MMSE) estimate [13] is obtained as

$$\hat{x}(k) = m + \frac{v}{v + \sigma^2}\,\big(y(k) - m\big) \qquad (11)$$

and its mean-squared error is given by

$$E\big\{[x(k) - \hat{x}(k)]^2\big\} = \frac{v\sigma^2}{v + \sigma^2} \qquad (12)$$

where E denotes the statistical expectation. It is obvious that the a priori mean and variance, m and v, can be estimated from the observation in the same window as for the mean filter. As we see in (11), the MMSE estimate is a linear combination of the mean and the innovation term, which is scaled by a factor equal to the ratio of the signal variance to the a posteriori variance.


Fig. 1. Test waveform original and corrupted by additive noise of variance 64.

Therefore, the only difference between the adaptive mean filter and the adaptive MMSE filter is the relative emphasis on the data at the filtering point. If the mean value is considered too suppressive, then the MMSE estimate can be an alternative for the output of the filter. The mean m at time k is approximated by the sample mean computed from the same time-varying window such that

$$m = \begin{cases} y(k), & \text{if } N_k = 0 \\ \dfrac{1}{L_k - 1}\Big[\displaystyle\sum_{j=k-N_k}^{k+N_k} y(j) - y(k+N_k)\Big], & \text{otherwise.} \end{cases} \qquad (13)$$

Similarly, the signal variance v is given by

$$v = \begin{cases} 0, & \text{if } N_k = 0 \\ \dfrac{1}{L_k - 1}\Big[\displaystyle\sum_{j=k-N_k}^{k+N_k} y^2(j) - y^2(k+N_k)\Big] - m^2, & \text{otherwise.} \end{cases} \qquad (14)$$

One advantage of the use of the MMSE estimate for smoothing is the fact that with the expression in (12) we can predict the approximate mean-squared estimation error in the filtered output.
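Under the same conventions, the adaptive MMSE point estimate of (11), with m and v obtained as in (13) and (14), might be sketched as follows (hypothetical name; clipping v at zero is a practical guard added here, not part of (14)):

```python
import numpy as np

def adaptive_mmse_output(y, k, N_k, noise_var):
    """Adaptive MMSE estimate of eqs. (11)-(14): plug the windowed estimates
    of m and v (front-end sample excluded) into the linear MMSE formula."""
    if N_k == 0:
        return float(y[k])
    w = np.asarray(y[k - N_k : k + N_k], dtype=float)   # y(k-N_k) .. y(k+N_k-1)
    m = w.mean()                                        # eq. (13)
    v = max(np.mean(w ** 2) - m ** 2, 0.0)              # eq. (14), clipped at 0
    return m + (v / (v + noise_var)) * (y[k] - m)       # eq. (11)
```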

C. Adaptive Median Filtering

Instead of the mean, the median value of all the data inside the time-varying window can be chosen as the output of the filter. Given the window-length determined from the previous window-length, the estimate of the signal is taken to be the median value of the noisy data inside the window centered at the present filtering point. Hence,

$$\hat{x}(k) = \operatorname{med}\big\{y(k-N_k), \ldots, y(k+N_k)\big\}. \qquad (15)$$
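A corresponding one-line sketch of the adaptive median output (15) (hypothetical name):

```python
import numpy as np

def adaptive_median_output(y, k, N_k):
    """Adaptive median estimate of eq. (15): the median of all samples in
    the window of half-width N_k centered at k."""
    return float(np.median(y[k - N_k : k + N_k + 1]))
```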

The adaptive windowing technique for signal-dependent noise can be readily developed with a slight modification. One type of signal-dependent noise is Poisson noise, which is the source of degradation in photon-limited images. The application of the adaptive windowing technique to this problem can be made similarly and was presented in [14].

III. SIMULATION RESULTS

In this section we present several simulation results which illustrate the performance of the adaptive filters we have discussed. Fig. 1 shows an artificially created test sequence which contains sharp discontinuities and monotonic changes, and the test input which is obtained by adding zero-mean white Gaussian noise of variance 64 to this test sequence.



Fig. 2. Results of different fixed window point filters for noisy test signal.


Fig. 3. Results of adaptive windowing with different point filters for noisy test signal.

In Fig. 2 are shown the results of filtering with the different fixed window estimators: mean, median, MMSE, and trimmed-mean. The window sizes used were those which produced the least mean-squared error (MSE) and turned out to be 7 or 9. Although the use of MSE as a measure of filter performance is questionable in some applications, we have calculated the empirical MSEs between the original sequence and the filter output and listed them in Table I. The relatively large MSE of 110.8 for mean filtering comes in large part from the broadening of the two sharp, steep edges. The trimmed-mean filter achieved the best MSE and visual performance. The length of the inner window for the median was 3 and that of the outer window was 9. The size of the inner window was fixed to 3 because pulses of duration greater than 2 are to be preserved for the general use of the filter. Various sizes of the outer window were tested for the best result.

In Fig. 3 are shown the results of the filters with adaptive windows. As we see, the noise is sufficiently smoothed over the entire sequence while the discontinuities are well preserved. The measured MSE of 6.18 for the adaptive mean filter is less than one-tenth of the original noise variance and one-eighteenth of the MSE of the fixed window mean filter. It turns out that values of the smoothing constant from 4 to 10 make little difference in the measured MSE. Fig. 4 shows the variation of the window size that was sequentially determined during the filtering procedure. The average length of the window was 16.5. The window adaptation has also lowered the MSEs to 9.0 from 15.1 for the MMSE filter and to 9.6 from 11.1 for the median filter.

Fig. 4. Window length variation in adaptive windowing for noisy test signal (smoothing constant = 6.0).

TABLE I
MEAN SQUARED ERRORS FOR RESTORATIONS OF TEST SEQUENCE*

Method                   Mean         Median      MMSE        Trimmed Mean
Fixed (window size)      110.8 (7)    11.1 (9)    15.1 (7)    7.61 (3, 9)
Adaptive                 6.18         9.6         9.0         --

*Noise variance = 64.

The reduction of the MSEs through the window adaptation here is not as dramatic as that for the mean filter, whose fixed window implementation turned out to be particularly poor for the given test signal.

Since the presence of several flat tops in the test signal makes a locally fixed signal in noise a better model of the observation, the mean and median estimators produced lower MSE than the MMSE estimator, which is optimum for a stationary random signal in noise. The conclusion from these experiments is that the adaptive window estimators not only produce lower MSEs than their fixed window counterparts, but also preserve the salient features and details of the signal while smoothing the noise. This will be better illustrated in the two-dimensional image restorations to follow.

IV. TWO-DIMENSIONAL FILTERING FOR IMAGES

One of the most immediate applications of the noise filtering technique discussed in the previous sections is in the area of image processing. Real-life images contain many sharp changes in gray level, and the human visual system is known to be sensitive to such edges. Linear filters applied in image processing remove high-frequency noise components but smear out edges and details, since such filters are based on the band-limiting property. Since the adaptive windowing technique developed so far removes random noise while preserving the sharp changes in an input signal, the two-dimensional extension of the filtering scheme is expected to provide an excellent noise reduction procedure for images.

The noisy image y(i, j) at point (i, j) is represented by the sum of the ideal image x(i, j) and the noise n(i, j) such that

$$y(i,j) = x(i,j) + n(i,j), \qquad i = 1, 2, \ldots, P;\; j = 1, 2, \ldots, Q \qquad (16)$$

where n(i, j) is a zero-mean white noise sequence uncorrelated with x(i, j). The noise variance σ² can be estimated from a large window in a flat area of the noisy image.



Fig. 5. A two-dimensional window touching an edge.


The sample statistics needed for generating the signal activity index are computed over a window centered at the filtering point, as shown in Fig. 5. We adopt square windows, since they make the algorithm simple and isotropic. The picture elements inside an L_ij × L_ij window at point (i, j) can be represented by the set

$$W(i,j) = \{(k,l)\,|\, i - N_{ij} \le k \le i + N_{ij},\; j - N_{ij} \le l \le j + N_{ij}\}$$

where

$$L_{ij} = 2N_{ij} + 1.$$

Among the elements in W(i, j), only the samples on the boundary of the window are used in measuring signal activity, because they can effectively represent the characteristics of the neighborhood as the size of the window changes. We denote the boundary elements in a window by the set

$$F(i,j) = \{(k,l) \in W(i,j)\,|\, k = i - N_{ij} \text{ or } k = i + N_{ij} \text{ or } l = j - N_{ij} \text{ or } l = j + N_{ij}\}.$$

Then it can be shown that the number of elements in F(i, j) is 4(L_ij - 1), and the sample mean for the signal activity index is given by

$$\bar{y}_{ij} = \frac{1}{4(L_{ij} - 1)}\sum_{(k,l)\in F(i,j)} y(k,l).$$

The mean of the squared data, R_ij, is also computed from the same elements as

$$R_{ij} = \frac{1}{4(L_{ij} - 1)}\sum_{(k,l)\in F(i,j)} y^2(k,l).$$

Then the sample variance for the signal activity index is given by

$$V_{ij} = R_{ij} - \bar{y}_{ij}^2.$$

Since the sample variance consists of the signal variance and the noise variance, the signal activity index is defined as

$$S_{ij} = \max[V_{ij} - \sigma^2,\, 0].$$

If a two-dimensional data array is handled by a digital computer in column-by-column fashion, the N_ij's are sequentially determined in the same manner as in the one-dimensional rule (8), with N_ij constrained to lie between N_min and N_max, where N_min is 1 and N_max is to be determined according to the computational requirements of an application. N_max = 5 may be sufficient in most applications. When j = 1, i.e., for the initial element of each column, the window may be reset to a moderate size.

Fig. 6. (a) Original image. (b) Original image in additive white Gaussian noise of variance σ² = 256 (SNR = 10 dB).

The adaptive windowing procedure will assume that it meets an edge area when the signal activity reflects no fewer than three aberrant samples in the present F(i, j), since the smallest window size is 3 × 3. Thus the threshold at point (i, j) is formulated as

$$T_{ij} = \eta\,\frac{3\sigma^2}{4(L_{ij} - 1)}. \qquad (22)$$

As in the one-dimensional version, the following step is included in the above procedure to make the inner region of the window free of aberrant data when it touches an ideal edge. That is, if and only if N_{i-1,j} < N_ij and S_ij > T_ij, N_ij is decremented by one and the present activity index is recomputed from the adjusted window before determining the next window.


Fig. 7. (a) Result of 5 × 5 mean filter on image in Fig. 6(b). (b) Result of 5 × 5 median filter on image in Fig. 6(b). (c) Result of 5 × 5 MMSE filter on image in Fig. 6(b).


The proposed sequential variation of the window takes place for all interior points (i, j) where a full (2N_max + 1) × (2N_max + 1) square window is available. The window shape is modified appropriately for other points in the vicinity of a boundary of the image.

Once the window size of the present point is determined, the estimate, or the output, is evaluated from the inside of the window, excluding the boundary elements that are used to generate the signal activity index. For example, the sample mean for point (i, j), which is also the output of the two-dimensional adaptive mean filter, is obtained as

$$\hat{x}(i,j) = \frac{1}{(L_{ij} - 2)^2}\sum_{(k,l)\in W(i,j)\setminus F(i,j)} y(k,l). \qquad (23)$$

Similarly, the sample variance is given by the analogous two-dimensional modification of (4) and (5). To calculate the MMSE estimator for point (i, j), these mean and variance estimates in the interior of the window replace m and v in the one-dimensional formula (11), with index k replaced by (i, j).
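For a single interior pixel, the boundary statistics, the threshold of (22), and the interior mean of (23) can be sketched as below, assuming the half-width N (at least 1) has already been determined; the function name and the choice to return a grow/shrink flag alongside the estimate are illustrative, not the paper's interface:

```python
import numpy as np

def adaptive_point_2d(img, i, j, N, noise_var, eta=6.0):
    """One interior point of the 2-D scheme of Section IV.

    The boundary set F(i, j) of the (2N+1) x (2N+1) window supplies the
    activity index S_ij; the interior supplies the adaptive-mean estimate
    of eq. (23). Returns the estimate and whether the next window should
    grow (S_ij below the threshold of eq. (22)) or shrink.
    """
    L = 2 * N + 1                                   # window side, N >= 1
    block = img[i - N:i + N + 1, j - N:j + N + 1].astype(float)
    boundary = np.concatenate([block[0, :], block[-1, :],
                               block[1:-1, 0], block[1:-1, -1]])  # 4(L-1) pixels
    V = np.mean(boundary ** 2) - boundary.mean() ** 2
    S = max(V - noise_var, 0.0)                     # 2-D activity index
    T = 3.0 * eta * noise_var / (4.0 * (L - 1))     # threshold, eq. (22)
    estimate = block[1:-1, 1:-1].mean()             # interior mean, eq. (23)
    return estimate, S < T
```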

V. IMAGE SIMULATION RESULTS

White Gaussian noise of zero mean and variance σ² = 256 was added to the 512 × 512 test image of 256 gray levels in Fig. 6(a) to produce the noisy image of Fig. 6(b). The signal-to-noise ratio (SNR), referenced to the signal variance of 2734, is 10 dB. The noisy image was filtered using a fixed square window and, in turn, the mean, median, and MMSE estimators. The window dimension of 5 × 5 proved by trial and error to be the best choice. The results for these fixed window estimators are shown in the same order in Fig. 7.


TABLE II

                         Fixed (5×5) Window Estimators    Adaptive Window Estimators
                         MMSE     Mean     Median          MMSE     Mean
MSE                      55.9     82.5     71.9            47.2     45.3
SNR Improvement (dB)     6.61     4.92     5.52            7.35     7.52

Fig. 8. (a) Result of adaptive mean filter on image in Fig. 6(b). (b) Result of adaptive MMSE filter on image in Fig. 6(b).

As reported in Table II, the MSEs for these three filters are 82.5, 71.9, and 55.9, corresponding to SNR improvements of 4.9, 5.5, and 6.6 dB. The MMSE filter is visually and measurably the best of the three, but does not suppress noise near edges due to the large sample variances generated in the fixed windows across the edges. Reducing the window size to remedy this situation would produce insufficient noise smoothing in flat contrast regions. The median filter preserved edges and detail quite well, but was inferior to the other two filters in suppression of background noise.

The two-dimensional adaptive windowing algorithm with N_max = 5 was then applied to the noisy image in conjunction with the mean and MMSE estimators. The median estimator was not attempted with adaptive windowing, because the sorting required for finding the median intensity in an area up to 11 × 11 pixels for every image point was judged to be too heavy a computational load. In Fig. 8 are shown the visual results of the two-dimensional adaptive mean and MMSE filters, and in Table II the numerical results. The MSEs of these filters are very close: 45.3 for the adaptive mean, for an SNR improvement of 7.52 dB, and 47.2 for the adaptive MMSE, for an SNR improvement of 7.35 dB. The visual characteristics and quality of these restorations are nearly identical.

What is striking in comparison to the fixed window restorations is the suppression of noise in the relatively flat contrast areas of the image and the preservation of edges and detail. In fact, most of the estimation error is on the edges within the image, where its effect is mitigated by the masking characteristics of the human visual system.

VI. CONCLUSION

A new class of edge-preserving filters, based on adaptive windowing, has been presented in this paper. It was observed that the adaptive filters utilizing the time-varying window effectively reduce random noise in an input signal while preserving edges and details of the signal. Simulation results with noisy images have shown that the two-dimensional extension of the proposed technique is suitable for noise reduction in images.

REFERENCES

[1] R. Held, Image, Object, and Illusion (Readings from Scientific American). San Francisco, CA: Freeman, 1974.
[2] J. W. Tukey, "Nonlinear (nonsuperposable) methods for smoothing data," in Conf. Rec., 1974 EASCON, p. 673.
[3] L. R. Rabiner, M. R. Sambur, and C. E. Schmidt, "Applications of a nonlinear smoothing algorithm to speech processing," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-23, pp. 552-557, Dec. 1975.
[4] T. S. Huang, G. T. Yang, and G. Y. Tang, "A fast two-dimensional median filtering algorithm," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-27, pp. 13-18, Feb. 1979.
[5] W. K. Pratt, Digital Image Processing. New York: Wiley, 1978.
[6] J. W. Tukey, Exploratory Data Analysis. Reading, MA: Addison-Wesley (prelim. ed. 1971), 1977.
[7] J. B. Bednar and T. L. Watt, "Alpha-trimmed means and their relationship to median filters," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-32, pp. 145-153, Feb. 1984.
[8] C. A. Pomalaza-Raez and C. D. McGillem, "An adaptive, nonlinear edge-preserving filter," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-32, pp. 571-576, June 1984.
[9] Y. H. Lee and S. A. Kassam, "Generalized median filtering and related nonlinear filtering techniques," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-33, pp. 672-683, June 1985.
[10] J.-S. Lee, "Digital image enhancement and noise filtering by use of local statistics," IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-2, pp. 165-168, Mar. 1980.
[11] J.-S. Lee, "Refined filtering of image noise using local statistics," Computer Graphics and Image Processing, vol. 15, pp. 380-389, 1981.


[12] M. Nagao and T. Matsuyama, "Edge-preserving smoothing," Computer Graphics and Image Processing, vol. 9, pp. 399-407, 1979.
[13] A. Papoulis, Probability, Random Variables, and Stochastic Processes. New York: McGraw-Hill, 1965.
[14] W.-J. Song, "New robust estimators for image restoration," Ph.D. dissertation, Rensselaer Polytechnic Institute, Troy, NY, Aug. 1986.

Minimal State-Space Realization of Factorable 2-D Transfer Functions

G. E. ANTONIOU, P. N. PARASKEVOPOULOS, AND S. J. VAROUFAKIS

Abstract - An algorithm is presented for the minimal state-space realization of a two-dimensional (2-D) transfer function for the special case when the numerator or the denominator of the 2-D transfer function is factorable. The state-space representation is directly derived by inspection from a circuit block diagram realization of the 2-D system. The present algorithm does not require that the numerator or the denominator polynomial be factored out, as opposed to known techniques.

I. INTRODUCTION

The state-space realization of a 2-D transfer function is of considerable theoretical and practical importance. The notion of minimality of the state-space model [1]-[5] plays an important role in the analysis and design of 2-D systems, because of the large amount of data involved in 2-D digital signal processing. It has been shown that minimal state-space realizations are not always possible for all rational 2-D transfer functions [5]-[7]. However, minimal state-space realizations have been determined for the following special cases:

a) All-pole systems [8], [9].
b) All-zero systems [8], [9].
c) Systems that can be expanded into a continued fraction [10], [11].
d) Systems with separable numerator [6], [9], [12].
e) Systems with separable denominator [2], [6], [9], [12]-[14].

Also, possible techniques for achieving minimal realizations are given in [15] and [16]. In [17] a counterexample has been given for the algorithm proposed in [16]. Furthermore, recently, results have been presented for separately balanced realizations of separable denominator transfer functions from input-output data [18].

In this paper we study the case of 2-D transfer functions whose numerator or denominator is product factorable and need not be separated. First, a factorability test is introduced which allows one to determine whether or not the numerator or the denominator can be factored. A similar test is given in [19]. Next, an algorithm is presented which circumvents the tedious and sometimes difficult task of factoring a 2-D polynomial, while it yields a minimal state-space realization. This algorithm is based on a circuit block diagram realization of the unfactored 2-D transfer function and gives the state-space matrices by inspection.

Overall, the present algorithm constitutes an alternative circuit approach to the state-space realization of separable numerator or denominator 2-D transfer functions reported in [6], [9], [12]-[14].

Manuscript received May 20, 1987. This paper was recommended by Associate Editor P. K. Rajan.

G. E. Antoniou and P. N. Paraskevopoulos are with the Department of Electrical Engineering, Division of Computer Science, National Technical University of Athens, 157 73 Zographou, Athens, Greece.

S. J. Varoufakis is with the Telecommunications and Informatics Institute, NRCPS "Democritos", 153 10 Ag. Paraskevi, Athens, Greece.

II. STATEMENT OF THE PROBLEM

Consider the linear time-invariant 2-D system described by the spatial transfer function [20]:

$$T(w^{-1}, z^{-1}) = \frac{\displaystyle\sum_{i=0}^{m}\sum_{j=0}^{n} g_{ij}\, w^{-i} z^{-j}}{\displaystyle\sum_{i=0}^{m}\sum_{j=0}^{n} h_{ij}\, w^{-i} z^{-j}}. \qquad (1)$$

The problem considered in this paper is to determine a minimal state-space model of system (1) for the following two cases.

Case 1: The denominator coefficients of (1) can be arbitrary, while the numerator coefficients satisfy the following relationship:

$$g_{ij} = g_{i0}\, g_{0j} \qquad (2)$$

with g_00 assumed, for simplicity, to be one.

Case 2: In the present case the numerator coefficients of (1) can be arbitrary, while the denominator coefficients satisfy the following relationship:

$$h_{ij} = h_{i0}\, h_{0j} \qquad (3)$$

with h_00 assumed, for simplicity, to be one.
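Conditions (2) and (3) say that the corresponding coefficient matrix has a rank-one outer-product form, which suggests a simple numerical check such as the sketch below (hypothetical name; G holds the numerator coefficients g_ij, and the same test applied to the denominator coefficients covers Case 2):

```python
import numpy as np

def numerator_factorable(G, tol=1e-9):
    """Check condition (2): g_ij = g_i0 * g_0j for all i, j (with g_00 = 1),
    i.e. the coefficient matrix equals the outer product of its first column
    and first row."""
    G = np.asarray(G, dtype=float)
    if abs(G[0, 0] - 1.0) > tol:
        return False
    outer = np.outer(G[:, 0], G[0, :])          # g_i0 * g_0j
    return np.allclose(G, outer, atol=tol)
```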

Note that (2) and (3) can be used to determine if (1) is numerator or denominator factorable.

The state-space model sought is of the Givone-Roesser type [1], described as follows:

where x^h(i, j) is the horizontal state vector, x^v(i, j) is the vertical state vector, u(i, j) is the input vector, and y(i, j) is the output vector.

The matrices A, b, and c' have appropriate dimensions and are partitioned accordingly.

III. STATE-SPACE REALIZATION

Case 1: The spatial transfer function (1), when g_ij = g_i0 g_0j (and g_00 = 1), may be presented in a block diagram as in Fig. 1. A 2-D minimal state-space realization may be written by inspection,