Mathl. Comput. Modelling Vol. 27, No. 9-11, pp. 257-291, 1998
© 1998 Elsevier Science Ltd. All rights reserved
Printed in Great Britain
PII: S0895-7177(98)00064-5
0895-7177/98 $19.00 + 0.00

    Traffic Incident Detection: Sensors and Algorithms

R. WEIL, J. WOOTTON AND A. GARCIA-ORTIZ
Advanced Development Center, Systems & Electronics Inc.

    201 Evans Lane, Saint Louis, MO 63121-1126, U.S.A.

Abstract—Incident detection involves both the collection and analysis of traffic data. In this paper, we take a look at the various traffic flow sensing technologies, and discuss the effects that the environment has on each. We provide recommendations on the selection of sensors, and propose a mix of wide-area and single-lane sensors to ensure reliable performance. We touch upon the issue of sensor accuracy and identify the increased use of neural networks and fuzzy logic for incident detection.

Specifically, this paper addresses a novel approach that uses measurements from a single station to detect anomalies in traffic flow. Anomalies are ascertained from deviations from the expected norms of traffic patterns calibrated at each individual station.

We use an extension of the McMaster incident detection algorithm as a baseline to detect traffic anomalies. The extensions allow the automatic field calibration of the sensor.

The paper discusses the development of a novel time-indexed anomaly detection algorithm. We establish norms as a time dependent function for each station by integrating past "normal" traffic patterns for a given time period. Time indexing will include time of day, day of week, and season. Initial calibration will take place over the prior few weeks. Online background calibration continues after initial calibration to continually tune and build the global seasonal time index. We end with a discussion of fuzzy-neural implementations. © 1998 Elsevier Science Ltd. All rights reserved.

Keywords—Algorithm, Incident, Performance, Sensor, Traffic.

    1. SENSOR APPLICATIONS AND TECHNOLOGIES

Drew [1] defines traffic engineering as "the science of measuring traffic characteristics and applying this information to the design and operation of traffic systems". Early traffic flow "measurements" were rather subjective and involved visual assessments by police patrols and helicopter pilots. From this early experience came the realization that efficient system operation demanded objective assessments based on quantitative metrics.

Surveillance of roads via CCTV (Closed Circuit Television) was first implemented in the U.S. in Detroit, Michigan, in 1961. Four years later, that system included traffic detection and measurement, variable speed signs, and lane and ramp control. Other projects of the time included the Chicago Area Expressway Surveillance Project (1961), the Port of New York Authority's Holland Tunnel (1963), and the Gulf Freeway Surveillance Project (1963). One of the traffic control schemes featured on the Gulf Freeway was a traffic merge control system that used a sonic sensor mounted above the road, in a side-fired configuration, and inductive loops embedded in the road. The sonic sensor measured vehicle speed and gap upstream of the merge area; the inductive loops detected vehicle presence and measured queue length on the frontage road and upstream of the merge area. Today there is a myriad of traffic control and management systems operating in the U.S. and around the world, with many more in the works. One thing remains constant in all of them: the need for sensors that provide quantitative measurements of traffic flow.



There are two major areas where sensors are used in traffic engineering: (1) highways, and (2) road intersections. The objective in both cases is to maximize vehicle flow; however, the operational requirements for each are dramatically different. On a highway application the traffic parameters of interest are: volume, speed, density, lane occupancy, travel time, vehicle class, and vehicle headway. Typical traffic volumes on a five-lane highway in North America are 100,000 vehicles per day. Traffic speeds range from 30 to 85 miles per hour, though posted limits are 40 to 65 in most of the U.S. On an intersection application the traffic parameters of interest are: vehicle count (volume), speed, queue length, delay, gap, headway, and turning movement counts (U, left, through, and right). Traffic volumes in this case are nowhere near those of highways, and speeds are typically 25 to 45 miles per hour. The two parameters that play the most important role are queue length and turning movements, as they dictate the cycling of a traffic light.

We can further categorize vehicle sensing in each of these areas by the permanency of the installation as temporary or permanent. Temporary sensing applies primarily to equipment that is deployed to perform detailed traffic studies in order to ascertain the need for expanding the road infrastructure or installing a new traffic signal. The equipment is always located above ground, and the primary technology involved is pneumatic. A rubber hose is secured across the lane of interest and a counter placed on the side of the road; the counter responds to pressure variations when a tire comes in contact with the hose, thus counting vehicle axles. In recent years a self-contained, electromagnetic sensor roughly the size of a pocket calculator has become quite popular because of its simple installation. In addition to counting vehicles, this sensor also measures vehicle length, which can be used to classify the vehicle.

In the case of intersections, permanent sensing is used exclusively to establish vehicle presence and cycle the traffic light. On highways, this type of sensing is used to determine peaks and valleys in traffic volume for the purpose of setting road maintenance schedules, and to keep an eye on the traffic volume demand. For this reason, the sensors are sparsely located; Figure 1 shows the approximate location of permanent sensors on the interstate highway system in St. Louis County, Missouri. The traditional sensor technology used in permanent installations is the inductive loop. The sensor is embedded into the road, and the sensor electronics sit in a large enclosure on the side of the road. Data is collected for an extended period of time, e.g., two weeks, and averaged over predefined time intervals, e.g., 60 minutes. Weekly, monthly, and annual reports are generated from the collected data. A typical diurnal traffic volume profile is shown in Figure 2. While this sensing modality is prevalent in all major metropolitan areas, it is hardly used in real-time except for major cities with extremely congested roads like Chicago, Los Angeles, and Toronto. In such cases, the data collection interval is typically 30 seconds, and the sensors are placed at 1/3 to 1/2 mile intervals because the primary objective is prompt incident detection. Detection is achieved primarily through the use of the CALTRANS or McMaster algorithms, which will be discussed later in this paper. Panda has reported the use of the AIDA (Autoscope Incident Detection Algorithm) for incident detection on I-35W in Minneapolis, Minnesota, and in the Wattwil tunnel in Switzerland [2].

In incident detection applications, the inductive loop is complemented with strategically placed surveillance cameras that relay video to the Traffic Management Center. They are used to confirm the existence of an incident, and assess its severity. This strategy is actively being pursued by a variety of cities and metropolitan areas [3-5]. Table 1 summarizes the use of sensors in traffic engineering according to the deployment setting and type of installation. The remainder of this paper focuses on the application of traffic sensors to incident detection, i.e., permanent installations in a highway setting.

We have already mentioned some types of traffic sensors. Table 2 summarizes the breadth of sensing technologies applicable to incident detection. The table describes the mode of operation of the sensor: active or passive. An active sensor is one that senses its environment by processing the echo of a known signal transmitted by it. A passive sensor is one that processes the natural radiation present in the environment. The inductive loop has traditionally provided vehicle count and lane occupancy data, and when two loops are used in tandem, speed data can be generated.


Figure 1. Permanent Automatic Traffic Recording (ATR) stations are sparsely distributed across a metropolitan area.

Figure 2. A distinct diurnal traffic volume signature can be associated with every ATR station.

Table 1. Application of sensors to traffic engineering, by deployment setting (highway or intersection) and type of installation (temporary or permanent).

These three parameters have influenced the development of incident detection algorithms to date. As new generations of sensors appear on the market, more parameters are being measured or derived.


Table 3 details the traffic parameters available from commercially available sensor technologies. With the exception of the inductive loop and perhaps the magnetometer, all are installed off-road and above the traffic flow, and are thus affected by the weather.

Table 2. Detector technologies used in traffic sensors.

Technology       Mode     Responds To
Inductive loop   Active   Ferrous mass
Magnetometer     Passive  Ferrous mass
Infrared         Passive  Contrast in thermal radiation (wavelength: 6-14 µm)
Infrared         Active   Reflected signal (wavelength: 0.8-1.1 µm)
Acoustic         Passive  Sound
Ultrasonic       Active   Reflected sound
CCD camera       Passive  Contrast in visible light
Radar-Doppler    Active   Frequency shift of reflected signal, i.e., motion sensor (frequency: 10.24 GHz)
Radar-FMCW       Active   Frequency of reflected signal
Laser-Pulsed     Active   Reflected signal (InGaAs diode laser)

Table 3. Traffic flow parameters measured or derived by the various sensor technologies. (Rows: the ten technologies of Table 2; columns: volume, speed, class, occupancy, density, and headway. All ten technologies report volume; the inductive loop yields speed only when two sensors are used in tandem.)

The graphs in Figure 3 show the effect of particular weather conditions on a signal as a function of its frequency; the data is adapted from The Infrared & Electro-Optical Systems Handbook [6]. Notice that rainfall tends to affect all wavelengths equally except millimeter waves, where the attenuation varies somewhat linearly with frequency. Ground fog affects the infrared spectrum less than the visible light spectrum, but more than the millimeter wave spectrum. The impact on millimeter waves is once again almost linear with frequency. Signal attenuation in infrared sensors is due primarily to the water vapor content in the air, and to rain, snow, dust, fog, ocean spray, and smoke [7]. Because they operate over a large bandwidth, molecular absorption is not a concern. Lasers, on the other hand, are impacted by molecular absorption and small particles present in the atmosphere, e.g., snow, dust, and smoke [8]. When comparing sensor technologies based on the effects of atmospheric attenuation, one must keep in mind that with the exception of road surveillance using CCTV, which involves distances of around 1.6 km, most sensor applications comprise a one-way distance of 10-50 meters, so that signal attenuation is small.

Whereas sensors that operate in the electromagnetic spectrum are affected by particle scattering and molecular absorption, acoustic and ultrasound sensors are affected by temperature and wind. Sound (acoustic) waves are compressional disturbances that propagate through a solid or fluid medium. In the case of air at ambient pressure, the speed of sound depends primarily on


Figure 3. The impact of weather on a sensor depends on the kind of weather and the portion of the electromagnetic spectrum involved. (Curves shown for heavy rain, 25 mm/hr, and drizzle, 0.25 mm/hr; horizontal axis: wavelength in mm.)

temperature [9,10]. At 0°C the speed is about 332 m/s, and at 20°C it is about 344 m/s. These variations impact the performance of the sensor by introducing a temperature dependent time delay in the vehicle detection process. For an active sensor mounted 6 meters above the road, a 20°C change (summer to winter) translates into a roughly one millisecond change in the two-way signal travel time. A vehicle on the road below moving at 100 kph would travel approximately 28 millimeters in that amount of time. Wind, however, is perhaps a more obvious offender than temperature because, driven by differences in atmospheric pressure, it tends to direct the sought-after sound waves away from the sensor.
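The arithmetic behind these timing figures can be checked directly. Below is a minimal sketch using only the values quoted above (332 m/s at 0°C, 344 m/s at 20°C, a 6 meter mounting height, and a vehicle at 100 kph); the linear interpolation between the two quoted sound speeds is our own simplifying assumption.

```python
def speed_of_sound(temp_c):
    """Speed of sound in air (m/s), linearly interpolated between the
    text's two data points: 332 m/s at 0 C and 344 m/s at 20 C."""
    return 332.0 + (344.0 - 332.0) * temp_c / 20.0

round_trip_m = 2.0 * 6.0                      # two-way path for a 6 m mount
t_cold = round_trip_m / speed_of_sound(0.0)   # ~36.1 ms
t_warm = round_trip_m / speed_of_sound(20.0)  # ~34.9 ms
shift_ms = (t_cold - t_warm) * 1e3            # ~1.3 ms seasonal shift

vehicle_mps = 100.0 * 1000.0 / 3600.0         # 100 kph expressed in m/s
travel_mm = vehicle_mps * shift_ms            # (m/s)*(ms) = mm; ~28 mm per ms
print(f"delay shift: {shift_ms:.2f} ms, vehicle travel: {travel_mm:.0f} mm")
```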

Traffic sensors can also be grouped according to the road surface coverage they provide: single lane and wide area. Single lane sensors, as the name implies, have a field-of-view that covers only one lane of traffic. They are typified by the likes of the AGD 300 Doppler radar sensor (AGD Systems Ltd, UK), the Autosense II laser sensor (Schwartz Electro-Optics Inc., U.S.), and the Model IR-222 infrared sensor (ASIM Engineering Ltd, Switzerland). Wide area sensors, on the other hand, have a field-of-view that covers several lanes, and they include the RTMS radar sensor in a side-fire configuration (EIS Electronic Integrated Systems Inc., Canada), the BEATRICS radar sensor (Thomson-CSF, France), and the INSIGHT™ CCD sensor (Systems & Electronics Inc., U.S.).

A variety of informative articles on traffic sensors appear regularly in Traffic Technology International magazine. The interested reader is referred to the 1996 annual issue, which contains several articles dealing with various radar, infrared, and video sensors [11-18]. Given all of the alternatives shown earlier, the obvious question to ask is: which one is the best?

    Several studies have been funded by the U.S. Department of Transportation, Federal Highway Administration, to answer this question. Three such studies are the one by Hughes Aircraft Co. [19], the one by Bolt, Beranek and Newman (BBN) [20], and the one by the Minnesota Department of Transportation (MnDOT) [21,22]. The Hughes Aircraft Co. study evaluated


various commercial, overhead, traffic sensors in diverse geographic locations, and it addressed both highway and road intersection applications. Table 4 lists the test locations, and describes their geographic characteristics. Their goal was to see where each traffic sensor technology would best serve the U.S. ITS needs. The BBN study had more of an applied research orientation. Its objective was the evaluation of several overhead vehicle sensing technologies from the point of view of measurement accuracy, sensor cost, and communication bandwidth and computational throughput requirements. All the sensors and algorithms used were designed and implemented by BBN. The MnDOT study, like the Hughes study, deals with commercial, overhead, traffic sensors. It compares their performance to that of the traditional inductive loop sensor. The study, however, goes beyond just traffic parameter measurement and addresses sensor cost, installation, calibration, and maintenance.

Table 4. Characteristics of the operational environments addressed by the Hughes Aircraft sensor study [23].

Location                Mean       Climate            Relative      Jan Temp  Jul Temp  Avg. Annual    Avg. Annual
                        Elevation                     Humidity (%)  (°F)      (°F)      Snowfall (in)  Rainfall (in)
Phoenix, Arizona        1,117 ft   Arid               18-49         35-64     75-105    0              7
Tucson, Arizona         2,584 ft   Arid               16-40         37-63     74-98     1              11
Orlando, Florida        108 ft     Subtropical        47-61         50-71     73-93     0              51
Minneapolis, Minnesota  834 ft     Humid Continental  54-72         2-22      61-84     45             24

Table 5. Environmental niches for the various sensor technologies. (Rows: the ten sensor technologies of Table 2; columns, one per environmental condition: clear day, clear night, cold day, hot day, light wind, high wind, light rain, hard rain, light snow, hard snow, fog, smoke, and weather monitor.)


    Figure 4. View of uncongested traffic flow provided by a CCD sensor.

    Figure 5. View of rush-hour congested traffic flow provided by a CCD sensor.

All of these studies certainly help to increase the awareness and understanding of sensor technology by the traffic engineering community. However, we believe that their somewhat noncommittal and inconclusive nature may lead the casual reader astray, and perhaps even discredit valuable sensor technologies. They all implicitly search for the one technology that will solve all needs and replace the inductive loop.

In selecting a traffic sensor, the engineer must have a clear understanding of the particulars of the application for which a sensor is sought, and second, he/she must also understand the environment in which the sensor will operate. Earlier in the paper, we addressed some of the key environmental issues that affect sensor performance. Based on these, some guidelines can be established for the selection of sensors for use in incident detection. Table 5 shows a variety of environmental situations and the sensor technologies that would serve them best.

Where obtaining a visual reading of the road is of interest, the two technologies of choice are passive infrared and CCD sensors. The added value of this capability is demonstrated in Figures 4 and 5. The first one shows a view of uncongested traffic flow; the latter shows a view of congested, rush-hour traffic flow. Areas frequently subjected to fog, such as coastal cities, would do best with an infrared sensor. Localities that experience large changes in temperature, high winds, or wind gusts should refrain from using acoustic (or ultrasonic) sensors. Radar sensors do well in most localities; the only drawback to their use is perhaps the cumulative effects of low level radiation on the driving population.

Places where snowfall is frequent should favor the use of wide-area sensors. In these areas the snowfall temporarily wipes out the road lane markings [22]. This causes drivers to carve out lanes which tend to straddle the real lanes. Single lane sensors like the inductive loop and some radar will fail to provide an accurate reading of the traffic flow under such conditions. Wide area sensors, on the other hand, can be equipped to sense where the traffic flow is occurring, and automatically adjust their traffic detection zones [24].

Where a visual confirmation of an incident is desired, the use of a CCD sensor is recommended over an infrared (IR) sensor. The CCD sensor provides a visible light image that is easily understood by the viewer. The IR sensor, on the other hand, provides an image that is based on temperature variations across the scene. This is much harder to analyze visually because it is not the way humans see their surroundings. The view from a CCD sensor mounted on I-70, in St. Louis, is shown in Figure 6. Even the most casual viewer can see that an accident has occurred involving the two leftmost lanes. The CCD sensor also lends itself very well to making assessments of the impact of weather on the road infrastructure. This particular use was brought to our attention by the Ministry of Transportation of Ontario, Canada. In its ultimate configuration, a CCD sensor could report not only the occurrence of an incident, but also the condition of the road.

    Figure 6. View of incident related traffic flow provided by a CCD sensor.

Our experience with sensors in military applications has been that in general no one sensor ever meets all the operational requirements of a given application. Instead, a suite of sensors is usually required. Every indication we have seen so far points to traffic management as being no different. To ensure reliable operation, a system-level concept of how to mix and match sensors needs to be developed, and some degree of sensor redundancy is required. Currently, for incident detection, sensors are spaced 1/3 to 1/2 mile apart. This allows prompt detection of the shock wave that arises when an incident takes place. We envision the best traffic sensor configuration as one where the 1/2 mile spacing is maintained, wide-area sensors are used at every station, and every other sensor station is equipped with single-lane sensors. Such a configuration retains quick detection of incidents, allows automatic reconfiguration of the detection zones, and provides for graceful degradation of the system in case of sensor failures or inclement weather.


Besides the operational environment, there are also considerations of public safety and environmental impact that need to be addressed when choosing a sensor. One that has already been alluded to is low level radiation. This issue has plagued electric utilities for decades, especially where high voltage electric power transmission lines exist. More recently, it has appeared in cellular telephone operations, where brain tumors have been blamed on the close proximity of the transmitter to the brain. In the case of traffic management, radar sensors irradiate everything on the road. So far, their deployment has been limited, so cumulative emission levels have been low. But, as the deployment density increases, to perhaps every 1/2 mile, so will the background radiation level. The impact that this will have on people and animals, and on the communications infrastructure, whether real or perceived, is yet to be established.

    1.1. Sensor Performance

A topic to which the traffic engineering community has devoted considerable attention is that of sensor performance. Although a thorough system performance analysis would normally address the reliability, availability, and maintainability of the system, in the case of traffic sensors the focus has been primarily on measurement accuracy.

But measurement accuracy should not be the only metric used for selecting a sensor. The experiences related by those involved in sensor performance studies give us considerable insight into some of the other issues which need to be examined. Consider the study by MnDOT. The report of this rather comprehensive study helps us identify at least two critical sensor operation issues:

(1) proper sensor setup and calibration, and
(2) driver behavior.

It does not matter if a sensor is capable of generating 100% accurate measurements; if it is not properly installed and calibrated, it will never achieve its full potential. And even if it is properly configured, since it has no understanding of the process being monitored, it can be misled by unusual driver behavior. One should choose a sensor that is sufficiently accurate for the application, easy to install and configure, and adaptable to changes in the environment.

Attention should also be paid to how sensor performance is evaluated. Three observations can be made about the MnDOT study:

(1) it used actual traffic conditions,
(2) it used inductive loops as the performance baseline, and
(3) it collocated highway and road intersection sensors.

Let us look briefly at each one of these issues.

It would seem that the obvious thing to do when assessing the performance of a sensor is to subject it to real traffic conditions. The problem with this approach is that it lacks control over the traffic situations presented to the sensor; it is not a good experimental approach. The correct way is to present to the sensor stimuli that have been ground-truthed in terms of the number and type of vehicles. The stimuli should also be catalogued by weather or traffic conditions, e.g., clear day, rainy night, or rush-hour traffic. The output of the sensor can then be meaningfully compared to the events which took place on the road.

    This brings up the issue of recording the stimuli. The people involved in the BBN study used Digital Audio Tape (DAT) and video tape to record the raw output of the nonimaging sensors and maintain a visual record of events. Their objective was not to generate stimuli for the sensors, but rather to record data for post processing in the lab. The approach nevertheless sets a good example. The recorded signals can be played back to the sensor using a suitable transducer, much like music emanating from a speaker. Any temperature and humidity effects present at the time the recording was made would be intrinsic to the stimuli. The physical effects of the environment on the sensor hardware should be evaluated using an environmental chamber.


Ground-truthing data is a labor intensive exercise. As part of our own sensor evaluation efforts, we collected some 50 hours of traffic video during 1995-1996. Our experience ground-truthing this data has been that it takes about one hour of labor for every three to five minutes of video. Long term sensor evaluation in this manner is clearly infeasible. That is perhaps the reason why the inductive loop has emerged as a calibration standard. The problem with baselining performance against an inductive loop is that it tacitly fails to recognize that loops are themselves inaccurate, and very much subject to the environment. When a measurement disagreement arises in the data collected, it is impossible to tell whether it is the loop or the sensor-under-test that is incorrect. For this reason, we feel strongly that long-term field tests are suitable only to ascertain the resiliency of a particular piece of hardware to the environment, and are not the way to evaluate accuracy.

    The performance of a traffic sensor depends much more on the raw data processing than on the capabilities of the detector itself. This is evidenced in the work by BBN where they adjusted the sensor algorithms until meaningful performance was obtained. Presenting a battery of test cases, i.e., recorded stimuli, to the sensor is a much better way of establishing performance because the ground truth is known. It also allows direct comparison of two sensors since they would both be subjected to exactly the same events. In our work, we use catalogued video sequences of some 5 minutes in duration. The output of the sensor is then processed by a data association program that quantifies the sensor performance against the ground truth.
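As an illustration of what such a data association step involves, the following is a minimal sketch (not the authors' program) that greedily matches time-stamped sensor detections to ground-truthed vehicle passage times within a tolerance, then tallies detections, missed vehicles, and false alarms; all numbers in the example are invented.

```python
def associate(sensor_times, truth_times, tol_s=1.0):
    """Match each ground-truth vehicle passage (seconds into the sequence)
    to at most one sensor detection within tol_s seconds.
    Returns (detections, missed, false_alarms)."""
    unmatched = sorted(sensor_times)          # detections not yet claimed
    hits = 0
    for t in sorted(truth_times):
        best = min(unmatched, key=lambda s: abs(s - t), default=None)
        if best is not None and abs(best - t) <= tol_s:
            hits += 1
            unmatched.remove(best)            # each detection is used once
    return hits, len(truth_times) - hits, len(unmatched)

# Five ground-truthed passages vs. four sensor reports in a short sequence.
truth = [1.0, 3.2, 7.5, 9.9, 12.4]
sensed = [1.1, 3.0, 10.4, 15.0]
print(associate(sensed, truth))               # (3, 2, 1)
```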

The final point we want to make about interpreting sensor evaluation reports pertains to the sensor design itself. As mentioned at the beginning of this paper, the conditions that prevail on a highway are not the same as those at a road intersection. The traffic parameters of interest are generally different too. When a sensor designed for one type of traffic application is subjected to the rigors of another, except for a degree of luck, it will show substandard performance. A sensor designed for collector road speeds of 5 to 50 mph will not measure highway speeds of 35 to 90 mph very accurately. One meant to sense approaching or departing traffic will not necessarily work well in a transverse configuration. A sensor should only be tested against the operational conditions for which it was designed. Extreme care should be exercised when extrapolating test results.

    The unfortunate consequence of improper sensor evaluation, or test result analysis, is that the potential end-user can be unwittingly biased toward one particular sensor technology, forever discrediting other viable and potentially better technologies. But let us assume that the correct approach is followed to establish the measurement accuracy of traffic sensors. The next question to ask is, how much accuracy is really needed?

In addressing this issue for traffic management and incident detection applications, we have found that for many traffic engineers the subjective metric of Level Of Service is deemed sufficient [25]. This metric is rather loose and does not require high sensor accuracy. It classifies congestion into six classes, A through F, based on a somewhat subjective visual evaluation of the traffic on the road. To understand where these engineers are coming from, consider the overlay of traffic volume data sets shown in Figure 7. The data is for the same day of the week, Tuesday, so it represents the typical work day. And, it was collected by a permanent recording station on I-70 in St. Louis, Missouri, that is equipped with inductive loops. Note that a consistent traffic pattern evidences itself at this road location. The mean and the standard deviation for this data are shown in Figure 8. Small deviations from the mean are likely due to trip route choices made by some drivers. This accounts for the slight differences in the daily volumes measured. Large and prolonged deviations, on the other hand, indicate an incident, be it planned like a lane closure, or unplanned like a stalled car. The data shown in Figure 9 serves as an example.
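The statistic these engineers rely on is easy to make concrete. The sketch below builds a per-hour mean and standard deviation from several same-weekday volume profiles (as in Figures 7 and 8) and flags hours that deviate strongly (as in Figure 9); the three-sigma threshold is an illustrative choice of ours, not a value from the paper.

```python
import statistics

def build_norm(weekly_profiles):
    """weekly_profiles: several 24-element hourly volume lists recorded on
    the same day of week (e.g., four consecutive Tuesdays).
    Returns the per-hour (mean, std) across those weeks."""
    mean = [statistics.mean(col) for col in zip(*weekly_profiles)]
    std = [statistics.stdev(col) for col in zip(*weekly_profiles)]
    return mean, std

def flag_anomalies(todays_profile, mean, std, k=3.0):
    """Hours whose volume deviates from the norm by more than k sigma."""
    return [hour for hour, (v, m, s) in
            enumerate(zip(todays_profile, mean, std)) if abs(v - m) > k * s]
```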

What seems to be of interest to these engineers is not so much the measured value of traffic flow but rather the trend in traffic flow. This is something that lends itself well to the application of neural networks and fuzzy control [26]. Lee and Krammes have reported the use of fuzzy logic for incident detection in diamond interchanges [27]. Hill has also reported on the application of neural networks to incident detection on the Long Island Expressway [24].


Figure 7. The traffic volume pattern at a given highway location is fairly consistent from week to week.

Figure 8. The average or median traffic volume is a good indicator of the normalcy of traffic flow at a particular highway location.

We ourselves are collaborating with the Center for Optimization and Semantic Control, at Washington University in St. Louis, to develop system-wide, neural network and fuzzy approaches to highway traffic parameter estimation [28-31].

Figure 9. A traffic incident manifests itself as a significant deviation from the traffic volume pattern (dashed curve: average; solid curve: 07-Mar).

Later in this paper, we discuss some of the approaches we are following to estimate traffic volume and speed at a road site, and from these do incident detection.

    The point to be made about sensor accuracy is that if a sensor can provide us with a consistent measure of the traffic flow, not necessarily an exact one, then it is a good sensor for traffic management applications. Of course, a consistent and accurate sensor is the ultimate bonanza. But the data so far reported by MnDOT shows that most sensors lack consistency, and it always seems related to weather conditions.

    2. INCIDENTS

The single most important problem in urban freeway traffic operations is the timely detection of unscheduled incidents. Humans most readily observe incidents, and this is, perhaps, the most compelling reason for the recent proliferation of surveillance cameras in urban areas. However, the work force necessary to completely survey urban traffic rapidly becomes cost prohibitive. This has led the industry to seek automatic incident detection mechanisms driven by data derived from measurements made at stations along the freeway. In place of extensive coverage requiring a large dedicated staff continuously watching traffic monitors, many departments of transportation rely on a relatively small number of motorist-aid vehicles. These vehicles circle critical highways around the urban area. As in most cases, this approach has tradeoffs. The obvious advantage is that the motorist-aid vehicle is there on the spot, and for simple incidents (such as out-of-fuel cases) can render immediate assistance and clear the incident. In other cases, the motorist-aid can call for appropriate assistance. The disadvantage of such a system relates to the coverage required. The probability of the motorist-aid vehicle being in the right place at the right time to render the assistance is a function of the number of vehicles and the road miles required to be covered.

The most important incidents are those that result in stopped vehicles (either by breakdown, out of fuel, or by accident). The rapid detection of these situations and the early removal of the offending vehicles is most critical [32]. A highway automatic incident detection system is imperative. Coupling this system to a means to automatically relay imagery of a detected incident to a traffic control center significantly reduces the staff necessary to monitor urban traffic.

The key to the acceptance of such a system is its ability to detect incidents reliably, rapidly, and with a low false alarm rate. The authors have been working on such an automatic incident detection system as an adjunct to our INSIGHT™ traffic management sensor. This system uses the basic derived highway attributes per lane of speed, flow, and occupancy to ascertain the presence of traffic pattern anomalies, recognizing that traffic patterns are not stationary. The drive to perform automatic incident detection from derived highway data is not new, and it is worthwhile to reflect on the work of past researchers [33-41]. We note their successes and rationalize when and why their algorithms work and break down.

Cook and Cleveland [33] record that traffic incidents occur predictably in frequency, once every 20,000 to 30,000 vehicle-miles (30,000-50,000 vehicle-km) on heavily traveled urban freeways, but unpredictably by time, location, and impact. The number seems inordinately high, because during rush hour in urban America, it is not uncommon for a five-lane highway to peak daily at 10,000 cars per hour. This would suggest a daily traffic incident every two or three miles of urban highway. There may be overstatement of incident frequency, but clearly, unscheduled incidents on urban highways dominate the course of traffic delays. These delays dominate over the more predictable delays due to traffic congestion, or scheduled incidents due to road repair, etc. The predictable occurrences of these latter traffic delays make them easier to manage. Therefore, the focus is on the automatic detection of unscheduled incidents.

Freeway incident management concerns foremost the detection of traffic flow anomalies and the identification of specific traffic flow patterns that belong to the class of incidents. Upon incident recognition, a full incident management system has a set of strategies and tactics. Implementation may be through some form of communication to the highway users, enabling timely alternative route planning, and early removal of the congestion-forming incident by such mechanisms as a motorist-aid subsystem.

Some clarification of terms is necessary. A traffic flow anomaly is an unexpected flow pattern for a given location on a given highway (modified by the weather condition) for a given day and time of day. An incident, on the other hand, is any nonrecurrent event which causes reduction of roadway capacity or abnormal increase in demand [42, p. 1]. Incidents may be predictable or unpredictable as shown in Table 6 [42, p. 3]. In Figure 10, the pie chart gives the distribution of incident types [42, p. 3]. The chart illustrates that the majority of incidents are minor, such as flat tires, overheating, and out of gas. Minor incidents will, in general, only result in a vehicle on the hard shoulder. However, of the total delay caused by incidents, 65% is attributable to this minor category (see Table 7 and [42, p. 3]). Characteristically, minor incidents last less than half an hour and reduce the capacity of a three-lane highway by typically 25%-26%. Accidents (which constitute only 15% of all incidents) fall into the major category. Major incidents contribute 35% of the overall incident-caused delays, constituting severe capacity reduction according to how many lanes are blocked or whether there are accompanying injuries associated with the accident. Table 8 [42, p. 1] provides an example of the capacity reduction associated with different types of incidents on the three-lane Gulf Freeway in Houston.

Table 6. Incidents may be predictable or unpredictable as incident types.

Predictable:
- Maintenance activities
- Construction
- Special events (ball games, fairs, parades, Olympics, concerts)

Unpredictable:
- Accident
- Stalled vehicle
- Weather (rain, ice, snow, fog)
- Bridge or roadway collapse
- Spilled load


Figure 10. Incident type distribution (percent). The largest segments are flat tire (34%), accident (15%), overheat (8%), and abandoned vehicle (4%); mechanical, out of gas, and other make up the remainder.

Table 7. Incident magnitudes.

Characteristic                                 Minor               Major
Duration                                       < 1/2 hour          > 1/2 hour
Blockage                                       Shoulder area only  One or more traveled lanes
Contribution to overall incident-caused delay  65%                 35%

Table 8. Typical capacity reduction.

Incident Type                          Capacity Reduction (percent)
Normal flow (three lanes)              -
Stall (one lane blocked)               48
Noninjury accident (one lane blocked)  50
Accident (two lanes blocked)           79
Accident on shoulder                   26

The distributions of the times of occurrence vary on a daily cycle (as shown in Figure 11, see [42, p. 4], which is an example from a Toronto study). The pattern relates, not surprisingly, to peak usage, and seems to reflect the impact of driver fatigue. Note the morning between 7:00 and 9:00 A.M., when the volume is high, but presumably people are going to work and they are fresh after a night's rest. Compare this with the evening rush hour, where apparently they are less alert. That tiredness factor, coupled with high volume, creates a higher proportion of incidents.

One study of incidents on a four- to five-mile section of Highway 401 in Toronto between June 8 and September 4, 1987 (as given in Table 9) [42, p. 2] shows about 130 incidents a week. The majority of these are of the shoulder-blocking variety, and less than 2% are severe enough to block two lanes or more. Even though this is a small percentage, the fact is that these will occur on average twice a week, will be difficult and time consuming to clear, and will represent a large traffic delay.

Not only are the severity of the incident and the time of an incident important, but the time to recognize that an incident has occurred is also critical. Luckily, nature works for us in this instance. A direct relation exists between the severity of a situation and the severity of the traffic flow anomaly. For example, an accident resulting in a two-lane highway closure has a correspondingly dramatic change in capacity. This implies that it is easier, and therefore one could surmise that it would be quicker, to detect severe anomalies and incidents. Speed of recognition is of paramount importance. Consider a three-lane highway during rush hour with an approximate flow of 6,000 vehicles per hour. Cars arrive at a station on the road at an average of 100 cars per minute.


Figure 11. Times of incident occurrences (by hour ending). As might be expected, incidents exhibit similar peaking characteristics to traffic volumes, and the Toronto study reported an hourly breakdown of incidents.

Table 9. Incidents on a five-mile section of Highway 401, Toronto, June 8-September 4, 1987. (Rows by incident severity: shoulder blocking, one lane blocking, two lanes blocking, more than two lanes blocking, and total; columns by incident type: reportable accidents, nonreportable accidents, and noncollision incidents, each given as number and percent, together with the total number per week.)

A reduction in throughput capacity of 80% caused by a two-lane closure incident will allow servicing only 20 cars per minute. This results in a queue building at around 80 cars per minute. A 30 second delay in detecting such a problem will impact 40 vehicles. The time to clear or reroute traffic and to clear the incident determines the secondary impact of the incident.
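The queueing arithmetic in this example is worth making explicit:

```python
# Rush-hour example from the text: 6,000 vehicles/hour on a three-lane highway.
arrival_per_min = 6000 / 60                     # 100 cars arrive per minute
capacity_drop = 0.80                            # two-lane closure: ~80% capacity lost
service_per_min = arrival_per_min * (1 - capacity_drop)   # 20 cars served per minute
queue_growth_per_min = arrival_per_min - service_per_min  # queue grows 80 cars/min
impacted_in_30_s = queue_growth_per_min * 0.5             # 40 vehicles affected
print(queue_growth_per_min, impacted_in_30_s)             # 80.0 40.0
```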

    A hard shoulder problem does not severely impact the capacity, and it is a more difficult task to determine the anomaly in flow. It could take on the order of four times longer to detect and four times longer to clear this incident before impacting traffic at the same level as a severe incident. The time to clear an incident is critical. Delays increase geometrically with the time it takes to clear an incident.

2.1. Automatic Detection: Critical Factors and Measures

The first step in the process is determining the presence of an anomaly, and then the onset and termination of congestion, and whether an incident is the cause of that congestion. All of the algorithms rely on the basic traffic variables of flow rate and occupancy. One may presume this reflects a preference for flow rate and occupancy as the measurements most reliably obtained from single sensors. The preference for these variables will probably diminish in the future as today's traffic sensors provide, in addition, reliable measurements of speed and other attributes such as the homogeneity of the traffic flow.


Eight key factors [35,36,40,41], as summarized in Table 10, are responsible to a first order for the performance of all incident algorithms. Table 10 needs a little further explanation. A key factor is the operating state of the highway in relationship to its total capacity. As discussed earlier, it is more difficult to determine that an anomaly has occurred directly from sensor measurements on a highway that is operating well below capacity. Fortunately, it is considerably less critical to make this determination under low capacity conditions than at the other extreme. Variation in operating conditions is a daily event, with peaks nominally related to rush hour. For example, Figure 12 represents a monthly flow pattern for westbound traffic on Highway 70 into St. Louis. Observe that there is a marked similarity (and even predictability) on a work day basis. There is an even higher consistency Monday to Monday, Tuesday to Tuesday, etc., with the weekend traffic showing greater variability, however, with the absence of a peak associated with rush hour. This daily and hourly variation in traffic precludes any simple single thresholding algorithm that does not take cognizance of the time varying nature of the underlying traffic flow pattern. In an almost parallel manner, the duration of the incident as well as its location relative to the measuring station is significant. Any incident that is short (say, less than one minute) in nature will be difficult to detect (in all but the densest of traffic), especially if it is some distance from the measuring station. Furthermore, by the time of detection, the incident will have cleared itself, and requires no action. This is obviously the extreme case. At the other end of the spectrum, an incident that exists for upwards of an hour will have an accumulation effect on traffic that will be evident in all but the lightest of traffic conditions.

Table 10. Factors affecting all algorithms.

I. Operating conditions
   A. Heavy
   B. Medium
   C. Light
   D. At capacity
   E. Well below capacity
II. Duration of the incident
III. Geometric factors
   A. Grade
   B. Lane drops
   C. Ramps
IV. Environmental
   A. Dry
   B. Wet
   C. Snow
   D. Ice
   E. Fog
V. Severity of the incident
VI. Detector spacing
VII. Location of the incident relative to detector station
VIII. Heterogeneity of the vehicle fleet

Geometric factors will influence incident detection algorithms. The grade of the road, lane drops, and ramps will all have a tendency to alter the traffic patterns such that uncongested traffic might mimic the pattern of an incident occurrence in the traffic flow. Correspondingly, the environment, particularly the road surface condition and weather (in association with or independent of the road surface), will impact traffic flow patterns independent of the presence of an incident [39,43]. Heavy snowfalls will reduce average vehicle speed, as will an icy road surface.


    Figure 12. I-70 west of Lucas and Hunt Station 605, eastbound.

Rain on a road surface after a long dry spell (where the oil residue has not been washed from the road surface for some time) also reduces average vehicle speeds. These reductions are independent of any incident.

As discussed earlier, the severity of the incident will impact the capability of the algorithm. An incident detection algorithm optimized to detect incidents causing two-lane highway closures will have few false alarms, but will have a tendency to miss incidents that result from a vehicle stopped on the shoulder. Correspondingly, an algorithm optimized to detect even minor incidents on the hard shoulder will have a tendency to yield a higher false alarm rate.

For algorithms that rely on comparison of reports from measurements made at two or more spaced detectors, the spacing impacts the incident algorithm performance. The wider the spacing, the less likely the underlying assumption that the patterns came from essentially the same flow conditions is to hold true. First-order factors affecting the performance of these incident algorithms include the presence of intermediate ramps, merged highways, and lane drops. Finally, the heterogeneity of the vehicle fleet will impact the performance of the incident detection algorithms. Most incident algorithms assume a high proportion of cars that dictate the traffic flow pattern. A disproportionate percentage of large trucks will have a tendency to slow traffic, reduce arrival rate, and increase headway. This can alter the flow pattern sufficiently to mimic an incident flow pattern.

The above discussion illustrates the dimensionality of the problem faced when trying to ascertain from traffic patterns the existence of an incident. It is not sufficient to detect the presence of an incident; one must also be able to locate the incident, to some degree of accuracy, relative to a given sensor. Summarizing, there are basically two types of detection algorithms, viz.,

(1) those that rely only on the measurement from one station, and
(2) those that use a comparison method of the readings from two stations spatially separated along the highway.

The latter are known as comparative algorithms and generally expect

(1) an increase in occupancy upstream of an incident (and, where speed is used, a drop in vehicle speed), and
(2) a drop in downstream occupancy (together with an increase in vehicle speed).

Clearly, of the factors given in Table 10, operating conditions, duration of the incident, detector spacing, and location of the incident are critical for comparative algorithms. Within the two types of algorithms (single station or comparative), there are generally two classes of algorithms:


those that use instantaneous measurements (albeit integrated over a short period of 30 seconds to minutes), and those that use a filter methodology, such as a recursive linear filter (e.g., an αβ type or even a low order Kalman filter), to in some way balance the measurement uncertainty against the fundamental noise (e.g., uncertainty of arrival rate) in the underlying generalized pattern flow assumptions.
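For concreteness, here is a minimal sketch of the kind of recursive αβ filter alluded to above, applied to a noisy 30-second occupancy series; the gain values are illustrative assumptions, not values from the cited literature.

```python
def alpha_beta_smooth(measurements, dt=0.5, alpha=0.5, beta=0.1):
    """Recursive alpha-beta filter over equally spaced samples.
    dt is the sample interval in minutes (0.5 = 30 seconds)."""
    level, trend = measurements[0], 0.0
    smoothed = []
    for z in measurements:
        predicted = level + trend * dt          # project state one interval ahead
        residual = z - predicted                # innovation: measurement surprise
        level = predicted + alpha * residual    # correct the level estimate
        trend = trend + (beta / dt) * residual  # correct the trend estimate
        smoothed.append(level)
    return smoothed

occupancy = [12.0, 12.5, 11.8, 13.0, 19.5, 24.0, 26.5]   # percent, 30-s samples
print([round(x, 1) for x in alpha_beta_smooth(occupancy)])
```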

It is difficult to generalize as to which approach (single detector or comparison of two detectors) is better. Clearly, the comparison method relies upon the ability of the two detectors to communicate, which intrinsically increases the cost of the system and reduces reliability. In spite of these shortcomings, the comparison methods, as implemented in the California algorithms (developed by TSC), are perhaps the most widely accepted set of automatic incident detection algorithms. The prevailing wisdom is that the comparison method is a better device than a single station algorithm, which has a tendency to generate excessive false alarms.

The two main problems in developing a single station algorithm are:

(1) the complexity of distinguishing incident from nonincident (or recurrent) congestion, and
(2) the difficulty of adjusting for incident related changes in traffic operation because of factors such as weather.

Progress has been made, however, in single station detectors because of the recognition that a single station detector is an economically better proposition, because it does not require continuous communication between two adjacent sensors. An example is the McMaster algorithm, which uses flow and occupancy to determine the presence of an incident. This algorithm has been continuously improved, and includes speed when available from the detector. We will introduce later in this text a novel single station detector that is self-calibrating and seeks anomalies in traffic flow patterns. When such an anomaly occurs, then and only then does the system elevate itself to look at the adjacent sensors to determine if the anomaly is an incident and, thereafter, seeks to locate the incident.
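A sketch of that escalation logic, under our reading of the text (every name below is an illustrative placeholder, not the authors' implementation):

```python
# Hypothetical station objects expose read(), update_norms(), is_anomalous(),
# and locate(); only when the single station flags an anomaly are the
# neighboring stations consulted to confirm and localize an incident.
def monitor(station, upstream, downstream):
    sample = station.read()                  # 30-s flow/occupancy/speed record
    station.update_norms(sample)             # online background self-calibration
    if not station.is_anomalous(sample):     # deviation from time-indexed norm?
        return ("normal", None)
    up, down = upstream.read(), downstream.read()
    if up.occupancy_rising() and down.occupancy_falling():
        return ("incident", station.locate(up, down))
    return ("anomaly-only", None)
```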

    Evaluating incident detector performance is not totally subjective. Before proceeding further, we briefly describe the appropriate measures to determine whether one set of algorithms performs a better job of incident detection.

The most commonly used performance measures are the ability of an algorithm to detect an incident (detection rate) as opposed to its false alarm rate; see Table 11. Detection rate is the number of incidents detected as a percentage of the number of incidents that occurred. The false alarm rate is the number of false alarm signals as a percentage of tests performed by the algorithm. The evaluation of algorithms is still somewhat subjective, as evaluations must allow for spatial and temporal deviations between the incident occurring and its detection. Usually the temporal leeway is rather generous in most evaluations. Other measures include the mean time to detect an incident and the accuracy of location of an incident. The above performance measures are not one-dimensional. A fundamental algorithm will perform differently according to the thresholds chosen. Evaluation is often a function of thresholds and severity of the incident. Detection ability and false alarm rate are generally functions of incident severity.
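In code form, the two primary measures are simply (the example counts are invented):

```python
def detection_rate(incidents_detected, incidents_occurred):
    """Incidents detected as a percentage of incidents that occurred."""
    return 100.0 * incidents_detected / incidents_occurred

def false_alarm_rate(false_alarms, tests_performed):
    """False alarm signals as a percentage of algorithm tests performed."""
    return 100.0 * false_alarms / tests_performed

# E.g., 45 of 50 incidents found, 120 false alarms over 100,000 30-second tests.
print(detection_rate(45, 50))              # 90.0
print(false_alarm_rate(120, 100_000))      # 0.12
```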

Table 11.

                                    Actual Condition
Condition Indicated by Algorithm    Incident            Incident-Free
Incident                            Detection           False alarm
Incident-Free                       Missed detection    -

    2.2. Adjacent Sensor Algorithms

The adjacent sensor algorithms exploit the fact that an incident reduces the capacity of the freeway at the site of the incident. If the resulting capacity is less than the volume of the traffic upstream, there is a build-up of congestion. The boundary of this congested region propagates in an upstream direction dependent upon the values of capacity and volume. Note that the severity of the incident will affect this boundary. How far the boundary propagates depends both on how long the incident is resident, as well as on the capacity and upstream volumes.

Figure 13. Basic California algorithm. (The legend distinguishes incident-free and incident conditions; typical threshold values are T1 = 6, T2 = 0.6, T3 = 0.15.)

Table 12. California algorithm.

Station indices (i) increase in the direction of travel.

Definitions:

    OCC(i, t)       Occupancy at station i for time interval t (percent)
    DOCC(i, t)      Downstream occupancy, OCC(i + 1, t)
    OCCDF(i, t)     Spatial difference in occupancy, OCC(i, t) - OCC(i + 1, t)
    OCCRDF(i, t)    Relative spatial difference in occupancy, OCCDF(i, t)/OCC(i, t)
    DOCCTD(i, t)    Relative temporal difference in downstream occupancy,
                    [OCC(i + 1, t - 2) - OCC(i + 1, t)]/OCC(i + 1, t - 2)

Note: Occupancy is measured as an average over all instrumented lanes at a single location on the freeway over a one-minute interval.

The most well-known of the adjacent sensor algorithms are the California algorithms developed by TSC. These are occupancy-based algorithms, which were found to be superior to volume-based algorithms. Figure 13 shows the most basic form, with the parameters described in Table 12 [35]. The traffic condition must pass three sequential thresholds: the spatial difference in the two-station occupancy measures, the relative spatial difference, and the relative temporal difference in downstream occupancy must all exceed their respective thresholds before the presence of an incident is declared. The advantage of the California algorithm in its most basic form is its simplicity and its intuitiveness. (A computational sketch of this basic decision logic follows Figure 14.) The algorithm has been subjected to modifications and extensions since its first introduction. These modifications are as simple as that shown in Figure 14 and as complex as that shown in Figure 15 [35]. The rationale for the latter augmentations is not only to understand when an incident has occurred, but also to know when that incident has terminated. Other extensions relate to smoothing the surveillance data and providing a forecast of the error in the measure of spatial differences in the occupancy of adjacent sensors.

Figure 14. Simple modified California algorithm.
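To make the three sequential tests concrete, here is a minimal sketch of the basic California decision described above, using the definitions of Table 12 and the typical thresholds of Figure 13. The function signature and variable names are our own assumptions.

    # Minimal sketch of the basic California algorithm (Figure 13, Table 12).
    # occ_up and occ_down are one-minute average occupancies (percent) at the
    # upstream station i and downstream station i+1; occ_down_prev2 is the
    # downstream occupancy two intervals earlier.  Assumes nonzero occupancies.

    def california_incident(occ_up, occ_down, occ_down_prev2,
                            t1=6.0, t2=0.6, t3=0.15):
        occdf = occ_up - occ_down                    # spatial difference
        if occdf <= t1:
            return False
        occrdf = occdf / occ_up                      # relative spatial difference
        if occrdf <= t2:
            return False
        docctd = (occ_down_prev2 - occ_down) / occ_down_prev2
        return docctd > t3                           # relative temporal difference

    # Example: upstream occupancy jumps while downstream occupancy falls.
    print(california_incident(occ_up=28.0, occ_down=6.0, occ_down_prev2=14.0))  # True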

The basic California algorithms work well when conditions before the upstream station are very similar to the conditions before the downstream station. The fact that they use differential readings rather than absolute measurements makes them less susceptible to such issues as weather induced traffic flow patterns. They perform less reliably when there are intermediate ramps, or intermediate freeway to freeway merges. The overpowering rationale for seeking a single station algorithm to detect incidents is cost. The cost of requiring two stations to perform this task immediately doubles the capital outlay, and even that cost pales in comparison with the cost of the continuous communication between stations needed to implement the algorithm. For these reasons, work on single station algorithms continues to grow.

    2.3. Single Station Algorithms

The prevailing wisdom is that single station algorithms tend to generate excessive false alarms. Clearly, this wisdom contains an element of truth. However, single station algorithms have come a long way since their first introduction. The most famous of these algorithms is the McMaster algorithm [37]. Since its first introduction, it has grown and been modified almost on a yearly basis [43-48]. The basic algorithm uses flow, occupancy, and, where available, speed. It uses 30-second flow and occupancy data to ascertain whether traffic congestion is present. The algorithm uses a graph of flow versus occupancy (Figure 16) to determine whether the cause of the transition into congested flow is an incident or recurrent congestion. One weakness of the algorithm stems from its use only of temporal data, and not spatial differential data. This causes it to have difficulty differentiating between changes in traffic operation due to such factors as weather, as distinct from incident related changes. Nevertheless, the original algorithm developed by Hall et al. has been tested against real traffic data and found to have good false alarm characteristics. Recently, Hall and others have modified their approach by applying catastrophe theory to the underlying traffic patterns [44,46,47].

Figure 15. Complex modified California algorithm.

Figure 16. Flow versus occupancy relationship used by the McMaster algorithm; the transition to congested flow occurs at about 26% occupancy.

2.4. Incident Patterns and Causes of False Alarms

There are five basic incident patterns. The characterization of the first (and there is no significance to the numbering) is capacity at the site of the incident being less than the volume of oncoming traffic. As a result, a queue develops upstream and, simultaneously, a region of light traffic develops downstream. This is the easiest of incident traffic patterns to detect.

A second pattern occurs when the prevailing condition is freely flowing but the impact of the incident is less severe. This situation occurs when the capacity of the highway after the incident is greater than the volume of oncoming traffic (e.g., one blocked lane on a four-lane highway, with the capacity of the remaining three lanes still exceeding the demand).

A third pattern is a more extreme case of the second type. Here the traffic is freely flowing, with the impact of the incident not noticeable in the traffic data. This situation might occur in moderate traffic with a disabled vehicle on the side of the road. It is extremely difficult for any automatic incident detection system to detect.

The fourth and fifth types relate to incidents occurring in heavy traffic. In the fourth case, the capacity at the incident site is less than the volume (and the capacity) of the traffic downstream. This difference will lead to a clearance of the downstream problem. This traffic pattern evolves slowly unless the incident is severe.

The fifth type happens when the capacity at the site of the incident is greater than the downstream volume. The effect of this situation is that the pattern is localized and not discernible in traffic data. As with the third type of incident pattern, it is difficult for any automatic algorithm to achieve any degree of success here.

There are four main causes of false alarms in automatic incident detection systems. The first, rather obvious, is a malfunction in the basic detector; detectors are not infallible. The second, perhaps the most significant contributor to false alarms, occurs in heavy traffic in which vehicles experience speed variations such as stop-and-go pockets of traffic. These show up in the traffic data as waves that propagate in a direction counter to the flow of the traffic. The third cause of false alarms is abnormal traffic patterns, such as intermediate bottlenecks, usually at locations that have a high volume of on-ramp traffic. This will be most pronounced when the total demand exceeds the capacity of the freeway.

    2.5. Summary of the Art Today

The use of occupancy dominates today's algorithms; few are speed based. In some respects, the authors feel that this reflects upon the basic sensor used (i.e., loops provide a more reliable figure for occupancy, and two sensors are basically needed to provide speed). Our belief is that we are going to see a greater proliferation of video sensors that provide occupancy, volume, and speed, and provide measures of the state of the traffic with greater reliability.

Comparative algorithms appear to work better for incident detection; however, they carry the cost of continuous communications between stations. The approach described in the next section uses single station anomaly detection, with fusion at a base station providing confirmation of an incident. This allays the continuous communication cost of a multiple sensor system, and simultaneously avoids the high false alarm rates experienced with single sensor detectors.

    3. INCIDENT DETECTION APPROACH

Our approach tries to capture the best features of both the single station (e.g., McMaster) and comparative (e.g., California) architectures. We capture the cost and communication bandwidth benefits of the single station configuration. At the same time, we retain the lower false alarm rates of comparative configurations. The key is coupling single station autonomous anomaly detection with remote fusion. Each sensor autonomously detects the presence of local traffic anomalies. Multiple-sensor anomaly reports are transmitted to and fused at a remote base-station for incident detection.

Figure 17 illustrates our approach, consisting of a network of autonomous traffic sensors distributed over a highway system. Each sensor works independently of all other sensors. The only communication is between the base station and an individual sensor. The individual sensors autonomously monitor traffic for anomalous situations. At the sensor level, an anomalous situation may or may not be an incident. The only requirement is that traffic patterns differ significantly from the norm. Once a sensor detects an anomaly, it contacts the base station and transmits appropriate traffic status information. Contact can take place over a variety of communication media.

Current sensor prototypes use telephone; future options will allow the use of ISDN, or network protocols such as SONET. The main tradeoffs are the ability to send back live imagery, the number of sensors that can communicate with the base station simultaneously, and the initial delay for a connection between a sensor and the base station. POTS delays can be on the order of half a minute. ISDN and SONET connections are almost instantaneous, but there is a cost and complexity penalty. Telephone is also limited to still or slow update rate imagery, while ISDN and SONET can obtain different levels of real time imagery.¹

Figure 17. Coupling single station autonomous anomaly detection with a remote fusion of multiple-sensor anomaly reports at a base-station for incident detection.

The base station receives all anomaly reports and must perform a fusion to determine if an incident actually occurred. Other information extracted by the base station includes an estimate of the type and location of the declared incident. Information pertinent to the incident declaration includes the traffic data returned by the individual sensors, as well as the time of report and the location of the reporting sensors. The base station also has the ability to query sensors that have not yet reported a traffic anomaly. Notification of the traffic manager occurs once the base station determines and declares an incident. The traffic manager can then query the base station for traffic flow pattern statistics. He may also view still images taken both at the time the sensors detected traffic anomalies as well as current still images. Figure 18 illustrates the base station user interface.

Communication takes place only between the base station and the sensors. There is no sensor to sensor communication. Typically, communication occurs only when a sensor has determined a traffic pattern anomaly.² This alleviates the continuous communication problem of the adjacent sensor approach.

Once the base station has received a first report of a traffic flow anomaly from a sensor, shortly thereafter³ other clustered sensors will also start reporting. The base station may also poll any sensor in the network for traffic status information. The base station then uses data fusion techniques to determine if the anomalies from the sensors actually indicate the presence of a traffic incident. Other information extracted may include the location and type of incident. The base station may also recommend a course of action. It is the fusion of these multiple anomaly reports that alleviates the high false alarm rate inherent in single sensor approaches.
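The following minimal sketch illustrates one plausible fusion rule of the kind described above, declaring an incident when several nearby sensors report anomalies within a short time window. The window length, distance threshold, and data structures are our own illustrative assumptions, not the fielded system's design.

    # Minimal sketch of base-station fusion of anomaly reports: declare an
    # incident when at least MIN_REPORTS sensors within MAX_SPACING_KM of the
    # first reporter raise anomalies inside a TIME_WINDOW_S window.
    from dataclasses import dataclass

    @dataclass
    class AnomalyReport:
        sensor_id: str
        position_km: float   # milepost of the reporting sensor
        time_s: float        # time the anomaly was declared

    TIME_WINDOW_S = 300.0
    MAX_SPACING_KM = 3.0
    MIN_REPORTS = 2

    def fuse(reports: list[AnomalyReport]) -> list[tuple[float, float]]:
        """Return (position, time) estimates for declared incidents."""
        incidents = []
        reports = sorted(reports, key=lambda r: r.time_s)
        for i, first in enumerate(reports):
            cluster = [r for r in reports[i:]
                       if r.time_s - first.time_s <= TIME_WINDOW_S
                       and abs(r.position_km - first.position_km) <= MAX_SPACING_KM]
            if len(cluster) >= MIN_REPORTS:
                # locate the incident near the most upstream reporting sensor
                incidents.append((min(r.position_km for r in cluster),
                                  first.time_s))
                break
        return incidents

    reports = [AnomalyReport("S12", 14.2, 100.0), AnomalyReport("S13", 15.1, 260.0)]
    print(fuse(reports))   # [(14.2, 100.0)]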

¹If image based sensors are used.
²Occasionally the base station may poll a sensor to gather traffic state information.
³The time delay would of course depend on the sensor spacing, traffic speed, volume, etc.


Figure 18. Base station interface.

Currently the INSIGHT™ sensor and accompanying base station infrastructure is under phased development. Initial work developed the sensor measurement hardware and software. The INSIGHT™ is an image based sensor; currently it has the ability to measure volume, speed, headway, occupancy, and link travel time, and to determine the vehicle class mix. Software in the units is remotely upgradable, and new measurements can be added as deemed appropriate.

Phase II entails the development of the anomaly detection algorithms. The first step of Phase II was the development of a self-calibrating McMaster incident detection algorithm. In this case, the algorithm is used only to declare anomalies, not incidents. The self-calibrating ability of the algorithm is currently beginning testing; results will appear in future publications. The results will be used to develop new and enhanced anomaly detection techniques. The approach for these algorithms will be discussed in later sections.

The last stage will entail the development of sensor fusion methods for the detection, classification, location, and recommended response to freeway incidents. The base station will have the ability to fuse data from remote sensors reporting anomalies and to poll nonreporting sensors adjacent to reporting sensors. Sensor reports, along with sensor locations and the temporal arrival of the alerts, will be used in generating the incident alarms and recommendations.

The rest of this paper outlines the status and details of the current stage of this system's implementation.

    3.1. Anomaly Detection

For the purpose of this paper, traffic anomaly detection is the determination of a traffic situation differing significantly from normal. An anomaly includes, but is not limited to, the class of traffic incidents. A nonincident anomaly might be the effect of a rain or snow storm on traffic, or the closure of a lane for scheduled maintenance. The distinction is important because, at the sensor level, the requirements for screening for anomalies are less stringent than those for screening for incidents. The problem of false alarms is thereby alleviated.


This section describes the development of methodologies for detecting anomalies. The requirements for anomaly detection are first spelled out; next, a modification to the McMaster algorithm is described; finally, future directions for new detection algorithms are explored.

    Anomaly detection requirements

Since a traffic anomaly is a situation differing significantly from normal, it is important to determine what normal is, and what affects normal. Clearly the geometry of the location being monitored by the sensor gives it a unique normality. This means that for each site the sensor must be uniquely calibrated. Since a fielded system would have hundreds if not thousands of sensors, it would not be feasible to manually calibrate each sensor. The first requirement is that any algorithm be self-calibrating.

Self-calibration is not a trivial task. Though Figure 12 shows that there is clearly a pattern describing normal traffic, there are two main problems. The first is the ability to screen the data, separating it into normal and anomalous conditions. The raw data being collected will contain both types of observations in a random mix. The problem of prescreening the data was solved for the McMaster algorithm by having human operators log all events for the collection and calibration period, 24 hours a day. Clearly this is untenable for a fielded system. An algorithm must be able to self-calibrate without supervision or a training set of exemplars of any kind.

Figure 12 showed a temporal pattern to traffic flow. Casual observation shows there is a weekly and daily cycle of flow with minor variations. These variations can be natural or caused by anomalies. Seasonal changes, holidays, special events (ball games), even weather can have an effect on normal flow, and should be accounted for in any algorithm. Thus, any algorithm should be temporally dependent, with the additional ability to account for special events.

Speed of detection is the next important requirement. Since a queue can build at up to 40 cars per minute per lane, a sensor should respond to an anomaly as quickly as possible. Since we are only screening for anomalies, this should help alleviate the high false alarm rates normally associated with short detection periods.

Last, the algorithms must function autonomously. The large number of sensors in a fielded system implies that each sensor must act autonomously of any central control. The algorithms must be such that no coordination between sensors is needed, and no regular updates or control can be given by any outside means. Each sensor must be self-bootable, and able to determine both the start and stop of an anomaly independently.

    3.2. McMaster Algorithm

The McMaster algorithm is based on a catastrophe theory model developed by Persaud and Hall [47]. They use catastrophe theory to model the relation between speed, occupancy, and flow. Figure 19 illustrates the basis of the approach. The figure consists of data collected before, during, and after several incidents. Area 1 consists of uncongested normal occupancy-flow data. Operation moves into Area 2 or Area 3 during the presence of congestion (an incident). Not illustrated, but equally important, is a sudden drop in speed. The basis of the McMaster algorithm is the calibration of the curve separating Area 1 and Area 2. Calibration can be automatic or manual, but depends on each calibration observation being recorded as an incident or an incident-free observation. Typically, a function is fit to the lower bound of the incident-free observations. One form used is

Flow = c + d_1(occupancy) + d_2(occupancy)^2, (3.1)

where c, d_1, and d_2 are the calibration coefficients. Additional calibration includes a lower limit of uncongested speed (SPL) and a constant difference coefficient (CDL). Calibration of the example shown in Figure 19 results in (see [37, p. 171])

Flow = 0.7 + 1.29(occupancy) - 0.001(occupancy)^2,
SPL = 91 (km/hr), CDL = 2.9.

Figure 19. Example of a 30-second flow-occupancy relationship (occupancy 0-80%, Areas 1-3 indicated) from Station NB-7 of the Burlington Skyway, Ontario, Canada [37].

Figure 20 illustrates the logic of the McMaster algorithm. Basically there are two states, congested (an incident has occurred) and uncongested (incident-free). To go from the uncongested to the congested state, the congestion test based on equation (3.1) must hold persistently for a count of Persistence_Limit. The opposite holds in going from the congested to the uncongested state.

Traffic_Status = Uncongested;
Persistence_Counter = 0;
while (Observing_Traffic) {
    switch (Traffic_Status) {
        case Uncongested:
            /* congestion test: speed below cutoff, or flow at or below
               the calibrated boundary curve for the current occupancy */
            if ((Speed < SPL) or
                (c + d1*(Occupancy) + d2*(Occupancy)^2 + CDL >= Flow))
                Persistence_Counter++;
            else
                Persistence_Counter = 0;
            if (Persistence_Counter > Persistence_Limit) {
                Traffic_Status = Congested;
                Persistence_Counter = 0;
            }
            break;
        case Congested:
            if ((Speed > SPL) and
                (c + d1*(Occupancy) + d2*(Occupancy)^2 + CDL <= Flow))
                Persistence_Counter++;
            else
                Persistence_Counter = 0;
            if (Persistence_Counter > Persistence_Limit) {
                Traffic_Status = Uncongested;
                Persistence_Counter = 0;
            }
            break;
    }
}

    Figure 20. McMaster incident detection logic.

One of the main weaknesses of the implementation of the algorithm is the need for prescreening the calibration data. The screening removes any incident contaminated measurements from the calibration data. Screening requires a human observer sitting at the highway and logging incidents during the entire data collection process. This was possible for the initial development of the algorithm because the test and data collection site (the Burlington Skyway) was manned 24 hours a day, seven days a week [37, p. 172]. The operator logs all events, i.e., stalled vehicles on the shoulder, accidents, etc. The manning, logging, and screening of calibration data is clearly not practicable for a fielded system that requires hundreds of sensors, each uniquely calibrated for its individual location.

A second drawback of the calibration methodology is its constancy. Calibration of each sensor occurs once, at installation. Furthermore, either the sensor itself must be capable of collecting the calibration data, or some external calibration data collection device must be used. Constancy of the process implies the sensor cannot respond to changes in season or highway geometry without the user intervening and recalibrating the sensor. The addition of an exit or entry ramp upstream or downstream of a sensor may certainly change the slope of the uncongested flow-occupancy relationship. Ibrahim and Hall [39] have shown the effect of weather on the speed-flow-occupancy relationship. The question still remains of the seasonal effect on the relation. One could reasonably conjecture that in Minneapolis, Minnesota, the relationship in December (with two feet of snow plowed onto the shoulder) must differ from the August relationship.

Our self-calibrating and self-tuning technique for the McMaster algorithm solves the constancy problem. The motivation to calibrate the line separating Area 1 and Area 2 in Figure 19 remains central to our technique. Additional requirements are that the sensor collect its own calibration data and that the collected data not be screened to remove incident related data. Furthermore, calibration should continue indefinitely in the background to tune the sensor to changes in the environment. Figure 21 illustrates the online calibration technique.

Figure 21. Self-calibrating McMaster algorithm: weekly accumulation of a flow-occupancy histogram and calculation of a volume cutoff per lane.

Flow and occupancy levels are split into an n × m matrix of bins. These bins are then used to build a histogram of the number of observations of each flow-occupancy level during an accumulation interval. For example, during the week-long accumulation interval of Figure 21, there were 319 observations of a flow rate of 1320 vehicles per hour and an occupancy level of 8%. At the end of the accumulation period, a calculation determines a noncongested cutoff flow level for each bin. The first step in determining the cutoff flow levels requires calculating the mean (μ_o) and standard deviation (σ_o) of the flow observations for each occupancy bin o. The cutoff level (λ_o)

for the occupancy bin o is then

λ_o = μ_o - α σ_o, (3.2)

where α is a sensitivity factor (typically 3).

Again, by example from Figure 21, we have for the 8% occupancy bin,

μ_8 = (43·1440 + 319·1320 + 300·1200 + 50·1080 + 7·960) / (43 + 319 + 300 + 50 + 7) = 1257,

σ_8 = sqrt[(43·1440² + 319·1320² + 300·1200² + 50·1080² + 7·960²) / (43 + 319 + 300 + 50 + 7) - 1257²] = 89,

and λ_8 = 1257 - 3·89 = 990.
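As a minimal sketch of this per-bin calibration step, the following reproduces the 8% occupancy bin worked above, assuming the histogram is kept as a mapping from flow-bin center to observation count; the names and container choices are our own.

    # Minimal sketch of the per-bin cutoff computation of equation (3.2),
    # using the 8% occupancy bin from Figure 21.  The histogram maps the
    # flow-bin center (veh/hr) to its observation count; names are illustrative.
    from math import sqrt

    def cutoff_flow(histogram: dict[float, int], alpha: float = 3.0) -> float:
        """lambda_o = mu_o - alpha * sigma_o for one occupancy bin."""
        n = sum(histogram.values())
        mu = sum(count * flow for flow, count in histogram.items()) / n
        var = sum(count * flow**2 for flow, count in histogram.items()) / n - mu**2
        return mu - alpha * sqrt(var)

    bin_8pct = {1440: 43, 1320: 319, 1200: 300, 1080: 50, 960: 7}
    print(round(cutoff_flow(bin_8pct)))   # 986; the rounded arithmetic above gives 990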

The end result is a vector of flow cutoff levels as a function of occupancy bin.⁴ If no previous data has been accumulated for calibration, this vector is used directly. Otherwise, low pass filtering of the vector occurs on a bin by bin basis. Using linear interpolation, a cutoff flow level can be determined from this vector for any occupancy level. Denote this function by F_cut(occupancy).

⁴It is possible to directly accumulate the flow cutoffs without first building the histogram. We felt the histogram approach to be more intuitive for explanation.

Computation of a cutoff speed occurs similarly. The mean speed (μ_speed) and speed standard deviation (σ_speed) are determined during the accumulation period. Again a cutoff speed is calculated as

λ_speed = μ_speed - β σ_speed, (3.3)

where β is again a sensitivity factor. λ_speed is again low pass filtered to determine a cutoff speed Λ_speed.

The logic of the McMaster algorithm stays basically the same. The only changes are to replace SPL by Λ_speed, and c + d_1(Occupancy) + d_2(Occupancy)^2 + CDL by F_cut(occupancy), in Figure 20. Evaluation of the algorithm is ongoing.
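A minimal sketch of the resulting self-calibrated congestion test follows, substituting the learned cutoffs into the Figure 20 logic. Here f_cut stands for the interpolated F_cut(occupancy) function; the bin layout and names are our own illustrative assumptions.

    # Minimal sketch of the self-calibrated congestion test that replaces the
    # hand-calibrated curve in Figure 20.  flow_cutoffs holds the per-bin
    # lambda_o values; lambda_speed is the low-pass filtered speed cutoff.
    def f_cut(occupancy: float, bin_width: float, flow_cutoffs: list[float]) -> float:
        """Linearly interpolate the cutoff flow vector at any occupancy level."""
        x = min(max(occupancy / bin_width, 0.0), len(flow_cutoffs) - 1.0)
        i = int(x)
        j = min(i + 1, len(flow_cutoffs) - 1)
        return flow_cutoffs[i] + (x - i) * (flow_cutoffs[j] - flow_cutoffs[i])

    def congested_test(flow, occupancy, speed, lambda_speed,
                       bin_width, flow_cutoffs) -> bool:
        """Replaces 'Speed < SPL or curve + CDL >= Flow' in Figure 20."""
        return (speed < lambda_speed) or \
               (flow <= f_cut(occupancy, bin_width, flow_cutoffs))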

The self-calibrating McMaster algorithm is to go under field trials shortly. Two INSIGHT™ sensors are currently being installed in the northbound lanes of I-270 at Olive Road and Page Blvd. in St. Louis. These units will be used to field test the self-calibrating McMaster algorithm. The results will be presented in a future publication, and will serve as a baseline for comparison for future algorithm development.

    3.3. Future Directions

A clear shortcoming of the McMaster algorithm is its lack of temporal dependence. The conditions that describe an anomaly are the same at the height of the evening rush hour as at mid-afternoon and at 3 A.M. The boundary condition separating congested and uncongested traffic flow is essentially an artificial mechanism with no intuitive context. Our belief is that a fuzzy logic approach is one promising way of overcoming the inherent limitations of the McMaster algorithm.

Zadeh [49] introduced fuzzy sets to overcome the criticism of classical set theory for being an extreme simplification of reality. The black and white nature of classical set theory prevents the handling of real world uncertainty. Uncertainty implies that it may not be known with true or false certainty whether an object belongs to a set.

Fuzzy set theory extends regular set theory by allowing each element to have a degree of membership in a fuzzy set. Let X be a universe of discourse and let S be a crisp (traditional) subset of X. The characteristic function associated with S is a mapping

μ_S : X → {0, 1},


Figure 22. Characteristic function of a crisp set.

such that for all x ∈ X, μ_S(x) = 1 if x ∈ S and μ_S(x) = 0 if x ∉ S. For example, let X = ℝ be the real numbers and S = {x | 0 ≤ x ≤ 20, x ∈ ℝ}; then the characteristic function is as shown in Figure 22.

The fuzzy set extends the notion of a crisp subset by extending the range of the characteristic function from the pair {0, 1} to the unit interval I = [0, 1]. A fuzzy subset A ⊂ X is associated with a characteristic function

μ_A : X → [0, 1].

This characteristic function is called the membership function of A. The membership function indicates, for each x ∈ X, the degree of membership in A. If μ_A(x) = 1, x belongs completely to A. If μ_A(x) = 0, x does not belong to A. In between 0 and 1 lies a continuum of degrees of membership. Figure 23 illustrates some typical fuzzy sets on the real line ℝ.

Figure 23. Typical fuzzy sets on the real line.

One of the primary applications of the fuzzy subset is its use in conveying a concept. One could define the phrases early morning rush hour and mid-morning rush hour as subsets of driving time of day. Abnormal Flow and Normal Flow could be defined as subsets of Flow. Figure 24 illustrates potential fuzzy subsets of driving time of day. This representational ability of fuzzy subsets helps define linguistic values. Let T stand for the variable driving time of day; then a statement can be made such as

    T is mid-morning rush hour.


    The value of T, mid-morning rush hour, is an example of a linguistic value. Linguistic values are in essence the association of a fuzzy subset with the value of the variable.

Figure 24. Fuzzy subsets of driving time of day, over 0:00-24:00: early morning rush hour, mid-morning rush hour, late morning rush hour, late morning, lunch-time, mid-afternoon, early evening rush hour, late evening rush hour, early evening, and late evening.

The point, or power, of assigning a fuzzy subset as the value of a variable is to incorporate the uncertainty in our knowledge of the actual value of the variable. If the value of T is mid-morning rush hour, we are certain that T is not 2:00 P.M., but 7:00 A.M. to 8:30 A.M. are possible values for the variable T.

    Zadeh [50] introduced the theory of Approximate Reasoning (AR) based on fuzzy sets to provide a framework for reasoning in the face of uncertainty. This theory represents propositions as statements assigning fuzzy sets as values to variables. Figure 25 illustrates the form of an AR mechanism applied to incident detection.

Figure 25. Anomaly approximate reasoning mechanism: fuzzification, fuzzy rule base system, defuzzification, anomaly output.

The fuzzification function translates inputs such as time in seconds, or flow rate in vehicles per hour, into a vector indicating the degree of membership in each of the fuzzy sets. For example, if the time is 9:00 A.M., then from Figure 24 the vector indicating the travel time of day is {0, .5, .5, 0, 0, 0, 0, 0, 0, 0}. The defuzzification function transforms the fuzzified internal memberships back into a crisp form for external output. This may simply be a threshold test to declare the presence of an anomaly.
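A minimal sketch of such a fuzzification step follows, using triangular membership functions; the breakpoints below are invented for illustration and are not the calibrated subsets of Figure 24.

    # Minimal sketch of fuzzification with triangular membership functions.
    # Each fuzzy subset is (left foot, peak, right foot) in hours of the day;
    # the breakpoints are illustrative, not the calibrated sets of Figure 24.
    SUBSETS = {
        "early_morning_rush": (5.0, 6.5, 8.0),
        "mid_morning_rush":   (7.0, 8.5, 10.0),
        "late_morning_rush":  (8.5, 10.0, 11.5),
    }

    def membership(x: float, left: float, peak: float, right: float) -> float:
        """Degree of membership in a triangular fuzzy set."""
        if x <= left or x >= right:
            return 0.0
        return (x - left) / (peak - left) if x <= peak else (right - x) / (right - peak)

    def fuzzify(hour: float) -> dict[str, float]:
        return {name: membership(hour, *tri) for name, tri in SUBSETS.items()}

    print(fuzzify(9.0))   # mid and late morning rush hour each partially active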

The heart of the mechanism is the Fuzzy Rule Base System. The rule base consists of a set of fuzzy rules of the form

IF A is V_j AND B is U_k THEN C is W_jk,

and a fuzzy inference mechanism for chaining between the rules. The chaining mechanism is based on Zadeh's AR theory. A typical anomaly rule could be:

IF travel time of day is mid-morning rush hour AND flow is Low
THEN Anomaly is High.
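A minimal sketch of evaluating one such rule follows, under the common min interpretation of fuzzy AND and max for combining rules; the membership values and the min/max choices are illustrative assumptions, not the paper's fielded inference mechanism.

    # Minimal sketch of evaluating the anomaly rule above with fuzzy AND as min.
    # Inputs are degrees of membership produced by the fuzzification step.
    def rule_strength(time_is_mid_morning_rush: float, flow_is_low: float) -> float:
        """IF time is mid-morning rush hour AND flow is Low THEN Anomaly is High."""
        return min(time_is_mid_morning_rush, flow_is_low)

    # Several rules feeding the same conclusion combine with max (fuzzy OR).
    anomaly_high = max(rule_strength(0.67, 0.8),   # this rule
                       0.1)                        # strength of some other rule
    print(anomaly_high)   # 0.67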


The determination of the fuzzy sets and rule base, though certainly not a trivial task, is not beyond the efforts of reasonable traffic engineering expertise. The real problem is the determination of the membership functions. It might be possible to determine one set of membership functions for the travel time of day identical for every sensor location. The problem is the determination of membership functions for parameters such as flow rate, occupancy, speed, etc. As noted earlier, these values are dependent on each sensor's location. A mechanism is needed to autonomously calibrate these membership functions for each sensor.

Artificial Neural Networks (ANN) are an appropriate way of solving the calibration problem of the membership functions. One of the key capabilities of ANNs is their ability to train and learn. A judicious application of ANN techniques could be to learn the membership functions. The training would of course need to be unsupervised.

Simpson [51, p. 3] gives the following simple definition of an Artificial Neural Network: "... an ANN is a nonlinear directed graph with edges that is able to store patterns by changing the edge weights and is able to recall patterns from incomplete and unknown inputs."

A biological neuron is the basic building block of the nervous system. Figure 26 shows a simplified view of a biological neuron. The body cell of the neuron is termed the soma. Connected to the soma are multiple dendrites and an axon. These serve as the mechanism of communication with other neurons. The dendrites are spine-like connections that receive stimuli from other neurons. Each neuron has a single axon that serves to transmit the same stimulus to all the other connected neurons. The connection between a neuron's axon and another neuron's dendrite is termed a synapse. Each synapse has a level of transmission of the stimulus on the axon to the connected dendrite. Only when a neuron receives sufficient stimulus from other neurons connected at the dendrites does the neuron become active and send a stimulus out on its own axon. Since each neuron accepts a different level of stimulus from a connected neuron based on the transmission level at the synapse, the network knowledge is equivalent to these transmission levels. Biological neural networks work by a biochemical process that is beyond this survey's scope of interest.

    Figure 26. Biological neuron.

The basic model for an artificial neuron is shown in Figure 27. Lines (or wires) replace the functions of the biological axon and dendrites. Synapses are replaced by weights (or resistors). The body of the neuron, or soma, is split into two components. The first component is an adder that sums the weighted outputs of other neurons. The second component is the activation function. This function determines the artificial neuron's activation level based on the weighted net input. Typically, this function is an s-shaped function known as a squashing function (e.g., f(z) = (e^z - 1)/(e^z + 1)).
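As a minimal sketch of this two-component model (adder plus squashing activation), the following uses the squashing function quoted above; the weights and inputs are illustrative.

    # Minimal sketch of the artificial neuron of Figure 27: a weighted adder
    # followed by the squashing activation f(z) = (e^z - 1)/(e^z + 1).
    from math import exp

    def squash(z: float) -> float:
        return (exp(z) - 1.0) / (exp(z) + 1.0)

    def neuron(inputs: list[float], weights: list[float]) -> float:
        """Activation of one neuron from the weighted sum of its inputs."""
        z = sum(x * w for x, w in zip(inputs, weights))
        return squash(z)

    print(neuron([0.5, 1.0, -0.25], [0.8, -0.3, 0.5]))   # ~ -0.0125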

The method of connecting and addressing the network neurons and weights defines the network topology. Two general categories of networks exist: recurrent and nonrecurrent. Recurrent networks have cycles in the network connections. These cycles make recurrent networks dynamical systems. Nonrecurrent networks have no cycles and can be viewed as a transformation from n-dimensional Euclidean space to m-dimensional Euclidean space. Figure 28 illustrates a feedforward nonrecurrent network and a fully recurrent network.


    Figure 27. Artificial neuron model.

Figure 28. Network topologies: a feedforward nonrecurrent network and a fully recurrent network (weights not shown).

Backpropagation [52] is the most popular algorithm for the training of multilayer artificial neural networks. The learning procedure, called the generalized delta rule, involves the presentation of a set of pairs of input patterns and target output patterns. The system of multilayered networks with randomly (or otherwise) initialized interconnection weights uses the given input vector to produce its own output vector and compares this with the target output pattern. When there is a difference, the rule for changing the weights is given by

Δ_k w_ji = η (t_kj - o_kj) i_ki = η δ_kj i_ki,

where t_kj is the target output for the jth component of the output pattern for pattern k, o_kj is the jth element of the actual output pattern produced by the presentation of input pattern k, i_ki is the ith element of input pattern k, η is a learning-rate constant, and δ_kj = t_kj - o_kj is the output error term.
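As a minimal sketch of one such weight update for an output-layer unit, following the generalized delta rule stated above; the learning rate and vectors are illustrative assumptions.

    # Minimal sketch of one generalized-delta-rule update for output-layer
    # weights, following the equation above.
    def delta_rule_update(w: list[list[float]], target: list[float],
                          output: list[float], inputs: list[float],
                          eta: float = 0.1) -> None:
        """w[j][i] += eta * (t_kj - o_kj) * i_ki for one pattern k."""
        for j, (t, o) in enumerate(zip(target, output)):
            delta = t - o
            for i, x in enumerate(inputs):
                w[j][i] += eta * delta * x

    w = [[0.0, 0.0]]
    delta_rule_update(w, target=[1.0], output=[0.2], inputs=[0.5, 1.0])
    print(w)   # [[0.04, 0.08]]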