The Northeast blackout of 2003 was a widespread power outage that occurred throughout parts of the Northeastern and Midwestern United States and Ontario, Canada on Thursday, August 14, 2003, just before 4:10 p.m. EDT (UTC−04).[1] At the time, it was the second most widespread blackout in history, after the 1999 Southern Brazil blackout.[2][3] The blackout affected an estimated 10 million people in Ontario and 45 million people in eight U.S. states.


NOAA satellite imagery one day before and the night of the blackout.
Background
Electrical power cannot easily be stored over extended periods of time, and is generally consumed less than a second after being produced. The load on any network must be matched by the supply to it and its ability to transmit that power. Any overload of a power line, or underload/overload of a generator, can cause hard-to-repair and costly damage, so the affected device is disconnected from the network if a serious imbalance is detected.
As power lines carry more current, they get hotter. This causes them to lengthen and sag between towers. They may safely sag only down to a specified minimum clearance height above the ground. If the lines sag further, a flashover to nearby objects (such as trees) can occur, causing a transient increase in current. Automatic protective relays detect the high current and quickly act to disconnect the faulted line from service. To maintain the lines' specified operating clearance, the right-of-way must be kept clear of vegetation.
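The sag mechanism above can be quantified with a textbook approximation: linear thermal expansion lengthens the conductor, and for a shallow span the mid-span sag is roughly D ≈ √(3S(L−S)/8), where S is the span and L the conductor length. The sketch below uses that formula with purely illustrative numbers (span, conductor length, and temperature rise are assumptions, not values from the actual Ohio lines):

```python
import math

def conductor_length(length_m, alpha, delta_t):
    """Conductor length after heating by delta_t degrees C,
    using simple linear thermal expansion (coefficient alpha)."""
    return length_m * (1 + alpha * delta_t)

def sag(span_m, length_m):
    """Approximate mid-span sag for a shallow parabolic conductor:
    D ~ sqrt(3 * S * (L - S) / 8), valid when L barely exceeds S."""
    return math.sqrt(3 * span_m * (length_m - span_m) / 8)

span = 300.0    # tower-to-tower distance, m (illustrative)
length = 301.0  # unstressed conductor length, m (illustrative)
alpha = 23e-6   # thermal expansion of aluminium, per degree C

cool = sag(span, length)
hot = sag(span, conductor_length(length, alpha, 40.0))
print(f"sag at ambient: {cool:.2f} m, after +40 C: {hot:.2f} m")
```

Even a modest temperature rise lengthens the wire by fractions of a metre, but the square-root relation amplifies that into more than a metre of extra sag, which is why heavily loaded lines can reach trees that were safely clear on a cool day.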
Should a fault occur and take a line out of service, the change in current flow is compensated by other transmission lines, which must have enough spare capacity to carry the excess current. If they do not, overload protection in those lines will also trip, causing a cascading failure as the excess current is switched onto neighbouring circuits running at or near their capacity.
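The cascade dynamic can be illustrated with a deliberately simplified toy model (even redistribution of a tripped line's flow is an assumption; real networks redistribute according to impedances):

```python
def cascade(capacities, flows):
    """Toy cascading-failure model: when a line's flow exceeds its
    capacity, the line trips and its flow is split evenly among the
    surviving lines. Returns the set of tripped line indices."""
    tripped = set()
    flows = dict(enumerate(flows))
    while True:
        overloaded = [i for i, f in flows.items() if f > capacities[i]]
        if not overloaded:
            return tripped
        for i in overloaded:
            shed = flows.pop(i)     # line i trips out of service
            tripped.add(i)
            if flows:               # its flow shifts to the survivors
                extra = shed / len(flows)
                for j in flows:
                    flows[j] += extra

# Three parallel lines near capacity: one initial overload takes
# out every line, because the survivors cannot absorb the excess.
print(cascade(capacities=[100, 100, 100], flows=[105, 90, 90]))
# → {0, 1, 2}
```

With generous spare capacity (say, flows of 50 on the same lines) the same fault trips nothing further, which is exactly the margin operators are required to preserve.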
System operators are responsible for ensuring that power supply and loads remain balanced, and for keeping the system within safe operational limits such that no single fault can cause the system to fail. After a failure affecting their system, operators must obtain more power from generators or other regions or "shed load" (meaning cut power to some areas) until they can be sure that the worst remaining possible failure anywhere in the system will not cause a system collapse. In an emergency, they are expected to immediately shed load as required to bring the system into balance.
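Emergency load shedding, as described above, amounts to disconnecting enough feeders to bring demand back within available supply. A minimal sketch of one possible policy (the greedy largest-first ordering is an assumption for illustration; real shedding schemes follow pre-ranked priority lists):

```python
def shed_load(supply_mw, loads_mw):
    """Toy load-shedding: greedily keep feeders (largest first) while
    total kept demand fits within available supply; shed the rest.
    Returns (kept, shed) lists of feeder loads in MW."""
    kept, shed = [], []
    for load in sorted(loads_mw, reverse=True):
        if sum(kept) + load <= supply_mw:
            kept.append(load)
        else:
            shed.append(load)   # cut power to this feeder
    return kept, shed

# 620 MW of demand against 500 MW of remaining supply (illustrative).
print(shed_load(500, [200, 150, 120, 90, 60]))
# → ([200, 150, 120], [90, 60])
```

The point of the exercise is the balance constraint itself: whatever the ordering policy, the kept total must not exceed supply, or the imbalance propagates to generators.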
To assist the operators there are computer systems, with backups, which issue alarms when there are faults in the transmission or generation system. Power flow modeling tools let them analyze the current state of their network, predict whether any parts of it may be overloaded, and identify the worst remaining possible failure, so that they can redispatch generation or reconfigure the transmission system to prevent a failure should that situation occur. If the computer systems and their backups fail, the operators are required to monitor the grid manually, instead of relying on computer alerts. If they cannot interpret the current state of the power grid in such an event, they follow a contingency plan, contacting other plant and grid operators by telephone if necessary. If there is a failure, they are also required to notify adjacent areas which may be affected, so those can predict the possible effects on their own systems.
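The "worst remaining possible failure" analysis is conventionally called an N-1 contingency study: the system must survive the loss of any single element. The sketch below is a toy version under the same even-redistribution assumption as before; a real study would re-solve the full power flow for each outage rather than redistribute arithmetically:

```python
def n_minus_1_secure(capacities, flows):
    """Toy N-1 check: for each single-line outage, redistribute that
    line's flow evenly over the survivors and verify no survivor
    exceeds its capacity. True means no single outage overloads."""
    n = len(flows)
    for out in range(n):
        extra = flows[out] / (n - 1)    # flow shifted to each survivor
        for j in range(n):
            if j != out and flows[j] + extra > capacities[j]:
                return False            # this contingency overloads line j
    return True

print(n_minus_1_secure([100, 100, 100], [60, 60, 60]))  # → True
print(n_minus_1_secure([100, 100, 100], [90, 90, 90]))  # → False
```

A system that fails this check is exactly the situation operators must correct in advance, by redispatching generation or shedding load, before any fault actually occurs.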
Local operators are co-ordinated by regional centers, but the operating principle is the same whether the network is large or small.
Investigation efforts
A joint federal task force was formed by the governments of Canada and the U.S. to oversee the investigation and report directly to Ottawa and Washington. The task force was led by then-Canadian Natural Resources Minister Herb Dhaliwal and U.S. Energy Secretary Spencer Abraham.
In addition to determining the initial cause of the cascading failure, the investigation also examined the failure of safeguards designed to prevent a repetition of the Northeast blackout of 1965. Issues expected to arise included failure to maintain the electrical infrastructure, failure to upgrade to so-called "smart cables," failure of shunting and rerouting mechanisms, AC versus DC intersystem ties, and the substitution of electricity market forces for central planning. The North American Electric Reliability Corporation, a joint Canada-U.S. council, is responsible for dealing with these issues.
On November 19, 2003, U.S. Energy Secretary Spencer Abraham said his department would not seek to punish FirstEnergy Corp for its role in the blackout because current U.S. law does not require electric reliability standards. Abraham stated, "The absence of enforceable reliability standards creates a situation in which there are limits in terms of federal level punishment."[10]
Findings
In February 2004, the U.S.-Canada Power System Outage Task Force released their final report, placing the causes of the blackout into four groups:[11]
First, that FirstEnergy and its reliability council "failed to assess and understand the inadequacies of FE's system, particularly with respect to voltage instability and the vulnerability of the Cleveland-Akron area, and FE did not operate its system with appropriate voltage criteria".
Second, that FirstEnergy "did not recognize or understand the deteriorating condition of its system".
Third, that FirstEnergy "failed to manage adequately tree growth in its transmission rights-of-way".
Finally, the "failure of the interconnected grid's reliability organizations to provide effective real-time diagnostic support."
The report states that a generating plant in Eastlake, Ohio (a suburb of Cleveland) went offline amid high electrical demand, putting a strain on high-voltage power lines (located in a distant rural setting) which later went out of service when they came in contact with "overgrown trees". The cascading effect that resulted ultimately forced the shutdown of more than 100 power plants.
A preliminary report said that four or five capacitor banks in the Cleveland-Akron area, including banks at the Fox and Avon 138 kV substations, had been removed from service for government inspection. These reactive power sources are important for voltage support, but they were not restored to service that afternoon despite the system operators' need for more reactive power compensation. The normal practice is to perform such inspections in off-peak seasons, but government officials demanded the inspection while inspectors were available; the final report does not mention this demand. The lack of reactive power compensation contributed to the protective relay trip that brought the system down. A protective relay trip had also initiated the Northeast blackout of 1965.
This trip caused overloading of other transmission lines, tripping their relays. Once these multiple trips occurred, multiple generators suddenly lost parts of their loads, so they accelerated out of phase with the grid at different rates, and tripped out to prevent damage.
Computer failure
A software bug known as a race condition existed in General Electric Energy's Unix-based XA/21 energy management system. Once triggered, the bug stalled FirstEnergy's control room alarm system for over an hour. System operators were unaware of the malfunction; the failure deprived them of both audio and visual alerts for important changes in system state.[12][13] After the alarm system failure, unprocessed events queued up and the primary server failed within 30 minutes. Then all applications (including the stalled alarm system) were automatically transferred to the backup server, which itself failed at 14:54. The server failures slowed the screen refresh rate of the operators' computer consoles from 1–3 seconds to 59 seconds per screen. The lack of alarms led operators to dismiss a call from American Electric Power about the tripping and reclosure of a 345 kV shared line in northeast Ohio. Technical support informed control room personnel of the alarm system failure at 15:42.[14]
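A race condition of the kind reported in the XA/21 system arises when two concurrent activities perform an unsynchronized read-modify-write on shared state. The sketch below is illustrative only, not the actual XA/21 code: it shows the general class of bug (a non-atomic update that can lose events under concurrency) and the conventional fix of serializing the critical section with a lock.

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    """Buggy pattern: `counter += 1` is a read-modify-write that is
    not atomic, so concurrent callers can overwrite each other's
    updates and events are silently lost."""
    global counter
    for _ in range(n):
        counter += 1

def safe_increment(n):
    """Fixed pattern: the lock serializes the read-modify-write,
    so every update is counted exactly once."""
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # → 400000 with the lock; the unsafe version can lose updates
```

In the blackout scenario the damage was subtler than a wrong count: the corrupted shared state stalled the alarm subsystem entirely, so the failure mode was silence rather than visible error, which is why the operators worked for over an hour without knowing anything was wrong.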
Restoration of service
By evening of August 14, power had been restored to:
- many areas of the Niagara Region in Ontario;
- areas of the Ontario Golden Horseshoe from St. Catharines to Burlington (supplied from the City of Niagara Falls, Ontario, which never lost power);
- parts of Southwestern Ontario, particularly areas near the Bruce Nuclear Power Plant, which lost power for only 4–8 hours;
- parts of Mississauga;
- parts of London, Ontario;
- portions of western Ottawa, including Kanata, and south to Kingston;
- portions of downtown Toronto;
- Cornwall and Pembroke, Ontario;
- three-quarters of the millions of customers who had lost power in New Jersey;
- parts of Pennsylvania, Ohio and Michigan;
- parts of Long Island;
- Albany and its surroundings;
- New London County, Connecticut;
- Parry Sound, Ontario.
Con Edison retracted its claim that New York City would have power by 1 a.m. Some areas of Manhattan regained power at approximately 5 a.m. on August 15, the borough of Staten Island regained power around 3 a.m. that morning, and Niagara Mohawk predicted that the Niagara Falls area would have to wait until 8 a.m.
By early evening of August 15, two airports, Cleveland Hopkins International and Toronto Pearson International, were back in service.
Half of the affected part of Ontario had power by the morning of August 15, though even in areas that had come back online, some services were still disrupted or running at reduced levels. The last areas to regain power typically suffered from trouble at local electrical substations that was not directly related to the blackout itself.
By August 16, power was fully restored in New York and Toronto. However, Toronto's subway and streetcars remained out of service until August 18 to prevent equipment from being stranded in awkward locations if power were interrupted again. Power had been mostly restored in Ottawa, though authorities warned of possible additional disruptions and advised conservation as power continued to be restored to other areas. Ontarians were asked to reduce their electricity use by 50% until all generating stations could be brought back online; four remained out of service on August 19. Illuminated billboards were largely dark for the week following the blackout, and many stores kept only a portion of their lights on.
Preparations for the possible disruptions of the Year 2000 problem have been credited with prompting the installation of new electrical equipment and systems, which allowed for a relatively rapid restoration of power in some areas.
Sustained efforts
In Ontario, some cities took part in power conservation challenges or events to remind citizens of the blackout, the best known being the Voluntary Blackout Day hosted by the Ontario Power Authority. During these events, citizens were encouraged to maximize their energy conservation activities. Smaller cities such as London, Guelph, Woodstock and Waterloo took part in the challenges annually. The final Voluntary Blackout Day was held on August 14, 2010.[28]