SANTA FE, N.M. — I found Mark Oswald’s article, “WIPP woes due to wrong word?” on the Journal’s front page of Feb. 10 quite interesting.
Unfortunately, I fear it may leave the typical reader inappropriately relieved that we finally have a perfectly reasonable explanation for the radioactive release that occurred at WIPP approximately one year ago. The article did not address the very important follow-up questions that the Los Alamos explanation should generate.
The article was informative on Los Alamos National Laboratory's admitted "mistake," which ultimately led to the radioactive release incident that occurred at the Waste Isolation Pilot Plant (WIPP) near Carlsbad, N.M., approximately one year ago. In the article, the "incident" was reportedly caused when a low-level "note taker" failed to recognize the technical difference between the words "inorganic" and "an organic" absorbent.
Ultimately, the “mistake” made it into the updated final procedures without review by an appropriate “subject matter expert.” Any reasonable person can understand how such a human “mistake” could occur.
According to the article, packing of drums per the erroneously updated procedure began in August 2012. That procedure specifies that nitrate salts are to be packed with ORGANIC absorbents. This is essentially a recipe for a weak pipe bomb: pressure must be expected to build within these waste drums at some unknown rate, rupturing the containers at some unknown future time. One of these drums has in fact already ruptured, within about ONE year!
Now, the important questions are: (1) How many such "bombs" were constructed and placed into WIPP since August 2012; (2) How can they be safely approached and made safe; and lastly, the most important question, (3) What was the WIPP hazard assessment's estimated probability of such a perfectly reasonable "human error" leading to a release of radioactivity from WIPP over the next 10,000-plus years?
I believe that, prior to opening WIPP, it was claimed that a single minor release might be expected in something over 5,000 years. However, since this understandable “mistake” occurred, we have observed an actual release in a little over one year!
We should be, and should have been, asking how many other simple "human errors" might reasonably lead to a radioactive release. In reality, it is generally precisely this type of simple, highly probable "human error" that is the most difficult to predict, control, analyze and quantify.
However, because they are so probable, they will generally be the prime drivers in estimating the overall probability of an adverse outcome (a release).
It seems to me that the Los Alamos explanation raises serious questions about the method and validity of the risk assessment that was used to evaluate the basic safety of the WIPP project.
Don’t get me wrong – I am not anti-WIPP. I still believe that, overall, the WIPP concept is close to the best we could do.
However, recent experience clearly indicates that it is not nearly as safe as it has been represented. Seeking numbers like 1 in 10,000+ years will result in obtaining those numbers. These kinds of probabilities can only be derived by minimizing, masking and sugar-coating the relatively high probabilities of “human errors” within the system.
Given the human component in the WIPP system, I doubt it could ever achieve a safety record better than experiencing some minor release about every 50 years. This ought to be at least a factor of 10 better than simply piling drums in storage lots or buildings and watching them for 20,000 years.
However, if the WIPP workers believe the bureaucrats' safety numbers (1 in 10,000 years), they won't recognize the danger and importance of the work they are doing. Such a false sense of safety can ultimately only increase the likelihood and severity of any inevitable future radioactive releases.
This reminds me of another well-known case: the space shuttle risk assessment. In the mid-1980s, the NASA administration wanted the probability of a major shuttle accident to be something like 1 in 10,000 to 100,000 flights so that they could justify putting a schoolteacher on board. (If it were really that safe, why would you need experimental test pilots to fly the most powerful flying machine on the planet?) When NASA's rocket engineers were later polled for their qualitative estimates, their answers averaged about 1 in 25 flights (which turned out to be remarkably accurate). The NASA administration got, and routinely used, their desired safety numbers prior to the Challenger accident!
There are numerous other examples of ignoring the risks of "unforeseen" human behavior leading to catastrophe. Risk assessments often must minimize these risks precisely because humans are so unreliable and unpredictable.
Unfortunately, you just can't get the desired risk numbers when honest numbers for human behavior are part of the system. Under pressure from administrators for lower risk numbers, risk assessors often minimize the human element to obtain the desired figures. These people know that if they refuse to provide the desired numbers, the administrators can always find someone else who will.
Now, the Department of Energy estimates the cost of recovery from the Los Alamos mistake at over $500 million and three more years. This is the cost of minimizing and ignoring "human behavior issues" because they make it impossible to achieve the desired safety numbers. It would cost less to acknowledge, identify and manage these potential "human error" issues. Doing so would also make the project safer.
James Moore is a retired engineer who lives in Albuquerque.