by Edward M. Marszal, PE, ISA84 Expert
Gas detection is ubiquitous across process industry applications where leaks from process equipment can produce toxic or flammable gas clouds that can hurt people or damage property. Even though the detectors are ubiquitous now, their application is relatively new. In fact, only a few dozen years ago in the offshore oil and gas industry, the state of the art in gas leak detection was to measure the pressure in vessels and piping; if the pressure fell below the trip point, there had to be a leak somewhere. You would probably be shocked to know how many existing platforms in the Gulf of Mexico still operate under this philosophy. Because gas leak detection systems are relatively new, the methods used to determine how many detectors are required and where they should be installed are also still in their infancy. But with the help of organizations like ISA, companies are getting more technically rigorous about gas leak detection, and adoption is growing. In this blog series, I’d like to present a brief history of how detectors were placed in the past, discuss gas detection best practices, and then talk about where the industry is headed.
When gas detection instruments such as catalytic bead systems, point IR detectors, and electrochemical cells were first put into use in process industries, their placement was much more art than science. Industry relied on veteran instrumentation and control engineers and safety engineers to use their “experience” to place detectors. These experts applied rules of thumb such as finding points where gases would accumulate, identifying points where gases would be released, and comparing the density of the released gas with that of air to determine where detectors would be required. Unfortunately, these rules were often inconsistent among different experts and led to widely different designs for similar facilities. Furthermore, studies, including ones performed by the UK Health and Safety Executive (HSE), found that fixed gas detection systems detected fewer than 70% of the “major” gas releases that occurred in the process industries. This level of performance was deemed unacceptable, especially after a number of process industry accidents in which detection systems failed to identify problems.
The poor performance of the “expert judgment” or heuristic placement techniques resulted in the first wave of more quantitative, scientific analysis of detector placement. The next generation of detector placement is what we refer to as “the grid.” Loss prevention engineers, with a great deal of help from the UK HSE, decided that a good philosophy for the placement of gas detection equipment would be that if a gas cloud large enough to cause damage exists in a facility, it should be detectable by the gas detection array. HSE then undertook a program of study that determined this objective could be achieved if gas cloud diameters were limited to less than 7 meters. At that size, the distance a flame could travel through the gas cloud would be less than 7 meters, which experiments have shown will not result in a vapor cloud explosion in most typical release scenarios.
Once the critical distance across the cloud was determined, the objective became making sure that a cloud of that size could not hide between detectors without being identified.
The Dirty Bubble
Some of us practitioners affectionately call this cloud of critical size “The Dirty Bubble” – a reference to a super-villain in the SpongeBob SquarePants cartoon that most of our children watch (and usually, us with them…). In order to ensure that the dirty bubble would always be identified by at least one detector, HSE determined that detectors placed on a five-meter grid would be required. Subsequently, a lot of designs – at least for offshore oil and gas production – were based on a five-meter grid.
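The geometry behind the five-meter grid is easy to check. For point detectors on a square grid, the worst place for a cloud to be centered is the middle of a grid cell, where the nearest detector sits at spacing × √2⁄2 away. The short sketch below (a simplified 2-D idealization, not an HSE calculation) samples a grid cell to confirm that worst-case gap, which shows why a 5 m grid catches any cloud of roughly 7 m diameter:

```python
import math

def worst_case_gap(spacing: float, samples: int = 200) -> float:
    """Largest distance from any point in one grid cell to the nearest
    of the four detectors at the cell corners (2-D idealization)."""
    worst = 0.0
    for i in range(samples + 1):
        for j in range(samples + 1):
            x = spacing * i / samples
            y = spacing * j / samples
            # distance to the nearest corner detector
            d = min(math.hypot(x - cx, y - cy)
                    for cx in (0.0, spacing) for cy in (0.0, spacing))
            worst = max(worst, d)
    return worst

gap = worst_case_gap(5.0)
print(round(gap, 2))  # ≈ 3.54 m, i.e. 5 * sqrt(2) / 2
# Any cloud whose radius exceeds this gap must contain a detector, so a
# 5 m grid "sees" every cloud of diameter >= 5 * sqrt(2) ≈ 7.07 m.
```

The worst-case point is the cell center, about 3.54 m from each detector, which is why the 7-meter critical diameter and the 5-meter grid spacing go hand in hand.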
A few problems were identified with the five-meter grid process. First off, the five-meter grid only worked for point detectors, and a lot of operating companies were starting to use open-path detectors. Second, the five-meter grid process did not differentiate areas of plants where leak sources did not exist, and thus where leak detection would seem inappropriate. The grid system was also only applicable to combustible gases; applying this approach to toxic gases was not effective, as there is no “safe” toxic gas cloud size. And finally, even if the hazard of concern was combustible gas, the efficacy of the five-meter grid depends on a number of scenario-specific parameters, such as the reactivity of the hydrocarbon gas and the degree of confinement of the gas cloud. For highly reactive hydrocarbons – such as ethylene oxide – five meters allows too large a cloud, whereas methane in an open area would require a cloud much larger than five meters to be dangerous. The five-meter grid was a great starting point, but its limitations quickly became obvious, and solutions to those limitations were rapidly developed.
The next evolution in methodology for gas detector placement was the advent of what is now referred to as fire and gas mapping. This evolution focused on considering what fraction of an area is “covered” by a gas detector array. For gas detectors, this is essentially a function of the size of the “dirty bubble.” If the critical gas cloud size is five meters, then a cloud whose center point lies within five meters of a detector is “detectable.” Based on this theory, one could plot the areas surrounding a detector that are covered. The most primitive forms of this coverage analysis simply drew circles around point detectors on a plot plan drawing. As the technique matured, computer software emerged that would draw the coverage areas and calculate the fraction of area covered, while distinguishing areas covered by a single detector from areas covered by two or more detectors. The advantages of this approach over simple grid placement were quickly apparent, and it was rapidly and widely adopted. In particular, different critical cloud sizes could be defined for different hazard scenarios, and the total area to be covered could be limited to “graded areas” where hazards are known to exist – allowing non-hazardous areas to be excluded from the analysis. An example of a geographic coverage map created in the Kenexis Effigy™ FGS mapping software is shown in the figure below.
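The coverage calculation itself can be illustrated with a small sketch. The code below (a minimal illustration of the idea, not any vendor’s algorithm) samples a rectangular graded area on a fine raster and reports the fraction of it covered by at least one detector and by at least two detectors, with each point detector modeled as a circle of the critical cloud radius; the detector coordinates and area dimensions are made-up example values:

```python
import math

def coverage_fractions(detectors, radius, xmax, ymax, step=0.25):
    """Fractions of a rectangular graded area covered by >=1 and >=2
    point detectors, each modeled as a circle of the critical cloud
    radius. Simple raster-sampling sketch."""
    total = single = double = 0
    for i in range(int(xmax / step)):
        for j in range(int(ymax / step)):
            x, y = (i + 0.5) * step, (j + 0.5) * step
            # count how many detectors can "see" a cloud centered here
            hits = sum(1 for dx, dy in detectors
                       if math.hypot(x - dx, y - dy) <= radius)
            total += 1
            if hits >= 1:
                single += 1
            if hits >= 2:
                double += 1
    return single / total, double / total

# Example: four detectors on a 5 m grid in a 10 m x 10 m graded area,
# with a 3.5 m critical cloud radius (7 m diameter "dirty bubble").
dets = [(2.5, 2.5), (2.5, 7.5), (7.5, 2.5), (7.5, 7.5)]
one_ooN, two_ooN = coverage_fractions(dets, 3.5, 10.0, 10.0)
print(f"1ooN coverage: {one_ooN:.0%}, 2ooN coverage: {two_ooN:.0%}")
```

Mapping software does the same bookkeeping over real plot plans, typically color-coding single-detector and multi-detector coverage so that voting logic (e.g. two-out-of-N trip schemes) can be assessed alongside raw coverage.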
Gas Detection Coverage Map – Geographic Coverage – Kenexis Effigy™
At present, geographic coverage mapping is the most widely deployed of the rigorous methodologies for gas detector placement. But other, technically more robust methodologies are in rapid development and deployment. Our next blog post will address these next-generation techniques.