by Lee Ju Young, Senior Account Manager, South Korea, Emerson Automation Solutions
Most of us know that conductivity is an excellent way to detect the interface between a non-conductive liquid, such as a hydrocarbon, and a conductive aqueous solution. Even more impressive, however, is hearing how this vital analysis is saving time and money for real companies. Here’s a great example.
Hanwha Total Petrochemical is headquartered in Seoul, South Korea, but operates a large petrochemical complex, consisting of 13 separate plants, at Daesan in South Korea’s Chungnam Province. The company manufactures building-block chemicals that go into a host of other chemicals needed for various consumer products. Production starts with a naphtha cracker, which yields propylene and ethylene, the raw materials for many polymers such as polypropylene and polyethylene.
The naphtha is kept in storage tanks before use. During storage, water accumulates and sinks to the bottom of the tank. Because water interferes with the cracking process, it must be periodically drained. Conductivity is ideal for monitoring the drain: the water has a conductivity between 650 and 1000 µS/cm, while the naphtha has essentially none. As the water drains, the conductivity reading is high. When the water/naphtha interface reaches the sensor, the non-conductive naphtha causes the conductivity to drop, and when naphtha alone is present the reading is practically zero. Thus, by stopping the drain at the first sign of a conductivity drop, operators are assured that only water has been drained, with minimal loss of naphtha.
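The stop-on-conductivity-drop logic above can be sketched in a few lines. This is an illustrative sketch only; the 500 µS/cm trip threshold and the reading sequence are assumptions for the example, not part of any Rosemount product.

```python
# Hypothetical sketch of the drain-valve logic described above.
# The 500 uS/cm threshold is an illustrative assumption, chosen to sit
# below the 650-1000 uS/cm band the water actually measures.

INTERFACE_THRESHOLD = 500.0  # uS/cm; a drop below this signals naphtha

def drain_control(readings):
    """Return the index at which the drain valve should close.

    `readings` is a time-ordered sequence of conductivity values (uS/cm).
    Draining continues while the sensor sees conductive water; at the
    first reading below the threshold, the naphtha interface has arrived
    and the valve should close.
    """
    for i, conductivity in enumerate(readings):
        if conductivity < INTERFACE_THRESHOLD:
            return i  # close the valve here
    return None  # water still draining; keep the valve open

# Example: water drains normally, then the interface appears
samples = [980.0, 920.0, 870.0, 710.0, 240.0, 5.0]
print(drain_control(samples))  # -> 4
```

In practice this decision lives in the analyzer and DCS rather than in application code, but the sketch shows why a single conductivity threshold is enough to automate what used to be a visual check.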
Prior to Hanwha Total Petrochemical’s decision to use the conductivity analyzer, draining water from the tank was a manual process requiring substantial human intervention. One person was positioned at the control valve at the tank outlet to watch the water drain and visually confirm that only water was flowing out. If naphtha was observed, the person called the control room to have the DCS close the valve, minimizing the loss of naphtha.
The simple addition of a Rosemount 1066 conductivity analyzer and sensors has significantly reduced demands on the plant staff’s time and, even more significantly, has dramatically reduced the loss of costly naphtha from the tank. In addition, naphtha in wastewater increases the load on wastewater treatment and makes it more difficult to comply with environmental regulations, possibly leading to fines.
Conductivity analysis is one of the most widely used liquid measurements, and for good reason: the simple addition of instrumentation can significantly improve process efficiency, quality, and reliability.
by Sara Wiederoder, Product Manager, Rosemount Combustion Products, Emerson
Hello. My name is Sara Wiederoder and I’m your Analytic Expert for today. I’d like to share an interesting and innovative waste-to-energy application that uses a number of Emerson products, including the Rosemount OCX8800. The company in this application is Sustainable Waste Power Systems (SWPS), and they’re building a garbage in/power out (GIPO) system, or more specifically, a two-stage, wet, thermal conversion of wet carbon-based waste into synthesis fuel gas (SynFuel). In the system, a devolatilization stage reduces the wet feedstock into a bio-char and light volatiles. A large pressure drop between the stages causes fluidization and pulverization of the flow. Gasification completes the fuel synthesis through the water-gas shift reaction, and cooling of the hot oil and process systems provides thermal power to the customer.
One of the key challenges of the system is to provide stable burner and air control. In general, the concentration of excess oxygen is one of the best indicators of how efficient a combustion process is. Industry discovered early on that running without excess oxygen in the flue gas (a sign of too much fuel) leaves unburned fuel in the process and creates a real risk of a boiler explosion, so excess air/oxygen is required. Adding too much air/oxygen, however, cools down the combustion process, which is undesirable since combustion is being used to produce thermal energy: the more you cool it, the less heat you can get out of it. Typically, combustion processes are controlled between 2-5% excess oxygen. The actual value varies with the type of fuel. Gaseous fuels combust very efficiently and quickly, so less excess oxygen is required for optimal efficiency. Solid fuels don’t combust as well, so adding extra oxygen ensures the solid particles are fully combusted. Having a continuous measurement of oxygen, and feeding this data into a control system that automatically adjusts air flow to the combustion burner, keeps the combustion process operating at optimal efficiency.
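The trim idea above, too little oxygen means adding air and too much means cutting it, can be sketched as a simple proportional correction. The 3% setpoint and the gain are illustrative assumptions; a real burner management system adds rate limits, interlocks, and full PID tuning.

```python
# Minimal sketch of excess-oxygen trim control. The 3% O2 setpoint sits
# inside the typical 2-5% band quoted above; the gain is illustrative.

O2_SETPOINT = 3.0  # % excess oxygen (assumed setpoint for this sketch)
KP = 0.5           # proportional gain (illustrative)

def trim_air_flow(current_air_flow, measured_o2):
    """Nudge air flow toward the O2 setpoint.

    Too little O2 -> unburned-fuel risk -> add air.
    Too much O2  -> combustion cooled  -> reduce air.
    """
    error = O2_SETPOINT - measured_o2
    return current_air_flow + KP * error

print(trim_air_flow(100.0, 1.5))  # O2 low: air raised to 100.75
print(trim_air_flow(100.0, 6.0))  # O2 high: air cut to 98.5
```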
The Rosemount OCX8800 Combustion Flue Gas Transmitter provides a continuous, accurate measurement of not only the oxygen, but also the combustibles remaining in flue gases from a combustion process. The renowned zirconium oxide sensor is the basis for the oxygen measurement. This, combined with the combustibles sensor, detects oxygen and combustibles concentrations in flue gases with temperatures up to 2,600°F (1,427°C).
For this application, complete and efficient combustion within a very tight set point was crucial. So in addition to oxygen, carbon monoxide (CO) measurements give greater insight into the current condition of a combustion process. CO is an indicator of un-combusted fuel: when a process is short on oxygen, CO levels rise sharply, while right around the point of greatest efficiency it is typical to see only trace amounts of CO, around 200 ppm. Using both CO and oxygen measurements gives the user greater control over the combustion process.
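Those rules of thumb can be combined into a rough diagnostic. The thresholds below are illustrative assumptions loosely drawn from the figures above, not published limits for any product.

```python
# Rough combustion diagnostic combining excess O2 and CO, per the rules
# of thumb in the text. All thresholds are illustrative assumptions.

def combustion_state(o2_pct, co_ppm):
    """Classify a combustion process from excess O2 (%) and CO (ppm)."""
    if o2_pct < 2.0 and co_ppm > 1000:
        return "fuel-rich: increase air"
    if o2_pct > 5.0:
        return "air-rich: reduce air"
    if co_ppm <= 400:
        return "near optimal"  # trace CO (~200 ppm) is expected here
    return "high CO at normal O2: check burner"

print(combustion_state(3.0, 200))  # -> near optimal
```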
The solution in the SWPS application included:
Ultimately, the commercial scalability of the GIPO was proven including:
While your application may not resemble this unique GIPO system, talk to our analytic experts about achieving efficient combustion in your demanding applications.
By Dave Joseph, Emerson
Conductivity measurement is one of the most ubiquitous in industry, so the question of how to correctly calibrate a toroidal conductivity sensor comes up for a lot of plant personnel. Because the purpose of the measurement is to infer the total concentration of ions in solution from the conductivity, the effect of the sensor’s dimensions and windings must be accounted for. The correction factor is the cell constant: conductivity is equal to the measured conductance multiplied by the cell constant.
There are two basic ways to calibrate a toroidal sensor: against a standard solution, or against a referee meter and sensor. A referee meter and sensor is an instrument that has been previously calibrated and is known to be accurate and reliable. The referee instrument can be used to perform either an in-process or a grab sample calibration. In-process calibration involves connecting the process and referee sensors in series and measuring the conductivity of the process liquid simultaneously. Grab sample calibration involves taking a sample of the process liquid and measuring its conductivity in the laboratory or shop using the referee instrument. No matter which calibration method is used, the analyzer automatically calculates the cell constant once the known conductivity is entered.
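The cell-constant arithmetic the analyzer performs can be sketched as follows. The 1413 µS/cm figure is the well-known conductivity of a 0.01 M KCl standard at 25 °C; the conductance readings are illustrative.

```python
# Sketch of the cell-constant math described above: the analyzer does
# this automatically once the known conductivity is entered.

def cell_constant(known_conductivity, measured_conductance):
    """conductivity = conductance * cell_constant, so
    cell_constant = known_conductivity / measured_conductance."""
    return known_conductivity / measured_conductance

def conductivity(measured_conductance, cell_const):
    """Apply the stored cell constant to a raw conductance reading."""
    return measured_conductance * cell_const

# Calibration: a 1413 uS/cm standard produces 470 uS of conductance,
# so the cell constant is about 3.0 /cm (readings are illustrative).
k = cell_constant(1413.0, 470.0)

# In service: convert raw conductance to conductivity
print(round(conductivity(250.0, k), 1))  # -> 751.6 uS/cm
```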
The cell constant is also influenced by the nearness of the vessel walls to the sensor, the so-called wall effect. Conductive and non-conductive walls have opposite effects. Metal walls increase the amount of induced current, which leads to an apparent increase in conductance and a corresponding decrease in cell constant. Plastic walls have the opposite effect.
Calibration against a standard solution requires removing the sensor from the process piping. It is practical only if wall effects are absent or the sensor can be calibrated in a container identical to the process piping. The latter requirement ensures that wall effects during calibration, which are incorporated into the cell constant, will be exactly the same as the wall effects when the sensor is in service.
Calibration against a referee – in-process – involves connecting the process and referee sensors in series and allowing the process liquid to flow through both. The process sensor is calibrated by adjusting the process analyzer reading to match the conductivity measured by the referee instrument.
Calibration against a referee – grab sample – is useful when calibration against a standard is impractical or when in-process calibration is not feasible because the sample is hot, corrosive, or dirty, making handling the waste stream from the referee sensor difficult.
If you click HERE, you’ll find a very useful white paper that walks you through the issues related to calibration of a toroidal conductivity sensor and explores each method in some depth. I hope this is useful in simplifying your conductivity measurements.
Please let me know if I can answer any questions. Thanks for stopping by.
By J. Patrick Tiongson, Product Manager, Emerson
At first glance, calibrating conductivity sensors may seem straightforward; however, many times, this is not the case. I’d like to share with you some of the various methods for calibrating contacting conductivity sensors and outline some of the potential issues that can accompany such procedures.
There are three main methods for calibrating contacting conductivity sensors: 1) with a standard solution, 2) directly in process against a calibrated referee instrument and sensor, and 3) by grab sample analysis. Understanding the fundamentals of each method, and the issues associated with each, can help you decide how best to calibrate a sensor.
Calibration with a standard solution consists of adjusting the transmitter reading to match the value of a solution of known conductivity at a specified temperature. This method is best when dealing with process conductivities greater than 100 µS/cm. Use of standard solutions less than 100 µS/cm can be problematic as you run into the issue of contaminating the standard with atmospheric carbon dioxide, thereby changing the actual conductivity value of the standard. For maximum accuracy, a calibrated thermometer should be used to measure the temperature of the standard solution.
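To see why the thermometer matters, here is a hedged sketch of a linear temperature correction for the standard's labeled value. The roughly 2%-per-°C coefficient is a common approximation for aqueous solutions, not a universal constant, and real analyzers may apply more sophisticated compensation.

```python
# Illustrative temperature correction of a conductivity standard.
# Standards are labeled at 25 C; many aqueous solutions change
# conductivity by roughly 2% per degree C (a common approximation).

TEMP_COEFF = 0.02  # fractional change per degree C (assumed, typical)

def standard_value_at_temp(value_at_25c, temp_c):
    """Linearly correct a standard's labeled conductivity to temp_c."""
    return value_at_25c * (1 + TEMP_COEFF * (temp_c - 25.0))

# A 1413 uS/cm (at 25 C) standard measured at 22 C actually reads
# about 6% lower, so calibrating without the correction would skew
# the transmitter by the same amount.
print(round(standard_value_at_temp(1413.0, 22.0), 1))  # -> 1328.2
```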
If dealing with low conductivity applications (less than 100 µS/cm), in-process calibration against a referee instrument is the preferred method. This method requires adjusting the transmitter reading to match the conductivity value read by a referee sensor and transmitter. The referee instrument should be installed close to the sensor being calibrated to ensure you are getting a representative sample. Best practices for attaining a representative sample across both the process and referee sensors include using short tubing runs between sensors and increasing sample flow. Though usually not as accurate as calibrating against a standard solution, this method eliminates the need to remove your sensor from the process and removes any risk of contamination from atmospheric carbon dioxide.
The least preferred method of calibrating a contacting conductivity sensor is by analyzing a grab sample. This method entails collecting a sample from process and measuring its conductivity in a lab or shop using a referee instrument. Calibration via grab sample analysis is only ever recommended if calibration with a standard solution is not possible or if there are issues with installing a referee instrument directly in process. The collected grab sample is subject to contamination from atmospheric carbon dioxide and is also subject to changes in temperature when being transported. In other words, there are many potential sources for error in the final calibration results.
More information regarding calibrating conductivity sensors can be found HERE.
By Randy Young and Pete Anson
My day started with a big cup of coffee in hand and an eye on my email. I was reading through the messages and prioritizing accordingly when one email caught my eye. It was from Jaime in the Netherlands. He worked for a water bottling company and was looking to replace their current Rosemount Analytical equipment (a 4-wire analyzer, model 1055-01-11-26) with one of our newer models. He had read about the newer models while browsing one of our technical blogs at www.AnalyticExpert.com.
I started by asking him questions regarding their current setup: the model number of the analyzer and the sensor; his power requirements; as well as the type and number of measurements. Upon receiving his fast reply, I immediately began working on it. I didn’t have much personal experience with his equipment since it was one of the older models, but I have the best resources. After a quick discussion with the product manager, Pete Anson, I determined the features and options of the model Jaime had been using and discovered that the most direct replacement for Jaime’s old model is the improved 1056 four-wire analyzer. But that raised some questions Jaime hadn’t known to ask.
You see, while the 1056 delivers extremely high performance for a “general use” instrument, there could be circumstances under which the higher-end 56 advanced dual-input analyzer is the better and more cost-effective option. The reason is that the 56 offers capabilities that can reduce costs for the customer in other areas of the plant. By rethinking the way some essential functions are performed, plant managers like Jaime can turn their liquid analyzer into a sophisticated, multi-purpose plant tool.
Many plants require the use of a data or event logger and/or a data historian to provide an audit trail for fulfillment of regulatory requirements or to meet internal reporting policies. I asked Jaime and discovered that his plant does require reporting. A standalone data logger can cost from $200 to $1,000, plus installation. The 56 analyzer, however, has a built-in data logger that can capture measurement data from both the process and the instrument – a dozen or more live values – from two channels every 30 seconds for 30 days. Jaime was pretty excited about this capability when I described it since he would get the reporting at no additional cost.
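A quick back-of-the-envelope check of those logging figures, assuming exactly one record per channel every 30 seconds and a dozen values per record (the actual record layout is the instrument's, not shown here):

```python
# Back-of-the-envelope check of the quoted logging capacity: two
# channels, roughly a dozen live values per record, one record every
# 30 seconds, retained for 30 days.

SECONDS_PER_DAY = 24 * 3600

records_per_channel = 30 * SECONDS_PER_DAY // 30  # 30 days at one per 30 s
total_records = 2 * records_per_channel           # two channels
total_values = total_records * 12                 # ~12 values per record

print(records_per_channel)  # 86400 records per channel
print(total_values)         # about 2.07 million logged values
```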
He also liked the idea of the two input channels. I explained to him that the channels can record not only more than one liquid parameter, such as pH and conductivity or ozone, but also flow, which has to be reported regularly. Using the 56 for this function can save the cost of additional analyzers. Since his outfall points are often on the periphery of the plant, I explained to Jaime that he could use the 56 with wireless to transmit flow data from those points, saving him a ton of personnel and maintenance time.
I even dangled the possibility of using the 56 as a control device for certain functions. The 56 has the traditional water treatment functions and control, including on/off control, on/off control with delay (to allow time for mixing), and an interval timer for sensor cleaning, but there is a lot more control capability in it. It can not only run standard PID and TPC (duty cycle) control on one or all of its four analog outputs and relays, but can also power any two-wire transmitter, receive its signal, and apply PID or TPC control to that measurement, whether it is pressure, temperature, or another process variable. The 56 can serve as a single-station controller.
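As a sketch of what single-station PID control involves: the gains, the pH-dosing scenario, and the class interface below are illustrative assumptions, not the 56's actual configuration.

```python
# Minimal discrete PID sketch of the kind of single-station control
# described above. Gains and the pH scenario are illustrative.

class PID:
    def __init__(self, kp, ki, kd, setpoint, dt=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint, self.dt = setpoint, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement):
        """Return the control output for one sample period."""
        error = self.setpoint - measurement
        self.integral += error * self.dt
        derivative = (
            0.0 if self.prev_error is None
            else (error - self.prev_error) / self.dt
        )
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# e.g. holding pH at 7.0 by dosing reagent: a low reading produces a
# positive output, which would drive the dosing valve open further.
loop = PID(kp=1.2, ki=0.1, kd=0.05, setpoint=7.0)
output = loop.update(6.4)
print(output > 0)  # True: pH is low, so dose more base
```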
After our email discussion, Jaime considered the many high-end features of the 1056 versus the potential savings the 56 could represent both now and in the future. Wisely, I think in his case, he opted for the 56 since it gave him a huge jump in flexibility over his older Emerson analyzer.
A great solution-oriented conversation. Consider it solved. Okay, who’s next? Booyah!