INTECH JULY/AUGUST 2018

PROCESS AUTOMATION

The data signal may be a direct measurement or a virtual estimate of either a measurement or a key performance metric, which are inputs to process automation (control, historian) or enterprise management systems. In many cases, some preprocessing (data filtering) is desired before the input is presented to a controller or used in a trend plot for process representation.

Smoothing data and eliminating outliers have several advantages. For example, reducing noise in the derivative portion of a controller could lead to greater use of the derivative term in proportional, integral, derivative (PID) control (and, therefore, improved control). Data filtering also reduces distracting features in trend plots. And less noise in a control loop's controlled variable can contribute to reduced variation in the controller output.

However, data filtering can also have negative consequences, such as hiding real problems occurring or developing in a process or its equipment. It can also present a skewed (i.e., invalid) view of the magnitude and duration of real spikes occurring in the process. And, in general, data filtering causes a delay or a lag that can interfere with control. The engineer must understand that the real process value and the measured or displayed value are not the same thing.

In some applications, little or no data filtering is recommended. This is often the case with signals sent to alarm algorithms or to data collection systems, and those presented on a human-machine interface for critical process parameters (i.e., those impacting product quality) in "current Good Manufacturing Practice" (cGMP) regulated processes. In such applications, it is important to monitor and record the true details of process excursions, not to reduce the perceived magnitude or extend the perceived duration of process spikes through filtering.
In some cases, there may be value in using and recording two forms of a process variable: one the raw data from an instrument (useful in analyzing the details of a process or system excursion), and the second a filtered version useful for PID control or for trend plots for presentation and publication.

Several different data filtering technologies are used in industrial automation systems. This article discusses some of them, including comments on pros and cons, and organizes them into the categories of noise removal, outlier removal, and stray signal removal.

Most control systems (distributed control systems, programmable logic controllers, or PCs) do not have a broad menu of data filtering options to choose from; some have only one or two that are commonly applied in the process industry. However, most systems have some form of calculation blocks available for users to program or configure as part of application software, so users can implement whatever data filtering algorithm is deemed appropriate.

Noise removal methods

The concept is that the process is holding a steady value in time and that the signal is corrupted by random fluctuations. The objective of the filter is to reveal the underlying value.

Moving average filter (MA): A moving average filter reports the conventional average of the data in a window:

x̄ = (1/N) Σ xᵢ, summed over i = 1 to N

in which i = 1 indicates the most recent data, and the average is over the past N data values. This is termed a window of the data. At each sampling, the window moves: the newest data enters, and the oldest leaves. The user needs to specify the window length, either as a time duration or as the number of data points in the window. The more data, the lower the variability is on average.
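As a sketch of the moving average filter described above, the window mechanics can be implemented in a few lines; the class name and window length here are illustrative, not from any particular control system's block library:

```python
from collections import deque


class MovingAverageFilter:
    """Moving average over a sliding window of the most recent N samples."""

    def __init__(self, n):
        # deque with maxlen=n drops the oldest sample automatically
        # when a new one arrives, implementing the moving window.
        self.window = deque(maxlen=n)

    def update(self, value):
        """Accept one new measurement and return the current window average."""
        self.window.append(value)
        return sum(self.window) / len(self.window)
```

A call to `update()` at each sampling instant returns the filtered value; until the window fills, the average is taken over however many samples have arrived.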
At a steady condition (the nominal value is not changing, but the measurements fluctuate randomly due to noise effects), the variability of the average is related to the variability of the individual data:

σ_average = σ_data / √N

The √N impact is true for any measure of variability, such as range. Note: Variation cannot be eliminated by averaging, just attenuated.

The user needs to choose the value of N. A larger N means less variation, but it also means that it will take longer for the average to move to the proximity of the new value when the data value changes.

Figure 1 reveals a process (data are the markers) that is initially at a noisy steady state at a nominal value of 2, then makes a step change to a new value of 5 at the 30th sampling. The data value is on the vertical axis, and the sample number is on the horizontal axis. A data spike, a spurious signal with a value of zero, occurs at sampling 120. The thin dashed line connects the data dots to help reveal the trend. Subsequent

FAST FORWARD
- Filtering is valuable to remove unwanted components or features from a data signal before the input is used in a process automation application.
- Data filtering must be used properly, or it can have negative consequences, including hiding real problems.
- This article discusses typical data filtering technologies used in industrial automation systems, exploring pros and cons in the categories of noise removal, outlier removal, and stray signal removal.
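The √N attenuation can be verified numerically. The following sketch (not from the article; the nominal value of 2 and the window length are chosen for illustration) generates noisy steady-state data and compares the scatter of the raw samples with the scatter of window averages:

```python
import random
import statistics

random.seed(0)       # fixed seed so the run is repeatable
sigma = 1.0          # standard deviation of the measurement noise
n = 16               # window length, so attenuation should be sqrt(16) = 4x

# Noisy measurements around a steady nominal value of 2.0
data = [2.0 + random.gauss(0.0, sigma) for _ in range(20000)]

# Averages over non-overlapping windows of length n
averages = [sum(data[i:i + n]) / n for i in range(0, len(data), n)]

print(statistics.stdev(data))      # close to sigma
print(statistics.stdev(averages))  # close to sigma / sqrt(n)
```

With these settings, the standard deviation of the averages comes out near sigma/4, illustrating that averaging attenuates, but does not eliminate, the variation.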
