InTech

SEP-OCT 2018

operators to CFOs, capital planning analysts, buyers, and others. We are also seeing old assets from the '70s and '80s being wired into the main corporate networks for the first time, thanks to more robust, less expensive wireless technologies and data management strategies.

Golden rule for growth

Information gets more valuable the more people consume it, so we should always look beyond the benefits of a single project. Building too quickly, however, can also cause projects to collapse. The primary rule for balancing ambitious goals with real-world constraints on budget and time is this: control your expenditures by the rate of implementation, not by your goals.

Most projects I have seen that fail to return large benefits fall into two categories: either the projects were more complex than anticipated, requiring extensive on-site remedial work, or the scope of the projects was too small, requiring outsized amounts of time, energy, and money. Both are failures that must be addressed at the architectural stage, not during implementation.

In the first case, the job was not adequately scoped (it needed more than data alone could accomplish) and required expensive customization to meet user expectations. The second case was a failure to follow the rule above by attacking a problem that was too small to carry the proper benefits. There is a mistaken belief that a smaller goal involves less risk. That is incorrect. Small projects can contain as much technological risk as larger ones. With the proper goals, average engineering procedures will eventually succeed, but with the wrong goals, no amount of brilliance will succeed. The ideal situation is to create a system that can meet immediate needs while leaving headroom for the future.

To build a system that can scale, first look at the underlying models. In industrial facilities, there are three types of asset models: the physical model, the process model, and the product model. We first observe that historically scalable systems are all based on a physical model, i.e., here is a sensor, record its output with as high a fidelity as you can, and keep the data for all time. (This is the basic design principle of supervisory control and data acquisition, programmable logic controllers, distributed control systems, and other automation equipment. The applications that use today's information infrastructure are nearly 100 percent software based, but this does not mean that they will be of small value. After all, the Apple iPhone is mostly software, as are Uber, Airbnb, Lime, and others.)

By contrast, manufacturing execution systems are based on the process model and, while quite valuable in some cases, have inherent issues of scale and customization when you are trying to integrate metadata into the overall system. The same is true of product life-cycle management systems, which are based on the product model. Both of these latter cases can provide immense value.

Once the architecture for scaling has been decided, there are other metadata models that need to be defined, e.g., digital twins, to arrange, aggregate, cleanse, and view or analyze the data in such a way that it makes sense to people. A reliability application will pull from the same overall source as process views or other applications, but the calculations, individual data sources, and cleansing techniques can and will be different.
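As a rough illustration of the physical-model principle described above, here is a minimal Python sketch (the class and field names are hypothetical, not any vendor's API): every sensor reading is stored at full fidelity, keyed only by its physical tag, and all aggregation or cleansing is left to the applications that later read it.

```python
# A minimal sketch of the "physical model" principle: store every sensor
# reading at full fidelity, keyed only by the physical tag that produced it.
# Names (SensorReading, TagArchive) are illustrative, not a real product API.
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List


@dataclass(frozen=True)
class SensorReading:
    tag: str                # physical sensor identifier, e.g., "FIC-101.PV"
    timestamp: datetime     # when the value was measured
    value: float            # raw value, no rounding or averaging
    quality: str = "good"   # quality flag as reported by the source system


class TagArchive:
    """Append-only store: readings are kept for all time, never overwritten."""

    def __init__(self) -> None:
        self._data: Dict[str, List[SensorReading]] = {}

    def append(self, reading: SensorReading) -> None:
        self._data.setdefault(reading.tag, []).append(reading)

    def history(self, tag: str, start: datetime, end: datetime) -> List[SensorReading]:
        """Return raw history for one tag; consumers decide how to aggregate."""
        return [r for r in self._data.get(tag, []) if start <= r.timestamp <= end]
```

Because the archive makes no assumptions about how the data will be consumed, a reliability application and a process view can both call history() on the same tags and apply their own calculations and cleansing, which is the behavior described above for digital twins.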
The solution is to separate the process of data management from the applications that use the data and develop a clean, open interface between them. Once you have reliably built a data management infrastructure that can truly scale and support data shaping, reliability, security, and privacy, you have a massive data repository.

As users expand to include other applications, such as supply chain and enterprise optimization, the scope of data projects often extends to include suppliers, vendors, and customers. Resistance to upgrades will frequently emerge. It is often rooted in the perceived reliability (or unreliability) of large software systems, which in turn creates a perception that there will be increased costs or added security complexities. This sounds like an impossible task until you go back to the original premise: to create an architecture that will scale, you need to control your expenditures by the rate of implementation, not by scope. Done properly, addressing the proper scope should not cost significantly more money.

How does this architecture and infrastructure approach map to the four requirements of Industry 4.0?

Interoperability

At the lowest layer is streaming data management designed for the scope of the project. Streaming is characterized by extremely high data rates and new information arriving unsolicited. Currently, there are systems that operate at millions of new events per second, and this tendency will only increase as we dig deeper into the fidelity of the data and accommodate additional smart sensors and equipment. As noted above, take care to design information collection without regard to its use. For example, a comment from an operations user that all he needs is 15-minute averages would create a data management system that would not be useful to automation or maintenance.

Figure 1. Welcome to my time machine: Dofasco main site, 2002
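To make the "clean, open interface" idea and the 15-minute-average example from the streaming discussion concrete, here is a hedged Python sketch (the HistoryReader protocol and function names are assumptions, not a standard): applications depend only on a raw-history read contract, and the operations user's 15-minute averages become one on-demand computation rather than a limit on what is collected.

```python
# Sketch of a clean, open read interface between the data management layer
# and the applications that consume it. Interface and function names are
# illustrative only; any historian that can serve raw history would fit.
from datetime import datetime, timedelta
from typing import Dict, Iterable, List, Protocol, Tuple

Reading = Tuple[datetime, float]  # (timestamp, raw value)


class HistoryReader(Protocol):
    """The only contract applications rely on: raw history by tag and time range."""

    def history(self, tag: str, start: datetime, end: datetime) -> Iterable[Reading]:
        ...


def fifteen_minute_averages(reader: HistoryReader, tag: str,
                            start: datetime, end: datetime) -> List[Reading]:
    """An operations view: 15-minute averages computed on demand from raw data.

    Averaging happens here, at read time, so the same full-fidelity stream
    still serves automation, maintenance, and reliability applications.
    """
    buckets: Dict[datetime, List[float]] = {}
    for ts, value in reader.history(tag, start, end):
        bucket = start + timedelta(minutes=15 * int((ts - start).total_seconds() // 900))
        buckets.setdefault(bucket, []).append(value)
    return [(b, sum(vals) / len(vals)) for b, vals in sorted(buckets.items())]
```

Any other consumer, such as an automation or maintenance application, uses the same interface against the same full-fidelity data, so adding a new view does not require changing the data management layer.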
