Artificial intelligence (AI), particularly generative AI apps such as ChatGPT and Bard, has dominated the news cycle since these tools became widely available starting in November 2022. GPT (Generative Pre-trained Transformer) models are trained on large volumes of text data and are commonly used to generate text.
Undoubtedly impressive, gen AI has composed new songs, created images and drafted emails (and much more), all while raising legitimate ethical and practical concerns about how it could be used or misused. However, when you introduce the concept of gen AI into the operational technology (OT) space, it brings up significant questions about potential impacts, how to best test it and how it can be used effectively and safely.
Impact, testing, and reliability of AI in OT
In the OT world, operations are all about repetition and consistency. The goal is to have the same inputs and outputs so that you can predict the outcome of any situation. When something unpredictable occurs, there’s always a human operator behind the desk, ready to make decisions quickly based on the possible ramifications — particularly in critical infrastructure environments.
In information technology (IT), the consequences of a failure are often far less severe, such as lost data. In OT, by contrast, if an oil refinery ignites, there is the potential for loss of life, damage to the environment, significant liability and long-term harm to the brand. This underscores the importance of making quick, accurate decisions during a crisis, and it is ultimately why relying solely on AI or other tools is not well suited to OT operations: The consequences of an error are immense.
AI technologies rely on large volumes of data to inform decisions and to build the logic that produces appropriate answers. In OT, if AI makes the wrong call, the potential negative impacts are serious and wide-ranging, while liability remains an open question.
Microsoft, for one, has proposed a blueprint for the public governance of AI to address current and emerging issues through public policy, law and regulation, building on the AI Risk Management Framework recently launched by the U.S. National Institute of Standards and Technology (NIST). The blueprint calls for government-led AI safety frameworks and safety brakes for AI systems that control critical infrastructure as society seeks to determine how to appropriately control AI as new capabilities emerge.
Elevate red team and blue team exercises
The concepts of “red team” and “blue team” refer to different approaches to testing and improving the security of a system or network. The terms originated in military exercises and have since been adopted by the cybersecurity community.
To better secure OT systems, the red team and the blue team work collaboratively, but from different perspectives: The red team tries to find vulnerabilities, while the blue team focuses on defending against those vulnerabilities. The goal is to create a realistic scenario where the red team mimics real-world attackers, and the blue team responds and improves their defenses based on the insights gained from the exercise.
Cyber teams could use AI to simulate cyberattacks and test the ways a system could be both attacked and defended. Leveraging AI in a red team/blue team exercise would be incredibly helpful for closing the skills gap where skilled labor is scarce or budgets cannot cover expensive resources, or even for providing a new challenge to well-trained, fully staffed teams. AI could help identify attack vectors or highlight vulnerabilities that previous assessments missed.
This type of exercise highlights the various ways an attacker might compromise the control system or other prized assets. Additionally, AI could be used defensively to suggest ways to shut down an intrusive attack plan from a red team. This may shine a light on new ways to defend production systems, ultimately strengthening overall defense and informing response plans that protect critical infrastructure.
Potential for digital twins + AI
Many advanced organizations have already built a digital replica of their OT environment — for example, a virtual version of an oil refinery or power plant. These replicas are built on the company’s comprehensive data set to match their environment. In an isolated digital twin environment, which is controlled and enclosed, you could use AI to stress test or optimize different technologies.
This environment provides a safe way to see what would happen if you changed something, such as trying a new system or installing a different-sized pipe. A digital twin allows operators to test and validate technology before implementing it in a production operation. Using AI against your own environment and data, you could look for ways to increase throughput or minimize required downtime. On the cybersecurity side, the potential benefits are even greater.
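To make the idea concrete, here is a minimal sketch of how a program might search a digital twin's parameter space for higher throughput. The twin is reduced to a toy simulation function, and the knobs (`pipe_diameter_m`, `pump_speed_rpm`) and their safe ranges are hypothetical, invented for illustration rather than drawn from any real refinery model.

```python
# Illustrative sketch: testing candidate settings against a digital twin,
# never against production equipment. The twin here is a toy model; a real
# one would be a full physics-based simulation of the plant.

def twin_throughput(pipe_diameter_m: float, pump_speed_rpm: float) -> float:
    """Toy stand-in for one digital-twin simulation run.

    Returns simulated throughput, rejecting settings outside the
    (hypothetical) safe operating envelope.
    """
    if not (0.1 <= pipe_diameter_m <= 1.0) or not (500 <= pump_speed_rpm <= 3000):
        return 0.0  # outside the safe envelope: reject outright
    # Throughput grows with both knobs but saturates at high pump speeds.
    return pipe_diameter_m * pump_speed_rpm / (1 + pump_speed_rpm / 2000)

def grid_search() -> tuple[float, float, float]:
    """Exhaustively evaluate candidate settings in the twin."""
    best = (0.0, 0.0, 0.0)  # (throughput, diameter, speed)
    for d in [0.2, 0.4, 0.6, 0.8, 1.0]:
        for s in [500, 1000, 1500, 2000, 2500, 3000]:
            t = twin_throughput(d, s)
            if t > best[0]:
                best = (t, d, s)
    return best

best_throughput, best_diameter, best_speed = grid_search()
print(f"best: {best_throughput:.1f} at d={best_diameter} m, s={best_speed} rpm")
```

In practice the search would be far more sophisticated (Bayesian optimization, reinforcement learning), but the key property is the same: Every candidate is vetted inside the enclosed twin before anything touches the real plant.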
In a real-world production environment, however, there are incredibly large risks to providing access or control over something that can result in real-world impacts. At this point, it remains to be seen how much testing in the digital twin is sufficient before applying those changes in the real world.
The negative impacts if the test results are not completely accurate could include blackouts, severe environmental impacts or even worse outcomes, depending on the industry. For these reasons, the adoption of AI technology into the world of OT will likely be slow and cautious, providing time for long-term AI governance plans to take shape and risk management frameworks to be put in place.
Enhance SOC capabilities and minimize noise for operators
AI can also be used safely, away from production equipment and processes, to support the security and growth of OT businesses in a security operations center (SOC) environment. Organizations can leverage AI tools to act almost as an SOC analyst, reviewing for abnormalities and interpreting rule sets from various OT systems.
This again comes back to using emerging technologies to close the skills gap in OT and cybersecurity. AI tools could also minimize noise in alarm management or asset visibility tools by recommending actions, or review data against risk scoring and rule structures, freeing staff members to focus on the highest-priority, highest-impact tasks.
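The risk-scoring idea above can be sketched in a few lines. This is a hedged, illustrative example only: the alert fields, criticality scale and threshold are assumptions for the sketch, not the schema of any real SOC product.

```python
# Hypothetical sketch of alarm-noise reduction via risk scoring.
# A real SOC tool would draw criticality from its asset inventory and
# anomaly scores from its detection engines.

from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    asset_criticality: int  # 1 (low) .. 5 (safety-critical), assumed scale
    anomaly_score: float    # 0.0 .. 1.0 from a detection engine

def risk_score(alert: Alert) -> float:
    """Combine asset criticality and anomaly score into a single number."""
    return alert.asset_criticality * alert.anomaly_score

def triage(alerts: list[Alert], threshold: float = 2.0) -> list[Alert]:
    """Surface only high-risk alerts to the operator, highest risk first."""
    kept = [a for a in alerts if risk_score(a) >= threshold]
    return sorted(kept, key=risk_score, reverse=True)

alerts = [
    Alert("hmi-01", asset_criticality=5, anomaly_score=0.9),   # risk 4.5
    Alert("printer", asset_criticality=1, anomaly_score=0.8),  # risk 0.8, dropped
    Alert("plc-07", asset_criticality=4, anomaly_score=0.6),   # risk 2.4
]
for a in triage(alerts):
    print(a.source, risk_score(a))
```

Even a simple filter like this shows the payoff: The low-criticality noise never reaches the operator, while the safety-critical alert is always on top of the queue.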
What’s next for AI and OT?
Already, AI is quickly being adopted on the IT side, and that adoption may also affect OT as these two environments increasingly merge. An incident on the IT side can have OT implications, as Colonial Pipeline demonstrated when a ransomware attack resulted in a halt to pipeline operations. Increased use of AI in IT, therefore, may raise concerns for OT environments.
The first step is to put checks and balances in place for AI, limiting adoption to lower-impact areas to ensure that availability is not compromised. Organizations that have an OT lab must test AI extensively in an environment that is not connected to the broader internet.
Much as air-gapped systems block outside communication, we need closed AI built on internal data that remains protected and secure within the environment. That is how we can safely leverage the capabilities gen AI and other AI technologies offer without putting sensitive information, human beings or the broader environment at risk.
A taste of the future — today
The potential of AI to improve our systems, safety and efficiency is almost endless, but we need to prioritize safety and reliability throughout this transition. None of this is to say we aren't already seeing the benefits of AI and machine learning (ML) today.
So, while we need to be aware of the risks AI and ML present in the OT environment, as an industry, we must also do what we do every time there is a new technology type added to the equation: Learn how to safely leverage it for its benefits.
Matt Wiseman is senior product manager at OPSWAT.