The US Department of Homeland Security (DHS) has issued guidelines detailing secure methods for developing and implementing artificial intelligence (AI) within critical infrastructure. The recommendations are directed at every participant in the AI supply chain, from cloud and compute providers to AI developers to owners and operators of critical infrastructure, with additional suggestions tailored for civil society groups and public-sector entities.
The voluntary guidelines, outlined in the “Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure,” examine each role across five key areas: securing environments, driving responsible model and system design, implementing data governance, ensuring safe and secure deployment, and monitoring performance and impact. The framework also offers technical recommendations and process guidance aimed at making AI systems safer, more secure, and more trustworthy.
According to a statement from DHS, AI is already employed for resilience and risk mitigation across a range of industries, with applications including earthquake detection, power grid stabilization, and mail sorting.
The framework looks at each role’s responsibilities:
- Cloud and compute infrastructure providers should vet their hardware and software supply chains, enforce strong access management, and protect the physical security of the data centers that support AI systems. The framework also advises them to help downstream customers by monitoring for anomalous activity and establishing clear processes for reporting suspicious or harmful behavior.
- AI developers should adopt a secure-by-design approach, evaluate models for potentially dangerous capabilities, and ensure alignment with human-centric values. The framework further advises developers to implement strong privacy practices; conduct evaluations that probe for possible biases, failure modes, and vulnerabilities; and support independent assessments of models that pose heightened risks to critical infrastructure systems and their users.
- Critical infrastructure owners and operators should deploy AI systems securely by maintaining strong cybersecurity practices that account for AI-related risks, protecting customer data when fine-tuning AI products, and providing meaningful transparency about their use of AI to deliver goods, services, or benefits to the public.
- Civil society, which encompasses universities, research institutions, and consumer advocates focused on AI safety and security, should continue collaborating with government and industry on standards development and on research into AI evaluations that consider critical infrastructure use cases.
- Government bodies at the federal, state, local, tribal, and territorial levels should advance AI safety and security standards through statutory and regulatory action.
“The widespread adoption of this framework will significantly enhance the safety and security of vital services, such as clean water supply, reliable power, Internet access, and others,” stated DHS Secretary Alejandro N. Mayorkas in a release.
The DHS framework proposes a model in which responsibility for the safe and secure use of AI in critical infrastructure is both shared and distinct across roles. It also draws on existing risk frameworks to help entities evaluate whether deploying AI for particular systems or applications carries significant risks that could cause harm.
“We consider the framework to be, quite frankly, a living document that will evolve alongside industry developments,” Mayorkas said during a media call.