KG LEGAL \ INFO
BLOG

The new Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence issued in U.S. as a benchmark to future European AI Act?

Publication date: November 30, 2023

Cybersecurity aspects in the new Executive Order about AI

The race between US and UE

On 30 October 2023, the President of the United States issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The Executive Order establishes new standards for Artificial Intelligence safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more. The Executive Order builds on previous actions the President has taken, including work that led to voluntary commitments from 15 leading companies to drive safe, secure, and trustworthy development of AI.

The US has undoubtedly ‘overtaken’ the European Union and become a leader in the AI regulatory space. In April 2021, the European Commission issued a proposal for a regulatory framework on Artificial Intelligence, but it was the second half of 2024 that was anticipated as the earliest period in which the regulation could apply to operators that have standards ready and are conducting their first compliance assessments. A common final version of the act, agreed by the European Parliament, the Council of the European Union, and the European Commission in a so-called trilogue, is needed for the adoption of the European regulation. The trilogue is still going on, but this does not mean that the Union has lost its chance to create standards with which actors outside the Union would also have to reckon.

The US Executive Order is, however, a set of guidelines and a political declaration rather than a ‘hard’ law. While it is true that the institutions under the President’s authority are obliged to take the actions indicated in the document, in the case of Congress it is merely a call for legislative action. An Order does not have the force of a law and may not be sufficient to defend the legality of the administration’s actions in possible legal proceedings brought by a private company. Meanwhile, the EU Artificial Intelligence Regulation, will be directly applicable in all Member States (like, for example, GDPR). Moreover, it will take precedence over national laws. Furthermore, unlike an executive decree of the US President, the solutions adopted in the AI Act will be very permanent. The European Union therefore still has a chance to create the first comprehensive and effective act to regulate artificial intelligence.

Will the US regulation inspire the European Parliament when issuing the European regulation on AI?

There are no concrete answers to this question yet, but it does not change the fact that the US Executive Order will be helpful to the European Union.

The Executive Order demonstrates to the EU that it will not be alone in imposing restrictions on advanced AI systems. The Executive Order represents a move toward enforceable requirements from the U.S. government. The definitions and categories established in the Executive Order could be a helpful reference and the list of information the Executive Order requires from AI companies under the Defense Production Act could also be mirrored by the EU’s legislation. The Executive Order is not a model for EU action, and because it is not legislation, it is not the US version of the AI Act.

New standards for AI Safety and Security

It is worth looking at some regulations that are included in the US Order.

As the possibilities offered by artificial intelligence grow, so do the security implications of its use. A new Executive Order outlines steps to protect Americans from potential threats posed by artificial intelligence systems.

Regarding the aim to protect the use of AI systems, there is a requirement in the Executive Order that states that within 270 days of the date of the Order, the Secretary of Commerce, acting through the Director of the National Institute of Standards and Technology (NIST) should establish guidelines and best practices with the aim of promoting consensus industry standards, for developing and deploying safe, secure, and trustworthy AI systems, including, for example, developing a companion resource to the Secure Software Development Framework to incorporate secure development practices for generative AI and for dual-use foundation models.

The dual-use foundation model means an AI model that is trained on broad data, which uses self-supervision, contains at least tens of billions of parameters, is applicable across a wide range of contexts and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters, such as by:

  • substantially lowering the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons,
  • enabling powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyber attacks,
  • permitting the evasion of human control or oversight through means of deception or obfuscation.

Models meet this definition even if they are provided to end users with technical safeguards that attempt to prevent users from taking advantage of the relevant unsafe capabilities.

Sharing safety test results with US government

Another requirement is that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model with information, reports, or records regarding the following:

  • any ongoing or planned activities related to training, developing, or producing dual-use foundation models, including the physical and cybersecurity protections taken to assure the integrity of that training process against sophisticated threats,
  • the ownership and possession of the model weights of any dual-use foundation models, and the physical and cybersecurity measures taken to protect those model weights,
  • the results of any developed dual-use foundation model’s performance in relevant AI red-team testing based on guidance developed by NIST and a description of any associated measures the company has taken to meet safety objectives, such as mitigations to improve performance on these red-team tests and strengthen overall model security.

Also, the developers must share the results of all red-team safety tests. The term “AI red-teaming” means a structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI. Artificial Intelligence red-teaming is most often performed by dedicated “red teams” that adopt adversarial methods to identify flaws and vulnerabilities, such as harmful or discriminatory outputs from an AI system, unforeseen or undesirable system behaviors, limitations, or potential risks associated with the misuse of the system. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public.

Development of standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy

National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks.

The Secretary of Commerce, acting through the Director of the National Institute of Standards and Technology (NIST), in coordination with the Secretary of Energy, the Secretary of Homeland Security, and the heads of other relevant agencies as the Secretary of Commerce may deem appropriate, will establish guidelines and best practices, with the aim of promoting consensus industry standards, for developing and deploying safe, secure, and trustworthy AI systems, including:

  • developing a companion resource to the AI Risk Management Framework, NIST AI 100-1, for generative AI,
  • developing a companion resource to the Secure Software Development Framework to incorporate secure development practices for generative AI and for dual-use foundation models,
  • launching an initiative to create guidance and benchmarks for evaluating and auditing AI capabilities, with a focus on capabilities through which AI could cause harm, such as in the areas of cybersecurity and biosecurity.

Also, the NIST should establish appropriate guidelines (except for AI used as a component of a national security system), including appropriate procedures and processes, to enable developers of AI, especially of dual-use foundation models, to conduct AI red-teaming tests to enable deployment of safe, secure, and trustworthy systems. These efforts will include:

  • coordinating or developing guidelines related to assessing and managing the safety, security, and trustworthiness of dual-use foundation models,
  • in coordination with the Secretary of Energy and the Director of the National Science Foundation (NSF), developing and helping to ensure the availability of testing environments, such as testbeds, to support the development of safe, secure, and trustworthy AI technologies, as well as to support the design, development, and deployment of associated PETs (privacy-enhancing technologies).

Protection against the risks of using AI to engineer dangerous biological materials

The Executive Order also provides for the protection of using AI to engineer dangerous biological materials. The protection would involve developing strong new standards for biological synthesis screening. Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI.

To better understand and mitigate the risk of AI being misused to assist in the development or use of CBRN threats — with a particular focus on biological weapons — in the Executive Order cited actions that should be implemented in this direction. Within 180 days of the date of the Order, the Secretary of Homeland Security, in consultation with the Secretary of Energy and the Director of the Office of Science and Technology Policy (OSTP), shall evaluate the potential for AI to be misused to enable the development or production of CBRN threats, while also considering the benefits and application of AI to counter these threats. The Secretary of Homeland Security shall conduct consultations with AI and CBRN experts from the Department of Energy, private AI labs, academia, and external model evaluators to assess the ability of AI models to represent CBRN threats-exclusively to protect against those threats, as well as options to minimize the risk of AI models being misused to generate or exacerbate those threats and then submit a report to the President that describes the progress of there efforts.

The Executive Order also includes information on a framework that includes existing US government guidelines to encourage suppliers of synthetic nucleic acid sequences to implement comprehensive, scalable, and verifiable control mechanisms for ordering synthetic nucleic acids. Director of OSTP in this direction will:

  • Establish criteria and mechanisms for the ongoing identification of biological sequences that could be used in ways that pose a threat to U.S. national security,
  • Define methodologies and tools for conducting and verifying the results of procurement controls on sequence synthesis, including approaches to customer controls to promote due diligence in relation to the management of security risks posed by purchasers of biological sequences, and will define processes for reporting the actions of enforcers.

As for agencies that fund life sciences research, they should establish (where appropriate and in accordance with applicable law) that, as a requirement for funding, the sourcing of synthetic nucleic acids is done through suppliers or manufacturers that comply with the framework, for example, through a certificate from the supplier or manufacturer.

Protection from AI-enabled fraud and deception by establishing standards and best practises for detecting AI-generated content and authenticating official content

Another important aspect that was raised in the Executive Order is that standards will be set for the detection of content generated by artificial intelligence. The Secretary of Commerce is supposed to develop content authentication and watermarking guidelines to clearly mark content generated by artificial intelligence. Federal agencies will use these tools to help Americans make sure the communications they receive from the government are authentic – and set an example for the private sector and governments around the world.

Secretary of Commerce will submit to the Director of OMB (Office of Management and Budget) and the Assistant to the President for National Security Affairs identifying the existing standards, tools, methods, and practices, as well as the potential development of further science-backed standards and techniques, for

  • authenticating content and tracking its provenance,
  • labeling synthetic content, such as using watermarking,
  • detecting synthetic content,
  • preventing generative AI from producing child sexual abuse material or producing non-consensual intimate imagery of real individuals (to include intimate digital depictions of the body or body parts of an identifiable individual),
  • testing software used for the above purposes,
  • auditing and maintaining synthetic content.

In so doing, the administration would add governance over AI’s ability to generate massive volumes of deep fakes and persuasive disinformation rapidly.  

UP