Navigating Cyber Challenges: Biden's AI Executive Order, Ransomware Attack on German Municipalities

TAP 21 - 2023

President Biden Signs Executive Order to Enhance AI Safety and Security in the US

On October 30, 2023, President Biden issued an Executive Order (EO) [1] focusing on the safe, secure, and trustworthy development and use of Artificial Intelligence (AI). The EO charges multiple US agencies with producing guidelines and taking actions to advance that goal.

President Biden's Executive Order sets forth a comprehensive framework for AI, focusing on establishing new safety and security standards, enhancing privacy protections, and promoting equity and civil rights. It aims to safeguard consumers, patients, and students, support workers against AI disruption, and drive innovation and competition within the industry. The order reinforces America's global leadership in AI and mandates responsible adoption of AI technologies across government agencies. 

The order comprises directives for various agencies and organizations to conduct research or formulate more comprehensive guidelines. One obligation likely to have an immediate impact on the AI industry is a set of requirements for companies that develop, or intend to develop, dual-use foundation models. These businesses will be required to disclose their AI development plans to US authorities, along with the protective steps they have implemented, encompassing both digital and physical security, to safeguard their AI systems, as well as the outcomes of any safety evaluations conducted. The EO does not, however, specify the consequences for a company whose disclosed model turns out to be hazardous. Because the EO is not a law, it does not itself regulate AI; the US Congress is holding hearings with experts to craft legislation that would put guardrails around AI.

The EO is the beginning of a long international process to govern the use of AI. It complements international efforts through the G7 Hiroshima Process [2], which aims to mitigate the risks of AI while also harnessing its potential. The proposed code of conduct, which is voluntary, is poised to become a significant reference point for how major nations oversee AI against the backdrop of data privacy and security risks.

In the European Union, negotiations on an AI Act are underway with EU countries in the Council [3]. The aim is to reach an agreement by the end of this year.
