Cyber Expert’s Opinion about President Biden’s Executive Order on AI Security

Before delving into the opinions of top cybersecurity experts, it is essential to understand what prompted the government to take such a step. Artificial intelligence has become integral to innovation across sectors including healthcare, finance, manufacturing, and transportation. However, the deployment of AI has not been without challenges, particularly around privacy, security, and ethics.

Are you interested in seeing the full video? Check out Delan, Oscar, and Kelly’s conversation.

According to a report by Accenture, between 2019 and 2020, security breaches resulting from AI increased by 460%. Moreover, AI-generated deepfakes have become prevalent, distorting public discourse and deepening public mistrust of news sources. It is against this backdrop that President Biden’s executive order on AI security was issued.

The security risks posed by unintentional (or intentional) errors in AI systems are significant. President Biden’s executive order on “Safe, Secure, and Trustworthy AI” outlines the steps organizations must take to secure these systems. In this blog post, we delve into the order and share comments and insights from top cybersecurity experts to better understand what it entails.

Understanding President Biden’s Executive Order on AI Security

The executive order issued by President Biden defines principles for the responsible development and use of AI by agencies within the federal government. It also tasks the National Institute of Standards and Technology (NIST) with producing standards and guidelines for reliable, robust, and trustworthy AI.

The document calls on agencies to prioritize safety, security, privacy, and transparency at every stage of an AI system’s lifecycle, governance, and management. AI systems must be developed, tested, and operated in ways that minimize safety, security, and privacy risks and keep their performance explainable and understandable.

Further, the executive order aims to improve the protection of AI systems from misuse, intrusions, and vulnerabilities. It emphasizes the importance of transparency, accountability, and collaboration when designing and implementing AI systems. President Biden stated that the “goal is to ensure that AI can be trusted and relied upon to be deployed effectively and ethically.”

Key goals of the order include addressing security and safety considerations, promoting innovation, protecting privacy, and advancing American leadership.

To achieve these objectives, the order establishes a White House Artificial Intelligence Council to coordinate AI policy across the federal government. It also requires federal agencies to identify existing and potential areas where AI can benefit them and to explain how they plan to implement AI securely. They must further ensure that any data used to train AI algorithms is adequately anonymized and safeguarded against unauthorized access or disclosure.

Diverse Perspectives on the Executive Order

Experts’ opinions on the executive order issued by President Biden have been shared across various platforms since the announcement in October 2023.

CISO of Tanium, Chris Hodson, applauded the executive order for setting the right tone and noted that AI’s successful deployment must address the challenges that affect AI’s trustworthiness.

Other leading cybersecurity experts expressed support for the executive order’s initiatives and called for future involvement. Mohammad Aburakia, Head of Security Operations at Prodaft, emphasized that AI security goes beyond technology alone. “It’s essential to produce reliable and accountable methods that will safeguard the safety and protection of people,” he said. “As advanced and exciting as AI is, it needs to have a human-centric approach.”

Ryan LaSalle, Global Managing Director for Accenture’s Applied Intelligence, highlighted concerns related to the ethical application of AI. He noted that the executive order appeared to put more emphasis on security and safety and less on ethical and moral questions about AI deployment. However, he acknowledged that the order was a crucial step towards responsible AI, and encouraged the government to deepen the conversation around AI’s ethical and moral use.

“The most recent AI Executive Order demonstrates the Biden administration wants to get ahead of this very disruptive technology for its use in the public sector and desires to protect the private sector by requiring all major technology players with widespread AI implementations to perform adversarial ML testing. The order also mandates NIST to define AI testing requirements, which is critical because no one can yet say with confidence that we, as a tech industry, exhaustively know all the ways these new AI implementations can be abused.”

Maia Hamin, an associate director with the Atlantic Council’s Cyber Statecraft Initiative under the Digital Forensic Research Lab, stated, “I admire the White House’s goal to build AI to detect and fix software vulnerabilities, but I’ll be curious to see how they think about managing risks that could arise from powerful AI systems custom-built to hunt for vulnerabilities. I also hope they’ll tie new tools into existing efforts to ‘shift the burden of responsibility’ in cyberspace to ensure AI vulnerability finders create secure-by-design software rather than endless patches.”

Paul Brucciani, a cybersecurity advisor at WithSecure, an information security company, said that proving their products are safe will be very difficult for AI vendors – both in the US and the European Union, which is pushing forward its AI Act.

“That is hard to do. It is much easier to prove to you that my gleaming Mercedes can accelerate from 0-60 miles per hour in less than 4 seconds than it is to prove that its anti-skid control system is safe. Proving negatives is hard,” said Brucciani.

Brad Smith, vice chair and president of Microsoft, said the executive order is “another critical step forward in the governance of AI technology.”

“This order builds on the White House Voluntary Commitments for safe, secure, and trustworthy AI and complements international efforts through the G7 Hiroshima Process,” Smith added. “AI promises to lower costs and improve services for the Federal government, and we look forward to working with U.S. officials to fully realize the power and promise of this emerging technology.”

Sreekanth Menon, Global AI/ML Services Leader at Genpact, commented, “The Biden administration’s wide-ranging executive order is a move to streamline the development and dissemination of AI systems, including but not limited to healthcare, human services, and dual usage foundation models. The executive order balances optimism about the potential of AI with considerations of risk, privacy, and safety from using such systems if unmonitored. The executive order stresses the need for existing agencies and bodies to come together and provides a directive for these organizations to formulate cohesive tools to understand AI systems better and create oversight.”

Jeff Williams, co-founder and CTO of Contrast Security, was impressed that the White House reacted to the risks posed by AI relatively quickly. But he questions how we can determine which AI systems pose a serious risk to national security and public health and safety.

“Even an AI used to create social media posts will have incalculable effects on our elections. Almost any AI could flood a critical agency with requests that are indistinguishable from real human requests. They could be realistic voicemail messages or videos of system damage that aren’t real. The opportunities to undermine national security are endless,” said Williams.

Newton H. Campbell, a nonresident fellow at the Atlantic Council’s Digital Forensic Research Lab and director of space programs at the Australian Remote Operations for Space and Earth Consortium, commented: “Today’s executive order from Biden on a safe, secure, and trustworthy artificial intelligence is quite aggressive and will likely encounter some hurdles and court challenges. Nonetheless, direction was needed from the executive branch. The order is necessary to strike a balance between AI innovation and responsible use in the federal government, where new AI models, applications, and safeguards are constantly being developed. It emphasizes safety, privacy, equity, and consumer protection, which are essential for building trust in AI technologies. I see the emphasis on privacy-preserving technologies and the focus on establishing new international frameworks as a positive step for global AI governance.”

“Anytime the president of the United States issues an executive order, government organizations and private industry will respond. This executive order signals a prioritization of artificial intelligence by the executive branch, which will most certainly translate into new programs and employment opportunities for those with relevant expertise,” Darren Guccione, CEO and co-founder at Keeper Security, recently told Dice.

“A.I. has already had a significant impact on cybersecurity, for cyber defenders, who are finding new applications for cybersecurity solutions as well as cyber criminals who can harness the power of A.I. to create more believable phishing attacks, develop malware and increase the number of attacks they launch,” Guccione added.

Google’s president of global affairs, Kent Walker, said the company looks “forward to engaging constructively with government agencies to maximize AI’s potential—including by making government services better, faster, and more secure.”

Julie Owono, Executive Director of Internet Sans Frontières, member of the Meta Oversight Board, and affiliate at the Berkman Klein Center for Internet and Society at Harvard University, highlighted the shift in regulatory posture: “It’s as if the Biden administration is telling tech companies: ‘We’ve heard you, we’ll regulate you.’ This E.O. kicks off a new era in the long history of the Internet: one where regulators don’t seem to be catching up with tech. We seem to be moving away from the ‘go fast and break things’ attitude which existed through the social media era.”

“President Biden’s EO underscores a robust commitment to safety, cybersecurity, and rigorous testing,” said Casey Ellis, founder and CTO of Bugcrowd. “The directive mandates developers to share safety test results with the U.S. government, ensuring AI systems are extensively vetted before public release. It also highlights the importance of AI in bolstering cybersecurity, particularly in detecting AI-enabled fraud and enhancing software and network security. The order also champions the development of standards, tools, and tests for AI’s safety and security.”

Marcus Fowler, CEO of Darktrace Federal, stated that he and his team believe the industry can’t achieve AI safety without cybersecurity. “Security needs to be by-design, embedded across every step of an AI system’s creation and deployment,” said Fowler. “That means taking action on data security, control and trust. It’s promising to see some specific actions in the EO that start to address these challenges.”

Tom Quaadman, executive vice president of the U.S. Chamber of Commerce’s Technology Engagement Center, mentioned the intensity of the U.S. competition with China over AI development and that it’s “unclear which nation will emerge as the global leader, raising significant security concerns for the United States and its allies.”

“It is imperative for the United States to lead the effort to create a risk-based AI regulatory and policy framework that is reinforced by industry standards and promotes the safe and responsible development and use of this transformational technology,” Quaadman stated. “The Biden Administration’s AI Executive Order is a step towards achieving that goal, but more work needs to be done.”

It is worthwhile to add Elon Musk’s perspective on AI. Reporting on the Bletchley Park Summit on AI, The Daily Mail (UK) wrote, “Elon Musk warns AI poses ‘one of the biggest threats to humanity’ at Bletchley Park summit.” The billionaire tech entrepreneur’s fears were echoed by delegates from around the world at the UK’s AI Safety Summit at Bletchley Park in Buckinghamshire.

Musk said that government must act as a referee and rule-making body: we need to establish a “framework for insight” and develop fair rules that everyone should play by. Later, he said that AI will eventually create a situation where “no job is needed.”

Speaking in a conversation with U.K. Prime Minister Rishi Sunak, Musk said that AI will have the potential to become the “most disruptive force in history.”

The Road Ahead: Securing AI for a Safer Future

Overall, these expert statements offer differing opinions on how to approach AI security and its associated risks. The executive order’s plan for securing AI, its focus on transparency and accountability, and aspirations for innovation seem to be warmly received by industry experts.

It is essential to note that while AI has great potential, there are also risks associated with its use. Misuse, intentional or otherwise, can endanger organizations and individuals alike. Emerging technology and evolving threats make it critical to establish common guidelines for safe, secure, and trustworthy AI.

President Biden’s executive order outlines essential steps in this direction, and industry experts have provided valuable insights and opinions on this critical challenge. The result of these efforts should be AI that empowers rather than endangers people. We look forward to seeing the administration lead the nation to a brighter AI future.

Begin your journey towards a more secure AI environment by starting a free trial with MergeBase today.