The US just released ten principles that it hopes will make AI safer

The White House has released ten principles for government agencies to adhere to when proposing new AI regulations for the private sector. The move is the latest development in the American AI Initiative, launched via executive order by President Trump early last year to create a national strategy for AI research and development. It is also part of an ongoing effort to maintain US leadership in artificial intelligence.

The principles, released by the White House Office of Science and Technology Policy (OSTP), have three main goals: to ensure public engagement, limit regulatory overreach, and, most importantly, promote trustworthy—meaning fair, transparent, and safe—AI. The principles are intentionally broad, said Lynne Parker, US deputy CTO, during a press briefing, so that each agency can create more specific regulations tailored to its sector.

In practice, federal agencies will now be required to submit a memorandum to OSTP when proposing AI-related regulation, explaining how the proposal satisfies the principles. Though the office doesn’t have the authority to veto regulation, the procedure could still provide the pressure and coordination needed to uphold a consistent standard.

“OSTP is attempting to create a regulatory sieve,” says R. David Edelman, the director of the Project on Technology, the Economy, and National Security at MIT. “A process like this seems like a very reasonable attempt to build some quality control into our AI policy.”

The principles (with my translation) are:

  1. Public trust in AI. The government must promote reliable, robust, and trustworthy AI applications.
  2. Public participation. The public should have a chance to provide feedback in all stages of the rulemaking process.
  3. Scientific integrity and information quality. Policy decisions should be based on science. 
  4. Risk assessment and management. Agencies should decide which risks are and aren’t acceptable.
  5. Benefits and costs. Agencies should weigh the societal impacts of all proposed regulations.
  6. Flexibility. Any approach should be able to adapt to rapid changes and updates to AI applications.
  7. Fairness and non-discrimination. Agencies should make sure AI systems don’t discriminate illegally.
  8. Disclosure and transparency. The public will only trust AI if it knows when and how it is being used.
  9. Safety and security. Agencies should keep all data used by AI systems safe and secure.
  10. Interagency coordination. Agencies should talk to one another to be consistent and predictable in AI-related policies.

The newly proposed plan marks a remarkable U-turn from the White House’s stance less than two years ago, when officials in the Trump administration said there was no intention of creating a national AI strategy. At the time, the administration argued that minimizing government interference was the best way for the technology to flourish.

But as more governments around the world, especially China, have invested heavily in the technology, the US has felt significant pressure to follow suit. During the press briefing, administration officials offered a new rationale for the government’s increased role in AI development.

“The US AI regulatory principles provide official guidance and reduce uncertainty for innovators about how their own government is approaching the regulation of artificial intelligence technologies,” said US CTO Michael Kratsios. This will further spur innovation, he added, allowing the US to shape the future of the technology globally and counter influences from authoritarian regimes.

There are a number of ways this could play out. Done well, the process would encourage agencies to hire more personnel with technical expertise, create definitions and standards for trustworthy AI, and lead to more thoughtful regulation overall. Done poorly, it could incentivize agencies to skirt the requirements, or add bureaucracy that slows the passage of regulations needed to ensure trustworthy AI.

Edelman is optimistic. “The fact that the White House pointed to trustworthy AI as a goal is very important,” he says. “It sends an important message to the agencies.”
