Microsoft, Google and OpenAI, among the leaders in the artificial intelligence field in the United States, have agreed to place certain safeguards on their technology under pressure from the White House. The companies voluntarily committed to abide by a number of principles, an arrangement that will remain in place until Congress passes legislation regulating AI.
The Biden administration wants AI companies to develop their technology responsibly. Officials aim to ensure that tech companies can innovate in generative AI in ways that benefit society without undermining Americans' security, rights and democratic values.
In May, Vice President Kamala Harris met with the CEOs of OpenAI, Microsoft, Alphabet and Anthropic, telling them they have a responsibility to ensure the safety of their AI products. Last month, President Joe Biden met with leaders in the field to discuss AI issues.
The tech companies agreed to eight measures on safety, security and social responsibility. The commitments include:
- Allow independent experts to test models for unsafe behavior
- Invest in cybersecurity
- Encourage third parties to discover and report security vulnerabilities
- Flag societal risks such as bias and inappropriate use
- Prioritize research on the societal risks of AI
- Share trust and safety information with other companies and governments
- Watermark audio and visual content to make clear when content is AI-generated
- Use cutting-edge "frontier" AI systems to help address society's greatest challenges
"These commitments, which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI — safety, security, and trust — and mark a critical step toward developing responsible AI," the White House said in a statement. "As the pace of innovation continues to accelerate, the Biden-Harris administration will continue to remind these companies of their responsibilities and take decisive action to keep Americans safe."
That the agreement is voluntary highlights how difficult it has been for lawmakers to keep up with the pace of AI development. Several bills aimed at regulating AI have been introduced in Congress: one would prevent companies from using Section 230 protections to avoid liability for harmful AI-generated content, and another would require political advertisements to disclose any use of generative AI. Notably, House administrators have reportedly placed restrictions on the use of generative AI in congressional offices.
Update: 2023/07/21: This article has been updated to include a statement from the White House.