By Parv Jain

Secure & Clear: US Boosts AI Safeguards and Transparency

Artificial Intelligence

The White House announced on Thursday that it is setting new rules for how federal agencies use artificial intelligence (AI). By December 1, agencies must follow strict safeguards to make sure the technology is safe and does not violate people's rights. The rules come as the government plans to use AI for a wide range of tasks.

The Office of Management and Budget, which oversees the federal government's policies and spending, has told agencies to keep a close eye on how AI affects people. They must check for any unfairness caused by AI, explain to the public how they are using it, and make sure it is safe. That includes assessing the risks closely and deciding how to measure how well AI is being managed and used.

These rules are designed to protect the rights and safety of Americans. Agencies will have to put strong safety measures in place and tell the public clearly how and when they are using AI.

In addition, President Joe Biden signed an executive order in October. The order invokes the Defense Production Act to require companies developing AI systems that could pose risks to national security, the economy, health, or safety to share their safety test results with the U.S. government before releasing those systems to the public.

These steps are meant to ensure that AI is used responsibly and safely by both the government and companies, and to keep the public informed and protected.

The White House also said that travelers at airports will be able to decline facial recognition screening by the Transportation Security Administration (TSA) without being slowed down at security checks. And when the government uses AI in healthcare, especially to help with diagnoses, a person will always review the AI's work to make sure it is right.

There is a lot of talk about generative AI, a type of AI that can produce text, pictures, and videos from user prompts. While many people are excited about what generative AI can do, there are also worries: some fear it could cost people their jobs, interfere with elections, or even become powerful enough to cause serious harm to humanity.

The White House is also requiring agencies to be open about their use of AI. They must tell the public what AI projects they have, explain how they measure those projects' success, and share the government-owned AI programs, designs, and data, as long as sharing does not create any risks.

The government is already using AI for important tasks. The Federal Emergency Management Agency (FEMA) uses AI to assess how much damage hurricanes have done to buildings. The Centers for Disease Control and Prevention (CDC) uses AI to predict where diseases might spread next and to detect opioid use. The Federal Aviation Administration (FAA) uses AI to manage air traffic more efficiently around big cities, which helps make travel faster.

The White House also plans to hire 100 AI experts to help ensure the technology is used safely, and it has given all government agencies 60 days to designate a chief AI officer.

