Facebook's Parent Company, Meta, Makes Big Changes to Its Fake Media Rules
Before the upcoming U.S. elections, Meta, the company that owns Facebook, has announced important updates to its rules about fake or changed videos, pictures, and sounds. These changes are meant to help control misleading content made with new artificial intelligence (AI) technology. Starting in May, Meta will put "Made with AI" tags on videos, images, and audio made with AI that are shared on Facebook and its other platforms. This broadens their old rules, which only covered certain types of altered videos. Monika Bickert, who is in charge of content policy at Meta, shared this news in a blog post.
Meta Adds Stronger Warnings to Highly Misleading Altered Media
Monika Bickert from Meta has announced that the company will start using clear and strong warning labels on fake or edited media that could seriously mislead people about important issues. This will apply whether the content was made with AI or other methods. Meta's new plan changes how they deal with such manipulated content. Instead of just taking down a few posts, they will now keep the content online but will tell viewers that it has been altered or is fake. This way, people can know when a video or image has been changed in a way that might try to trick them.
Meta Plans to Spot AI-Made Images with Special Markers
Meta announced earlier that it is working on a way to find out which images were created by AI tools from other companies. They plan to use special markers that can't be seen but are included in the image files. However, they didn't say when this would start. A spokesperson said that Meta's new plan to put labels on content will be used for things shared on Facebook, Instagram, and Threads. Other Meta services like WhatsApp and the Quest virtual reality headsets will follow their own set of rules.
Meta Starts Using Clear Warnings on Risky Fake Media Right Away
Meta will start using clear "high-risk" warning labels on fake media that could mislead people. This update is happening as we get closer to the U.S. presidential election in November. Technology experts are worried that new AI tools could change how this election works. Political groups have started using AI in places like Indonesia, testing the rules set by companies like Meta and OpenAI, the leading name in AI technology.
Meta's Rules on Fake Media Called Unclear by Its Oversight Board
Meta's oversight board criticized the company's rules on fake videos and images, calling them unclear. This happened after a review of a video on Facebook showing U.S. President Joe Biden in a way that suggested he did something wrong, but it wasn't true. The video, which changed real footage, was allowed to stay on Facebook. Meta's current rules only remove fake videos made by AI or ones that show people saying things they didn't say. The board suggested that these rules should also cover videos and sounds that weren't made by AI but could still mislead people, including videos that show someone doing something they never did.