“These platforms have created a new and stunningly effective way for nefarious actors to attack and to harm,” said Senator Bill Nelson. The companies’ current efforts to remove content, and to cooperate with one another in doing so, are strong but “not enough,” he said. YouTube’s algorithms now automatically remove 98 percent of videos promoting violent extremism, said Public Policy Director Juniper Downs. But Senator John Thune, chairman of the Commerce Committee, asked Downs why a video showing how to build the bomb used in the May 2017 Manchester Arena attack has been re-uploaded every time YouTube deletes it, as recently as this month. “We are catching re-uploads of this video quickly and removing it as soon as those uploads are detected,” said Downs.
Carlos Monje, Director of Public Policy and Philanthropy for Twitter, said that even with all their efforts to fight terror- and hate-related content, “It is a cat-and-mouse game and we are constantly evolving to face the challenge.” “Social media companies continue to get beat in part because they rely too heavily on technologists and technical detection to catch bad actors,” said Clint Watts, an expert at the Foreign Policy Research Institute on terror groups’ use of the internet. “Artificial intelligence and machine learning will greatly assist in cleaning up nefarious activity, but will for the near future fail to detect that which hasn’t been seen before.” Last year Google, Facebook, Twitter and Microsoft banded together to share information on groups and posts related to violent extremism, to help keep such content off their sites.