In fact, 99 percent of apps with abusive content were identified and rejected before anyone could install them, he added. Google has developed new detection models and techniques that can identify repeat offenders and abusive developer networks at scale. Google also took down 100,000 bad developer accounts in 2017.
“This was possible through significant improvements in our ability to detect abuse — such as impersonation, inappropriate content, or malware — through new Machine Learning models and techniques,” the post read. “Copycat” apps attempt to deceive users by impersonating famous apps. They try to sneak into the Play Store through deceptive methods such as using confusable Unicode characters or hiding impersonating app icons in a different locale.
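To illustrate the confusable-Unicode trick described above, the sketch below is a minimal, hypothetical check (not Google's actual detection system) that flags app names mixing scripts — a common sign that a lookalike character has been swapped in:

```python
import unicodedata

def scripts(name: str) -> set[str]:
    # Collect the script of each letter, taken from the first word of its
    # Unicode character name (e.g. "LATIN", "CYRILLIC"). A legitimate app
    # name usually uses one script; a mix can signal impersonation.
    found = set()
    for ch in name:
        if ch.isalpha():
            found.add(unicodedata.name(ch).split()[0])
    return found

legit = "WhatsApp"
copycat = "Whats\u0410pp"  # Cyrillic capital A (U+0410) looks like Latin "A"

print(scripts(legit))    # {'LATIN'}
print(scripts(copycat))  # {'CYRILLIC', 'LATIN'} — mixed scripts, suspicious
```

Both names render almost identically on screen, which is exactly why this class of impersonation is hard for users to spot but tractable for automated checks.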
“In 2017, we took down more than a quarter of a million impersonating apps,” Google said. When it came to inappropriate content such as pornography, extreme violence, hate and illegal activities, Google’s improved machine learning models sifted through massive amounts of incoming app submissions and flagged them for potential violations.
“Tens of thousands of apps with inappropriate content were taken down last year as a result of such improved detection methods,” the company noted. Potentially Harmful Applications (PHAs) are a type of malware that can harm people or their devices – for example, apps that conduct SMS fraud, act as trojans, or phish for users’ information. “With the launch of Google Play Protect in 2017, the annual PHA install rate on Google Play was reduced by 50 percent year over year,” the post said.