|Photo by: Pixelkult via Pixabay|
Artificial intelligence (AI) engineers at Facebook have adopted self-supervised learning, a technology that helps the social network's systems adapt faster to challenges like detecting new forms of hate speech.
Unlike typical AI systems, which require large amounts of labeled training data to be effective, self-supervised learning needs far less, cutting the time required both to build a training set and to train a system.
In fact, self-supervised learning methods were able to reduce the amount of necessary training data by a factor of 10, said Facebook AI research leader Manohar Paluri at the company's F8 developer conference last week.
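The reason self-supervised learning needs so little labeled data is that it manufactures its own labels from raw, unlabeled data. The sketch below illustrates the general idea with a masked-word pretext task; the function and sentence are purely illustrative assumptions, not Facebook's actual code or API.

```python
# Minimal sketch of the self-supervised idea: labels come free from the data.
# (Illustrative only; not Facebook's actual implementation.)

def make_masked_examples(sentence):
    """Turn one unlabeled sentence into many (input, label) pairs:
    each word in turn is hidden and becomes the label to predict."""
    words = sentence.split()
    examples = []
    for i, target in enumerate(words):
        masked = words[:i] + ["[MASK]"] + words[i + 1:]
        examples.append((" ".join(masked), target))
    return examples

pairs = make_masked_examples("report this post for hate speech")
# A six-word sentence yields six supervised examples, none labeled by a human.
print(len(pairs))          # → 6
print(pairs[0])            # → ('[MASK] this post for hate speech', 'report')
```

Because every unlabeled sentence yields many such training pairs, a system can pretrain on huge raw corpora and then needs only a small labeled set for the final task, which is where the 10x reduction comes from.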
CNET says that speed is crucial to keeping Facebook a fun and safe platform and preventing it from becoming a cesspool of toxic comments, misinformation, abuse, and scams.
During the conference, Paluri said Facebook's AI tech is addressing many issues on the biggest social network in the world: hate speech, bullying, violence, child nudity, adult content, terrorist propaganda, and fake accounts.
In spite of this, the speakers, Paluri and Facebook Chief Technology Officer Mike Schroepfer, acknowledged that there is still much work to be done, especially in spotting problematic videos like the footage of the New Zealand mosque shootings that spread in March.
CNET adds that this progress does not come close to addressing the platform's privacy problems, an issue that Facebook Chief Executive Mark Zuckerberg said the company is trying to fix. At the conference, the executives tempered their usual high spirits with some remorse, suggesting they understand the company is not completely out of trouble yet.
While AI can certainly address issues such as debugging a company's software, as Facebook has done, it also creates new ones. One of these is AI bias, which can compound the disadvantages, or advantages, that some classes of people experience.