Facebook To Design Its Own Chips For Real-Time Content Filtering

Facebook wants to be able to filter content, including live video streams, in real time. The company said at a Paris technology conference that it plans to design its own machine learning (ML) processors to achieve that goal. Facebook has previously designed its own server architecture, motherboards, and communication chips for its data centers.

High-Performance Content Filtering

Facebook’s chief artificial intelligence scientist, Yann LeCun, said that the company would like to take down offensive videos, such as someone live-streaming a murder or a suicide, as they happen. However, that capability would require “a huge amount of compute power,” and it would also consume a great deal of energy.
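To make the scale concrete, here is a minimal, hypothetical sketch in Python of a per-stream moderation loop plus a back-of-envelope compute estimate. The frame rate, per-inference cost, and stream count are illustrative assumptions, not Facebook figures, and the classifier is a stub standing in for a real model.

```python
# A rough sketch (not Facebook's actual system) of why real-time video
# moderation is compute-hungry: every live stream must be sampled and run
# through a classifier continuously. All numbers below are assumptions.

from typing import Iterable

FRAMES_SAMPLED_PER_SECOND = 2      # assumption: sample 2 frames/sec per stream
GFLOPS_PER_INFERENCE = 4.0         # assumption: one CNN forward pass ~4 GFLOPs
CONCURRENT_STREAMS = 100_000       # assumption: live streams at any moment

def classify_frame(frame: bytes) -> float:
    """Stand-in for a model forward pass; returns a risk score in [0, 1]."""
    return 0.0  # a real classifier would run here

def moderate_stream(frames: Iterable[bytes], threshold: float = 0.9) -> bool:
    """Return True (take the stream down) once any frame exceeds the threshold."""
    for frame in frames:
        if classify_frame(frame) >= threshold:
            return True
    return False

# Back-of-envelope: sustained compute needed just for inference.
total_gflops = CONCURRENT_STREAMS * FRAMES_SAMPLED_PER_SECOND * GFLOPS_PER_INFERENCE
print(f"~{total_gflops / 1000:.0f} TFLOP/s sustained")  # ~800 TFLOP/s here
```

Even with these conservative assumptions, the sustained load lands in the hundreds of teraflops for inference alone, which is why LeCun points to energy-efficient custom silicon.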

At the Viva Technology conference, LeCun noted:

"There’s a huge drive to design chips that are more energy-efficient for that. A large number of companies are working on this, including Facebook.

"You’ve seen that trend from hardware companies like Intel, Samsung, Nvidia. But now you start seeing people lower in the pipeline of usage having their own needs and working on their own hardware."

Filtering content in real time may not sound like a bad idea when the company actually takes down violent videos or hate speech, but the same technology would also let it suppress legitimate speech or other non-violent content just as quickly. Few people would ever know that such content had been censored unless the person who posted it could raise awareness through other channels and expose Facebook's actions.

Software Companies Embrace ML Hardware

Google is another company that now builds ML hardware tailored to its own requirements. Google surprised many when it built its Tensor Processing Unit (TPU) in 2015 (and announced it in 2016) to run AI workloads such as AlphaGo on faster hardware. According to Google's own benchmarks, the chip delivered a three-generation leap in performance, with roughly an order of magnitude higher inference performance than Nvidia’s Tesla K80 GPUs.

In 2017, Google launched the even more powerful TPU 2.0, which added support for training in addition to inference. The company recently announced TPU 3.0, too, which promises another large jump in performance.

It’s not just large online services focusing on ML hardware, either. Smartphone makers, for instance, have increasingly integrated dedicated ML processors into their devices, promising much better efficiency for tasks such as improving photo quality, managing battery life, and organizing photos with on-device intelligence.

This trend shows no signs of slowing down, so we should expect even more companies to adopt ML accelerators in their data centers and consumer products soon.
