Fabula AI’s researchers have developed a way to make sense of the deluge of information shared on social networks and to detect network manipulation.
The move comes as tech giants face mounting pressure to tackle disinformation and other online harms. A recent UK Government white paper proposed that such harms fall under the watch of a regulator with the power to issue substantial fines and make individual senior managers criminally liable for breaches.
Fabula AI will work under Twitter engineer Sandeep Pandey, “to focus on a few key strategic areas such as natural language processing, reinforcement learning, ML ethics, recommendation systems, and graph deep learning,” chief technology officer Parag Agrawal announced.
“We are really excited to join the ML (machine learning) research team at Twitter, and work together to grow their team and capabilities,” said Michael Bronstein, Fabula AI co-founder.
“Specifically, we are looking forward to applying our graph deep learning techniques to improving the health of the conversation across the service.”
According to Fabula AI’s LinkedIn account, its model “delivers accurate and unbiased authenticity scores for any piece of news, in any language”.
Initially, the team will focus on Twitter’s long-term goal of improving the health of the conversation, with plans to expand into areas such as spam and abuse in the future.
Mr Bronstein currently holds the Chair in Machine Learning and Pattern Recognition at Imperial College London, a position he will retain while leading graph deep learning research at Twitter.