[Facebook] enters markets with wide-eyed innocence and a mission to “build [and monetise] communities”, but ends up tripping over democracies and landing in a pile of ethnic cleansing. Oopsie!
In mid-2014, after false rumours online about a Muslim man raping a Buddhist woman triggered deadly riots in the city of Mandalay, the Myanmar government requested a crisis meeting with Facebook. Facebook said that government representatives should send an email when they saw examples of dangerous false news and the company would review them.
It took until April this year – four years later – for Mark Zuckerberg to tell Congress that Facebook would step up its efforts to block hate messages in Myanmar, saying “we need to ramp up our effort there dramatically”.
...
“Their AI can’t detect real hate speech and rumours. Mostly it detects just the words like ‘Ma Ba Tha’ [a Buddhist monk-led nationalist group] and ‘Buddha religious’ or something like that,” said Myat Thu from Burma Monitor, a not-for-profit. “It’s not the tracing root cause. Only Burmese content reviewers can know the local context.”
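To make the gap Myat Thu describes concrete, here is a toy sketch of what pure keyword filtering looks like: it flags any post that merely mentions a blacklisted phrase, and misses incitement worded without one. Everything here (the keyword list, the function, the example posts) is a made-up illustration, not Facebook's actual system.

```python
# Hypothetical illustration of keyword-only "hate speech detection".
FLAGGED_KEYWORDS = {"ma ba tha", "buddha religious"}  # placeholder terms

def naive_filter(post: str) -> bool:
    """Flag a post if it merely contains a blacklisted phrase."""
    text = post.lower()
    return any(keyword in text for keyword in FLAGGED_KEYWORDS)

# A news report *about* the group gets flagged...
print(naive_filter("Ma Ba Tha held a press conference today"))  # True
# ...while an incitement that avoids the exact phrases sails through.
print(naive_filter("Drive them out of our village tonight"))    # False
```

That false-positive/false-negative pair is exactly why "only Burmese content reviewers can know the local context": matching strings is not the same as understanding what a post is doing.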
Facebook's reluctance to hire human reviewers here stands in sharp contrast to Facebook M, where the company wanted everyone to believe an amazing AI was answering people's questions when the work was actually being done by poorly paid human workers.