Shocking article on arXiv.org, presented at the 2019 Annual Meeting of the Association for Computational Linguistics. It reports that the algorithms used by "sophisticated" hate-speech detectors are not neutral: they are based on Standard American English. As a result, tweets by Black Americans are proportionally more likely to be flagged as abusive than those by their white counterparts, because the machine does not understand their usage of language. Classifiers trained on these systems tend to predict that tweets written in African-American English are abusive at substantially higher rates.