On Thursday, the company publicly released an artificial intelligence tool, called Perspective, that scans online content and scores how 'toxic' it is, based on ratings from thousands of people.
For example, you can feed an online comment board into Perspective and see the percentage of users who said it was toxic. The toxicity score can help people decide whether they want to participate in the conversation, said Jared Cohen, president of Jigsaw, the company's think tank (previously called Google Ideas). Publishers of news sites can also use the tool to monitor their comment boards, he said.
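As a rough illustration of the workflow the article describes, the sketch below shows how a publisher's software might ask a Perspective-style scoring service to rate one comment and turn the result into a percentage. The endpoint URL, attribute name, and request/response shapes here are assumptions for illustration only, not details confirmed by the article.

```python
import json

# Assumed endpoint for a Perspective-style comment-scoring service.
ANALYZE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_toxicity_request(comment_text):
    """Build a JSON payload asking the service to score one comment.

    The payload shape (a 'comment' object plus 'requestedAttributes')
    is a hypothetical sketch of such an API, not a confirmed schema.
    """
    return {
        "comment": {"text": comment_text},
        # Request only a toxicity score; the service is assumed to
        # return a value between 0 and 1.
        "requestedAttributes": {"TOXICITY": {}},
    }

def toxicity_percent(api_response):
    """Pull the summary score out of an (assumed) response and
    express it as a whole-number percentage, as in the article."""
    score = api_response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
    return round(score * 100)

# A mock response shaped like the one assumed above, so the helper
# can be exercised without making a network call.
mock_response = {
    "attributeScores": {"TOXICITY": {"summaryScore": {"value": 0.82}}}
}
print(json.dumps(build_toxicity_request("example comment")))
print(toxicity_percent(mock_response))  # prints 82
```

In this sketch, scoring a comment board would mean calling `build_toxicity_request` for each comment, POSTing the payload to the service, and aggregating the percentages, which is how a news site could monitor its boards as Cohen describes.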
People can also feed specific words and phrases into Perspective to check how they've been rated. A quick scan of some very ugly words yielded counterintuitive results: The n-word was rated as 82 percent toxic; c---, a term for women's genitalia, was 77 percent toxic; k---, a derogatory word for a Jewish person, was 39 percent toxic; and c----, a slur for a Chinese person, was 32 percent toxic. If you add the phrase 'you are a' to any of those words, the toxicity score goes up.
Cohen emphasized that Perspective was a work in progress and would only improve if people contributed to it.

Source: https://www.washingtonpost.com/news/...=.b2f9ce101252