
Bias Highlighter

In my book, Cause-Oriented Efficiency, I briefly describe one possible tool we could use to error-correct for bias.

Submissions (to a future COE platform) will always carry the potential for bias. This is because it is not within our capabilities as individuals with subjective experiences to identify our own biases perfectly. At the very least, then, it is necessary to give the greater population a tool that can point out or eliminate biases. That is why, for the COE, there must always be a mechanism for pointing out biases, from the well disguised to the blatant.

An example would be a flagging system, or some other way to report bias within the documents that are submitted. This should cover all of the following document types: Causes, Solutions, Cases, and any other document whose submission requires an objective point of view.

Other features, such as discussion threads, comments, or other conversational mediums within the platform, do not require bias highlighting as a mechanism. Consider the COE as a platform. When any user is reading a Cause, Solution, or Case, they can use their cursor to highlight biased text and leave a comment about that text. Other users can vote on the bias highlight or report it to be removed.

P.23, Cause-Oriented Efficiency, Copyright 2017
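To make the mechanism above concrete, here is a minimal sketch in TypeScript of what a single bias highlight might store. Every name in it is my own illustration, not something specified in the book:

```ts
// Hypothetical data model for one bias highlight; all names are illustrative.
interface BiasHighlight {
  id: string;
  documentId: string;      // the Cause, Solution, or Case being annotated
  startOffset: number;     // character range the user selected
  endOffset: number;
  highlightedText: string; // snapshot of the selected text
  comment: string;         // the user's explanation of the suspected bias
  authorId: string;
  upvotes: number;         // other users voting on the highlight
  downvotes: number;
  reports: number;         // reports requesting removal of the highlight
}
```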

I published this idea in 2017, but few people are aware of its potential beyond its use within my hypothetical platform.

Made more accessible to the general public, especially during times like the COVID-19 pandemic, this tool could help cut through some of the noise.

First and foremost, this idea should be open source, so there is full transparency in how the tool is built and in how it is meant to be used, including rules for handling abuse.

The most effective way I could think of to build the tool is as a browser extension. I know this doesn't address mobile readers, but it is a start.
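As a rough illustration of the extension approach, here is a minimal content-script sketch in TypeScript. It assumes standard WebExtension packaging (manifest omitted), and the submission step is a stand-in for a real backend:

```ts
// Content-script sketch: when the reader finishes selecting text,
// offer to flag the selection as potentially biased.
document.addEventListener("mouseup", () => {
  const selection = window.getSelection();
  if (!selection || selection.isCollapsed) return; // nothing selected

  const text = selection.toString().trim();
  if (text.length === 0) return;

  // A real extension would open a small annotation UI;
  // a prompt() stands in for it here.
  const comment = window.prompt(`Explain the suspected bias in:\n"${text}"`);
  if (comment) {
    // A real extension would POST this to the shared, open-source
    // backend for others to vote on; console.log stands in.
    console.log("Would submit:", { url: location.href, text, comment });
  }
});
```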

It would be important to have a system for verifying users. One suggestion is verification with a driver's license or passport number. Another is logging in with Facebook. A third is a distributed trust network, in which only verified users can invite someone to become a user. If the invited user abuses the system, both the person who invited them and the invited person are blocked from using the bias highlighter.
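The invite-and-block rule of the distributed trust option is simple enough to sketch. Here is a minimal version, assuming an in-memory user registry; the data structures are my assumptions, not a specification:

```ts
// Sketch of the distributed-trust rule: every user records who invited
// them, and abuse blocks both ends of that invitation edge.
interface User {
  id: string;
  invitedBy: string | null; // null for founding/seed users
  blocked: boolean;
}

const users = new Map<string, User>();

function invite(inviterId: string, newUserId: string): void {
  const inviter = users.get(inviterId);
  if (!inviter || inviter.blocked) {
    throw new Error("Only verified, unblocked users can invite.");
  }
  users.set(newUserId, { id: newUserId, invitedBy: inviterId, blocked: false });
}

function reportAbuse(abuserId: string): void {
  const abuser = users.get(abuserId);
  if (!abuser) return;
  abuser.blocked = true; // block the abuser...
  if (abuser.invitedBy) {
    const inviter = users.get(abuser.invitedBy);
    if (inviter) inviter.blocked = true; // ...and whoever vouched for them
  }
}
```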

An example of a bias highlight would be on the Georgia Department of Health's website, which showed inaccurate data about COVID-19 (Source).

The highlighter would have to capture the HTML surrounding the selected text and data. The user who found the potential bias would then comment, explaining that the numbers are not being reported in a clear or accurate way.
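One way this could work, purely as an assumption on my part, is to anchor each highlight with the selected text, a little surrounding context, and a rough CSS path to the containing element, so the highlight can be re-located on the page later:

```ts
// Sketch of anchoring a highlight to the page's HTML. The anchoring
// strategy here is an assumption; robust re-anchoring is a harder problem.
interface HighlightAnchor {
  url: string;
  cssPath: string;   // selector for the element containing the selection
  exactText: string; // the selected text itself
  prefix: string;    // a little context before the selection...
  suffix: string;    // ...and after, to disambiguate repeated phrases
}

function cssPathFor(el: Element): string {
  // Walk up the tree building a simple (not guaranteed-unique) selector.
  const parts: string[] = [];
  let node: Element | null = el;
  while (node && node.tagName.toLowerCase() !== "html") {
    parts.unshift(node.id ? `#${node.id}` : node.tagName.toLowerCase());
    node = node.parentElement;
  }
  return parts.join(" > ");
}

function anchorSelection(selection: Selection): HighlightAnchor | null {
  if (selection.rangeCount === 0) return null;
  const range = selection.getRangeAt(0);
  const container = range.commonAncestorContainer;
  const element =
    container instanceof Element ? container : container.parentElement;
  if (!element) return null;

  const full = element.textContent ?? "";
  const exactText = selection.toString();
  const idx = full.indexOf(exactText);
  return {
    url: location.href,
    cssPath: cssPathFor(element),
    exactText,
    prefix: idx > 0 ? full.slice(Math.max(0, idx - 30), idx) : "",
    suffix:
      idx >= 0
        ? full.slice(idx + exactText.length, idx + exactText.length + 30)
        : "",
  };
}
```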

People would then decide whether they agree with the comment. Enough attention brought to the error could prompt the publication to respond.
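If the design needs a rule for when agreement counts as "enough attention," one possibility is a minimum-participation threshold plus a clear majority. The numbers below are arbitrary placeholders, not something from this post:

```ts
// Treat a highlight as "community-confirmed" once it has enough votes
// and a strong majority agreeing; both thresholds are assumptions.
function isConfirmed(upvotes: number, downvotes: number): boolean {
  const total = upvotes + downvotes;
  if (total < 10) return false; // require minimum participation
  return upvotes / total >= 0.7; // and a strong majority agreeing
}
```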

The above example is a bit less obvious because of the nature of numbers: you have to understand something about statistics and charts to see the issue.

A different type of bias example is this title: “The return to ‘normal’ requires acceptance of emerging tech” (Source). The author of the article is clearly setting up a biased point of view, and it is thankfully categorized as “Opinion” on the website. Regardless, many people may not understand the difference, or they may agree with the opinion without having had access to other opinions. The bias highlighter would highlight the words “requires acceptance” and could start a meta-discussion around this choice of words, crowdsourcing how accurate it in fact is.

Is this crowdsourced journalism? Perhaps. It is also just one idea; there could be many others. Please contact me if you find this idea helpful or would like to collaborate on implementing this tool.
