In May last year, a stunning report claimed that a computer program used by US courts for risk assessment was biased against black prisoners. The program, Correctional Offender Management Profiling for Alternative Sanctions (Compas), was much more likely to mistakenly label black defendants as future reoffenders – wrongly flagging them at almost twice the rate of white defendants (45% versus 24%), according to the investigative journalism organisation ProPublica.
The only way for a robot or artificial intelligence to be racist or biased is for us to make it so. There is nothing mystical or magical about computers. They are essentially extremely powerful calculators: all they do is store numbers and perform calculations on those numbers. An AI is no different. The difference lies in the meaning we place on the program’s calculations.
There is a long-standing phrase in the world of computer programming: garbage in, garbage out. It means the output of a program is only as good as its input. You can have the most sophisticated software and the fastest computer, but if the data you feed it is flawed, you are not going to get useful results.
In the case of an artificial intelligence, if you feed it biased data, it will give you biased analyses. The real problem is not the programs or the computers; it is the studies and statistics that we feed them. This is something we need to keep in mind as artificial intelligence advances: we need to make sure the data itself is not flawed or biased.
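The point can be made concrete with a toy sketch. The data below is entirely hypothetical (it is not ProPublica's or Compas's data), and the "model" is just a base-rate calculator, but it shows the mechanism: if the historical labels over-flag one group, any system trained on those labels will faithfully reproduce that bias.

```python
# Hypothetical sketch of "garbage in, garbage out":
# a model trained on biased labels inherits the bias.
from collections import defaultdict

# Made-up training records: (group, labelled_as_reoffender).
# Suppose group "A" was historically over-flagged relative to group "B".
training_data = (
    [("A", True)] * 60 + [("A", False)] * 40 +
    [("B", True)] * 30 + [("B", False)] * 70
)

# "Training": the model simply learns the flag rate per group.
counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
for group, flagged in training_data:
    counts[group][0] += int(flagged)
    counts[group][1] += 1

def predicted_risk(group):
    flagged, total = counts[group]
    return flagged / total

print(predicted_risk("A"))  # 0.6 -- mirrors the biased labels exactly
print(predicted_risk("B"))  # 0.3
```

The computer has done nothing wrong in any mathematical sense: it has learned exactly what the data taught it. The bias was present before a single line of code ran.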