Last Week in AI #54

Eduardo Cerna, June 29, 2020

AI algorithms continue to exhibit racial bias, computer vision helps build war crimes cases for court, and more...

Santa Cruz Becomes First U.S. City to Ban Predictive Policing

The city of Santa Cruz, California has become the first U.S. city to completely ban predictive policing, a move that comes as the country grapples with protests over police brutality and racism. The decision is significant because predictive policing has been in use in the United States for over a decade: such systems use algorithms to analyze police records and predict where crime is likely to occur, and in practice they often unjustly target minorities and people of color. Activists hope the ban will prompt similar moves in other cities.
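
Why do these systems keep pointing at the same neighborhoods? A toy simulation (purely illustrative, not any vendor's actual algorithm) shows the feedback loop critics point to: patrols follow historical arrest counts, and arrests are only recorded where police are looking, so an initial skew in the records compounds itself even when underlying crime rates are identical.

```python
# Illustrative feedback-loop sketch (hypothetical data, not a real system):
# patrols follow past arrest counts, and more patrols produce more recorded
# arrests, so the original skew in the records keeps growing.
import random

random.seed(0)
arrests = {"A": 30, "B": 10}            # historical records, already skewed
true_crime_rate = {"A": 0.1, "B": 0.1}  # identical underlying rates

for week in range(52):
    # "Prediction": patrol the neighborhood with the most recorded arrests.
    patrolled = max(arrests, key=arrests.get)
    # Arrests can only be recorded where police are actually patrolling.
    if random.random() < true_crime_rate[patrolled]:
        arrests[patrolled] += 1

print(arrests)  # neighborhood "A" keeps pulling further ahead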

Read more at: Reuters


AI Tool Further Confirms that Racism in AI is a Real Issue

A picture circulating around the internet shows the capabilities of upscaling technology: a computer program that takes a pixelated, low-resolution picture as input and uses machine learning to generate the visual data needed to fill in the blanks. The controversy arose when a picture of former U.S. President Barack Obama was run through the system: instead of reconstructing the facial features of a man of color, it generated the face of a white man. This is likely caused by inherent bias in the datasets used to train the model, which typically contain disproportionately many white faces and few people of color.
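
One common approach to this kind of upscaling does not sharpen the input pixels directly; it searches a face generator's latent space for an image that, once downscaled, matches the pixelated input. A minimal sketch of that idea in PyTorch, assuming a pretrained face generator `G` (all names here are hypothetical, not the actual tool's API):

```python
# Minimal latent-space super-resolution sketch. `G` stands in for a
# pretrained generator mapping latent vectors to face images; this is an
# illustrative assumption, not the specific program in the news story.
import torch
import torch.nn.functional as F

def upscale_by_latent_search(lowres, G, latent_dim=512, steps=500, lr=0.05):
    """Find a latent vector whose generated face, when downscaled,
    matches the low-resolution input."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        highres = G(z)                       # candidate high-res face
        downscaled = F.interpolate(highres, size=lowres.shape[-2:],
                                   mode="bilinear", align_corners=False)
        loss = F.mse_loss(downscaled, lowres)  # match the pixelated input
        loss.backward()
        opt.step()
    return G(z).detach()
```

The catch is that the search can only return faces the generator has learned to produce, so a training set dominated by white faces tends to yield white outputs regardless of who is actually in the photo.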

Read more at: The Verge



AI to Help Prove War Crimes in Court

Threatened by a rising Shia power in Yemen, Saudi Arabia and other largely Sunni Arab states launched an offensive in Yemen's civil war. The intervention was meant to last only a couple of weeks; five years later, it still hasn't stopped. Saudi Arabia and its allies have carried out over 20,000 air strikes in Yemen, killing innocent civilians and destroying their property, a direct violation of international law. Human rights organizations have led efforts to hold the perpetrators accountable, but on-the-ground verification by journalists has been too dangerous to carry out.

For this reason, human rights organizations have turned to crowd-sourced images and videos, which creates another problem altogether: analyzing and watching the footage takes enormous amounts of time and often traumatizes the people scouring it. Researchers at Swansea University in the UK are turning to machine learning to sift and classify footage for indications that war crimes have been committed. Although flagged footage still needs to be verified by a human, computer vision techniques make the process considerably smoother and more efficient.
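
In a workflow like this, the machine learning acts as a triage layer: a model scores frames of crowd-sourced footage for signs of, say, a particular munition, and only the highest-confidence clips are queued for human verification. A minimal sketch of that workflow, where `score_frame` stands in for a trained classifier (all names and thresholds here are hypothetical):

```python
# Illustrative footage-triage sketch. `score_frame` is a placeholder for a
# trained classifier; the goal is to rank clips so human reviewers watch
# the most likely evidence first instead of scouring everything.
from dataclasses import dataclass

@dataclass
class Flagged:
    video_id: str
    frame_index: int
    confidence: float

def triage(videos, score_frame, threshold=0.8):
    """Score every frame and return flagged frames, highest confidence first."""
    flagged = []
    for video_id, frames in videos:
        for i, frame in enumerate(frames):
            conf = score_frame(frame)        # model's confidence in a match
            if conf >= threshold:
                flagged.append(Flagged(video_id, i, conf))
    # Highest-confidence clips go to human verification first.
    return sorted(flagged, key=lambda f: f.confidence, reverse=True)
```

The design keeps a human in the loop, as the article stresses: the model never decides what counts as a war crime, it only reduces how much raw footage a person has to watch.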

Read more at: MIT Tech Review
