THIS MONTH’S BEST READS
Facial Recognition Is Law Enforcement’s Newest Weapon Against Protesters, 2 June 2020, OneZero
As protests engulf the country following the murders of George Floyd and Breonna Taylor at the hands of police, law enforcement agencies with extensive facial recognition capabilities are now asking the public for footage of activists. Police in Seattle, Austin, and Dallas, as well as the FBI, have all asked for video or images that could be used to identify violence and destruction during protests over the weekend. But because no federal or state laws require transparency for government use of facial recognition technology, there is no way to know how the technology is being used or which law enforcement departments have access to it.
Using Neural Networks to Predict Droughts, Floods and Conflict Displacements in Somalia, 6 May 2020, Omdena
Millions of people are forced to leave their homes and communities due to resource shortages and natural disasters such as droughts and floods. Our project partner, UNHCR, provides assistance and protection to those who are forcibly displaced inside Somalia. The goal of this challenge was to create a solution that quantifies the influence of climate change anomalies on forced displacement and/or violent conflict in Somalia, using satellite imagery analysis and neural networks.
How the rise of ‘digital colonialism’ in the age of AI threatens Africa’s prosperity, 8 May 2020
The 21st century colonisation of Africa does not involve armies, argues cognitive scientist Abeba Birhane, but the mass harvesting of valuable data. What value do you put on all the data gathered about you on any given week, or any given day? Technologies such as AI, facial recognition software and now contact-tracing apps are increasingly being deployed all over the world.
From Cairo to Cambridge: One scientist’s quest to humanise AI, 22 May 2020
Scientist, entrepreneur and author Rana el Kaliouby discusses her journey from Egypt to working on emotion AI at the MIT Media Lab. Rana el Kaliouby is co-founder and CEO of Affectiva, a Boston-based emotion-recognition tech firm that grew out of MIT’s Media Lab. Having recently launched her memoir, Girl Decoded, we got the chance to chat with her about how she reached this point in her career and why she believes in a future “where AI and technology can make us more human, not less”.
Google will not develop artificial intelligence for oil, 25 May 2020
Google will no longer develop custom AI tools to accelerate oil and gas extraction, the company announced, distancing itself from cloud computing rivals Microsoft and Amazon. The announcement came after a report published Tuesday documented how the three tech companies harness artificial intelligence and computing power to help oil companies find and access oil and gas fields in the United States and around the world.
Facebook’s AI is still largely baffled by covid misinformation, 12 May 2020, MIT Technology Review
The tech giant says 88.8% of all the hate speech it removed this quarter was detected by AI, up from 80.2% in the previous quarter. The AI can remove content automatically if the system has high confidence that it is hate speech, but most is still checked by a human being first.
Artificial Intelligence is not the cure for the COVID-19 infodemic, 9 May 2020, The Hill
More than 3 billion people, around 50 percent of the world’s population, engage with and post content online. Some of that content is misleading and potentially harmful, whether by design or as a side effect of its spread and manipulation. With billions of daily active users on social media platforms, even if a mere 0.1 percent of total content contains mis- or disinformation, there is a vast volume of content to review. In response to this challenge, automated content review technologies have emerged as an enticing and scalable way to help triage mis/disinformation online. Yet, while many technology companies and social media platforms have promoted artificial intelligence (AI) as an omnipotent tactic against mis/disinformation, AI is not a panacea for information challenges.
Armed drones contentious in German disarmament debate, 13 May 2020, Euractiv
Germany has reopened a controversial debate over whether its armed forces should be trusted to operate armed drones. While an agreement seems far off, the debate could soon get a European twist. A first step was taken on Monday (11 May), with the defence ministry inviting experts, representatives of civil society and members of the Bundestag’s parliamentary groups to a public hearing on what it said was meant to be an “open debate on potential armament”.
Using Drones to Fight COVID-19 is the Slipperiest of All Slopes, 5 May 2020, EFF
Any current buy-up of drones would be a classic example of how law enforcement and other government agencies use crises to justify expenditures and blunt the public backlash that comes with buying surveillance equipment. For years, the LAPD, the NYPD, and other police departments across the country have been fighting backlash from concerned residents over their acquisitions of surveillance drones. These drones present a particular threat to free speech and political participation.
Ethics of Acquiring Disruptive Technologies: Artificial Intelligence, Autonomous Weapons, And Decision Support Systems – Analysis, 15 May 2020, Eurasia Review
Last spring, Google announced that it would not partner with the Department of Defense (DOD) on “Project Maven,” which sought to harness the power of artificial intelligence (AI) to improve intelligence collection and targeting, because its employees did not want to be “evil.” Later that fall, the European Union called for a complete ban on autonomous weapons systems. In fact, a number of AI-related organizations and researchers have signed a “Lethal Autonomous Weapons Pledge” that expressly prohibits development of machines that can decide to take a human life.
ICYMI in 2019
Building ethical AI approaches in the African context, 18 August 2019, Global Pulse
Countries worldwide are at different stages of designing and implementing AI strategies and policies to seize the opportunities of this technology. In Africa, Kenya, Tunisia, South Africa, Ghana, and Uganda are already working to develop data protection and ethics strategies. The critical question now is: which ethical approaches are relevant in the context of the African continent?