AI FOR PEACE NEWSLETTER
Your monthly dose of news and the latest developments in AI for Peace
MARCH 2020
Spotlight on AI regulations, democracy, security, facial recognition, ethics and safety.
For more news and resources on AI and the coronavirus, see our Special Edition Newsletter on COVID-19.
SPOTLIGHT STORY

This month we are sharing a special spotlight on Omdena. With a motto of “Building AI Collaboratively”, Omdena is an innovation platform for building AI solutions to real-world problems through the power of collaboration. Omdena runs two-month AI Challenges in which a selected team of up to 50 engineers works with an organization to refine its problem statement, collect the data, and build the AI solutions. Thanks to the sheer power of 50 engineers collaborating, Omdena is able to deliver a functional AI solution in record time. In some of its projects, organizations entered with no data and left with a functional prototype.

Omdena’s community spans 75 countries, with above-average representation of women (more than 30 percent). Because a lack of diversity results in biased solutions, each Omdena challenge includes collaborators from at least 15 countries and places a strong value on gender diversity. Omdena is a partner of the United Nations’ AI for Good Global Summit 2020 and has a track record of successfully completed AI projects, including using machine learning to identify the safest routes in Istanbul for earthquake victims to reunite with their loved ones. It has also delivered AI solutions that helped detect the outbreak of fires in the Brazilian rainforest with 95 percent accuracy.

We are happy to announce our partnership with Omdena in conducting an AI Coronavirus Challenge with 51 AI/ML experts and enthusiasts from 21 countries (across 6 continents). The goal is to build, through collaboration, data-driven AI models that help make policy decisions taking into account the most economically vulnerable. We will report on the work in progress in our next newsletter.
THIS MONTH’S BEST READS

AI is an Ideology, Not a Technology, March 15, 2020
At its core, "artificial intelligence" is a perilous belief that fails to recognize the agency of humans. The usual narrative goes like this: without the constraints on data collection that liberal democracies impose, and with the capacity to centrally direct greater resource allocation, the Chinese will outstrip the West. AI is hungry for more and more data, but the West insists on privacy. This is a luxury we cannot afford, it is said, as whichever world power first achieves superhuman intelligence via AI is likely to become dominant.

The Evolution of Artificial Intelligence and Future of National Security, March 13, 2020
Where regulation may be possible, and ethically compelling, is in limiting the geographic and temporal space where weapons driven by AI or other complex algorithms can use lethal force. It might be tempting to use facial recognition technology on future robots to have them hunt the next bin Laden, Baghdadi, or Soleimani in a huge Middle Eastern city. But the potential for mistakes, for hacking, and for many other malfunctions may be too great to allow this kind of thing. It probably also makes sense to ban the use of AI to attack the nuclear command and control infrastructure of a major nuclear power. Such attempts could give rise to “use them or lose them” fears in a future crisis and thereby increase the risks of nuclear war.

The EU Action Plan for Human Rights and Democracy, March 25, 2020
The Action Plan identifies priorities in view of changing geopolitics, the digital transition, and environmental challenges. It also offers an opportunity to refresh the EU's approach to human rights and democracy to address current challenges. The Action Plan will highlight the political challenges that developing technologies, such as artificial intelligence, pose to human rights and democracy.

AI Ethics Principles Are Laid Out by DoD For All to See, Applies To Self-Driving Cars Too, March 12, 2020
One arena in which AI ethics can get especially thorny involves AI systems for military uses. Most would agree that we need to observe some form of AI ethics in the building and use of military AI systems, and whatever we learn there can certainly be reapplied to commercial and industrial AI systems too. If you look at the principles as standalone, without the context of military or defense, they are equally applicable to any commercial or industry-based AI systems.

How Google.org accelerates social good with artificial intelligence, March 11, 2020
After realizing the potential to effect change while studying systems engineering at the University of Virginia, Brigitte Hoyer Gosselink began her journey to discover how technology might have a scalable impact on the world. Gosselink worked in international development and later did strategy consulting for nonprofits before joining Google.org, where she is focused on increasing social impact and environmental sustainability work at innovative nonprofits. We talked to her about her efforts as head of product impact to bring emerging technology to organizations that serve humanity and the environment.

Despite ‘consensus’ with DoD, ODNI moving ahead with its own AI principles, March 5, 2020
Building on the Defense Department’s recent adoption of five artificial intelligence principles, the Office of the Director of National Intelligence will soon release its own public set of AI principles. Aside from DoD, other corners of the federal government have begun to roll out their own AI ethics platforms. The National Security Commission on Artificial Intelligence, led by former Google CEO Eric Schmidt and former Deputy Defense Secretary Robert Work, released an interim report last November that warned of a “brain drain” if federal research and development funding declines. The commission expects to send its final report to Congress in March 2021.

The U.S. and EU should base AI regulations on shared democratic values, March 2, 2020
Overlapping principles between the EC and U.S. announcements offer a basis for such cooperation. The White House principles include public trust in AI, the costs and risks of AI, and the impact of AI on fairness, discrimination, and security of information, as well as on privacy, individual rights, autonomy, and civil liberties. These resemble seven key requirements identified by an EU High-Level Group of Experts on AI that are incorporated into the white paper: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability. While the U.S. and the EU both agree on the need for AI regulation, the key challenge will be doing it in a way that is effective and prevents unnecessary barriers to transatlantic trade and investment.

Police Used Facial Recognition to Arrest Over 1,100 People in India Last Month, March 11, 2020
Shortly after one of the worst riots New Delhi has seen in decades, law enforcement agencies in India used facial recognition technology to identify more than 1,100 people who allegedly took part in those riots at the end of February. But there are a slew of legal questions surrounding the use of facial recognition to track down suspects in India. There currently aren’t any laws that clearly lay out ethical uses for such technology. According to the Internet Freedom Foundation, a digital rights advocacy group in New Delhi, the system had only a 1 percent accuracy rate when attempting to identify missing children, and “failed to distinguish between boys and girls.”
THIS MONTH’S PODCAST EPISODE CHOICE

International Efforts around Use of AI: Interview with Karine Perset, OECD
By Cognilytica, March 25, 2020, @cognilytica
A little over a year ago the OECD released its AI principles, and most recently, in February 2020, the OECD launched the AI Policy Observatory. In this podcast we interview Karine Perset from the OECD. She shares why creating international principles around AI is important, why the OECD launched the ONE AI Network of Experts, and where she sees the future of AI headed.

Maria Axente from PwC
By Humanitarian AI Today, March 16, 2020, @HumanitarianAI
Humanitarian AI Today's host Mia Kossiavelou speaks with Maria Axente, Responsible AI and AI for Good Lead at PwC and Director of Outreach at AI Commons, about her work and thoughts on responsible AI and the role of AI in achieving the global Sustainable Development Goals.

Technology for Good #4: Artificial Intelligence
By ITU Podcasts, March 23, 2020, @ITU and @ITU_AIForGood
ITU is the leading United Nations agency for information and communication technologies (ICTs), driving innovation in ICTs together with 193 Member States and a membership of some 900 private-sector entities and academic institutions. AI can help address some of today's challenges, in healthcare, climate and more. Listen to ITU's Tech for Good podcast and discover AI for Good with key experts.
THIS MONTH’S PUBLICATIONS

Understanding Artificial Intelligence Ethics and Safety: A guide for the responsible design and implementation of AI systems in the public sector, Alan Turing Institute, March 2019, @turinginst
The prospect that progress in AI will help government confront some of its most urgent challenges is exciting, but legitimate worries abound. As with any new and rapidly evolving technology, a steep learning curve means that mistakes and miscalculations will be made and that both unanticipated and harmful impacts will occur. To manage these impacts responsibly and to direct the development of AI systems toward optimal public benefit, The Alan Turing Institute's public policy programme partnered with the Office for Artificial Intelligence and the Government Digital Service to produce guidance on the responsible design and implementation of AI systems in the public sector.

AI Governance: A Holistic Approach to Implement Ethics in AI, World Economic Forum, May 3, 2019, @wef
This white paper from the World Economic Forum is a great getting-started guide for people looking to implement governance and regulatory mechanisms for AI systems. While many of its recommendations are high-level, it sets out the landscape very clearly and posits mini-frameworks for reasoning about the various tensions one will encounter when trying to implement governance for AI systems.
DIRECTLY FROM AI FOR PEACE

Launching the AI Coronavirus Challenge
We are happy to announce the AI for PEACE partnership with Omdena in conducting an #AI challenge with 51 AI/ML experts and enthusiasts from 21 countries (across 6 continents). In addition, experts who work, or have worked, at the World Health Organization, the World Bank, the European Commission, and UNICEF USA will join. The goal is to build, through collaboration, data-driven AI models that help make policy decisions taking into account the most economically vulnerable.

AI for PEACE COVID-19 SPECIAL EDITION NEWSLETTER IS OUT!
Spotlight on COVID-19 and the role of the AI community in fighting the pandemic
"With this newsletter we shed light on efforts in the AI domain and the possibilities of AI-related technologies to assist in containing the outbreak. We see this as a potentially critical moment in human history, when new research plays an important part in shaping the global response to an acute disease threat. All responses, including the AI ones, will have a decisive influence on what is now both a political and a health crisis. How we respond to this crisis will influence our path towards a peaceful, sustainable and healthier future."
EVENTS TO FOLLOW IN APRIL

All Tech is Human: Covid-19, AI, Surveillance and Ethics, by All Tech is Human, April 2, 2020 at 12pm ET, @AllTechIsHuman
How do we balance public safety and the use of surveillance tools with our civil liberties? Join us online for a conversation with two of the leading voices on AI ethics, Renee Cummings (AI criminologist, founder of Urban AI, and professor at Columbia University) and Reid Blackman (AI ethicist, founder of Virtue).

COVID-19 and AI: A Virtual Conference, by HAI, April 1, 2020, @StanfordHAI
COVID-19 and AI: A Virtual Conference will address a developing public health crisis. Sponsored by the Stanford Institute for Human-Centered Artificial Intelligence (HAI), the event will convene experts from Stanford and beyond to advance the understanding of the virus and its impact on society. It will be livestreamed to engage the broad research community, government and international organizations, and civil society.

COVID-19: Advancing Rights and Justice During a Pandemic, Virtual Event Series, four sessions on April 2, 7, 8 and 21, join via Zoom, @CLShumanrights
The series is organized by the Columbia Law School Human Rights Institute, Duke Law's International Human Rights Clinic, Columbia Law School's Center for Gender and Sexuality Law, and Just Security.

WEBINAR: The Potential of AI for Pandemics, AI for Good Webinar Series, by ReWork AI, April 7, 2020, 10am PDT, @reworkAI
On our website, AI for Peace, you can find even more awesome content: podcasts, articles, white papers and book suggestions that can help you navigate the fields of AI and peace. Check out our online library!