Monday, December 30, 2019

Human Rights in the Age of Social Media, Big Data, and AI

By Molly Galvin | Sept. 23, 2019
In just a few years, digital technologies have allowed faster mobilization in response to humanitarian crises, better documentation of war crimes in conflict zones like Syria and Yemen, and more accessible platforms for organizing peaceful demonstrations around the world.   
However, while social media and big data can be powerful tools for anticipating, analyzing, and responding to human rights concerns, these technologies also pose unprecedented challenges. Social media has been weaponized to spread disinformation, interfere in elections, and promote and incite violence. And websites and apps continuously collect broad swaths of data on their users — often without users’ knowledge of the collection, or of how or where their personal information is used or stored.
A recent symposium at the National Academies of Sciences, Engineering, and Medicine, organized by the Academies’ Committee on Human Rights, brought together leading experts on human rights and technology for an in-depth exploration of these issues.
“Human rights — its vocabulary, its framework, its vision — provides a basis for restraining the worst intrusions and violations of the digital world, and promoting its best,” said keynote speaker David Kaye, United Nations Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression and clinical professor of law at the University of California, Irvine.  “Not in some kind of vague … sense of what human rights might be, but in the specifics of human rights law.”
Although Americans tend to think of rights as guaranteed by the U.S. Constitution, international treaties bind countries around the world to uphold rights such as privacy, freedom of opinion and expression, and nondiscrimination, said Kaye.  “How do we get from holding states accountable to holding [digital technology companies] accountable? There is a huge amount of space to work with in this foundation of human rights thinking to make it relevant to the companies, to make it relevant to governments [who regulate companies].” 
Digital Power for Good
Throughout the symposium’s panel discussions, researchers and practitioners shared many positive examples of the power of digital technologies to advance humanitarian goals.  For instance, after the horrific 2012 gang rape and fatal assault of a young woman in Delhi, India, panelist ElsaMarie D’Silva left a 20-year career in the Indian aviation industry to create Safecity, an online platform that relies on anonymous crowdsourcing to document sexual harassment and abuse in public spaces.  “It’s the stories that connect more of us and give us courage to break the silence,” she said.  “When you can see a database — we find it’s really powerful when you invite police and elected officials in the room and demand accountability.”
The proliferation of cell phones around the world has also empowered civil rights advocates to record and report instances of abuse and advocate for change.  Tanya Karanasios, deputy program director at WITNESS, an international nonprofit organization that trains and supports people using video in their fight for human rights, described how Afro-Brazilians were able to raise greater awareness of police abuse by recording incidents.  This brought attention from the international media and public defenders, who used the evidence to pursue prosecutions of some of the officers.
However, while data and images collected by citizens and civil society organizations can shine a spotlight on human rights abuses, it can be difficult to harness such disparate material to hold oppressors accountable.  Collecting, preserving, and analyzing this evidence poses many challenges, said Keith Hiatt, who leads the information systems management section at the United Nations International, Impartial, and Independent Mechanism for Syria (IIIM), an entity created to collect, preserve, and analyze evidence of the most serious violations — such as war crimes and crimes against humanity — from many different sources.  The IIIM prepares case files to help bridge the gap between data collection and accountability.
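Hiatt’s point about preservation has a concrete, widely used analogue in digital-evidence practice: recording a cryptographic hash of each file at intake so that any later alteration is detectable. The sketch below is a generic illustration of that idea only — the file name, source label, and record fields are hypothetical, and nothing here describes the IIIM’s actual systems or tooling.

```python
# Illustrative only: one common "preserve" step for digital evidence is to
# record a cryptographic hash at intake, so later tampering is detectable.
# Generic sketch -- not the IIIM's actual workflow.
import datetime
import hashlib
import json

def intake_record(path: str, source: str) -> dict:
    """Hash a submitted file and log intake metadata for chain of custody."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large video files don't have to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            sha256.update(chunk)
    return {
        "file": path,
        "source": source,  # who submitted the material (hypothetical label)
        "sha256": sha256.hexdigest(),
        "received_utc": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

# Hypothetical usage:
# record = intake_record("video_0012.mp4", "civil_society_org_17")
# print(json.dumps(record, indent=2))
```

Re-hashing the file later and comparing digests shows whether the evidence is bit-for-bit identical to what was originally received — one small piece of the larger problem of making disparate material usable in court.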
‘Example After Example of Real Harms’
Despite the good that digital technologies can bring to humanitarian and human rights work, they can also be used to undermine it. “The fact of the matter is that a lot of the activism, organization, and civil society movements we all care about … take place on platforms that were not designed for security,” said Ron Deibert, who directs the Citizen Lab at the University of Toronto. Meanwhile, he noted, governments and private surveillance companies hired by adversaries are employing digital tools and data collection to thwart human rights activists.
And although big data and AI are sometimes heralded as fixes for societal issues such as inequality and bias, data collection and algorithm design can unintentionally perpetuate these problems. “We are seeing example after example of real harms due to AI …, automated technologies, and other forms of algorithm-driven technologies,” said Mark Latonero, research lead for data and human rights at the nonprofit research institute Data & Society.  For instance, research has shown that facial recognition technologies are demonstrably biased against minorities — a finding that led San Francisco to ban their use by city agencies.
“One of the inherent problems is the societal presumption that data is objective — that it doesn’t have history, it doesn’t have politics, and it reflects reality properly,” said Rashida Richardson, director of policy research at New York University’s AI Now Institute.  “None of those things are true, based on current data collection and use practices.”
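Claims that a system is “demonstrably biased” typically rest on audits that compare a model’s error rates across demographic groups. The sketch below illustrates the idea on made-up data — the group labels, records, and numbers are hypothetical, not drawn from any study mentioned here.

```python
# Minimal sketch: measuring disparate error rates in a classifier's output.
# The records are hypothetical; real audits (e.g., of face recognition)
# compare per-group error rates on large labeled benchmarks.
from collections import defaultdict

def false_positive_rate_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    fp = defaultdict(int)   # false positives per group
    neg = defaultdict(int)  # actual negatives per group
    for group, predicted, actual in records:
        if not actual:              # only actual negatives can yield false positives
            neg[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g]}

# Hypothetical audit records: (group, model said "match", true "match")
records = [
    ("group_a", True,  False), ("group_a", False, False), ("group_a", False, False),
    ("group_a", False, False), ("group_b", True,  False), ("group_b", True,  False),
    ("group_b", False, False), ("group_b", False, False),
]

print(false_positive_rate_by_group(records))
# {'group_a': 0.25, 'group_b': 0.5} -- group_b is wrongly flagged twice as often
```

Real audits work the same way at much larger scale: if one group’s false match rate is systematically higher, people in that group are more likely to be wrongly flagged — the kind of harm behind San Francisco’s restriction.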
‘Hard Work Ahead’
Much work needs to be done to capitalize on the benefits of digital technologies for advancing human rights — while ensuring that these same technologies do not infringe upon them. “We as a country need to be re-engaging in human rights mechanisms,” said Kaye. “The further we get from those basic mechanisms of human rights and how they’re working in practice, the harder it is for us to influence them and to educate ourselves about what is happening around the world.  It’s education. It’s engagement. It’s law.”
Technology companies have become governors of online space, and in turn, are shaping freedom of expression around the world.  For instance, approximately 85 percent of Facebook’s 2.5 billion active users are outside of the U.S., with many in countries where access to information is much more limited.  Kaye referenced a recent announcement by Facebook that it would use international human rights standards to make judgments about expression on the platform.  
“In an ironic way, the companies might be — if we push them — a leader in thinking about the way that human rights can have an impact on our lives, and the way we think about privacy and expression and other rights,” he said.
“Everybody knows we are in the midst of a global battle for dominance in AI technology, but we are also in the midst of a geopolitical battle with respect to the norms and values that will guide regulation of AI,” noted Eileen Donahoe, executive director of the Global Digital Policy Incubator at Stanford University’s Center for Democracy, Development, and the Rule of Law. “What I worry about most is that we could be having a global, unconscious drift toward digital authoritarianism.”
Although the human rights framework isn’t perfect, it is a good model for governments, industry, and civil society to build upon in order to protect rights while reaping the benefits of digital technologies, said Donahoe. “We have a lot of hard work ahead to articulate in a compelling way how [digital governance] applies with respect to freedom of expression, freedom of assembly, right to privacy, equal protection, and non-discrimination. It’s going to be a cross-disciplinary, multi-stakeholder process.”

http://www8.nationalacademies.org/onpinews/newsitem.aspx?RecordID=9302019
