Unlocking the Black Box
The Promise and Limits of Algorithmic Accountability in the Professions
The Yale Law School Information Society Project is seeking abstracts of papers for a conference on big data and algorithmic accountability to be held on April 1-2, 2016. The abstract submission deadline is August 23, 2015, by 11:59PM.
The increasing power of big data and algorithmic decision-making—in commercial, government, and even non-profit contexts—has raised concerns among academics, activists, journalists, and legal experts. Three characteristics of algorithmic ordering have made the problem particularly difficult to address: the data used may be inaccurate or inappropriate; algorithmic modeling may be biased or limited; and the uses of algorithms remain opaque in many critical sectors.
No single academic field can address all the new problems created by algorithmic decision-making. Collaboration among experts in different fields is starting to yield important responses. For example, digital ethicists have offered new frameworks for assessing algorithmic manipulation of content and persons, grounding their interventions in empirical social science—and, in turn, influencing regulation of firms and governments deploying algorithms. Empiricists may be frustrated by the “black box” nature of algorithmic decision-making; they can work with legal scholars and activists to open up certain aspects of it (via FOIA and fair data practice laws). Journalists, too, have been teaming up with computer programmers and social scientists to expose new privacy-violating technologies of data collection, analysis, and use—and to push regulators to crack down on the worst offenders.
Researchers are going beyond the analysis of extant data, joining coalitions of watchdogs, archivists, open data activists, and public interest attorneys to ensure a more balanced set of “raw materials” for analysis, synthesis, and critique. As an ongoing, intergenerational project, social science must commit to ensuring the representativeness and relevance of what is documented—lest the most powerful “pull the strings” in comfortable obscurity, while scholars’ agendas are dictated by the information that, by happenstance or design, is readily available. What would similar directions for legal scholars and journalists look like? This conference will aim to answer that question, setting forth algorithmic accountability as a paradigm of what Kenneth Gergen has called “future-forming” research.
Algorithmic accountability calls for the development of an interdisciplinary legal-academic community spanning theorists and empiricists, practitioners and scholars, journalists and activists. This conference will explore early achievements among those working for algorithmic accountability, and will help chart the future development of an academic community devoted to accountability as a principle of research, investigation, and action.
The conference seeks abstracts on topics including:
- The law and ethics of artificial intelligence
- Algorithmic accountability in medicine, finance, journalism, law, and education
- Algorithms and transparency
- How can law enable “innovative” journalism and research?
- The effect of socio-technological environment on professional practices and norms
- What are the black boxes lawyers and policymakers most want exposed?
Abstracts of 500-700 words may be submitted by August 23 to Heather Branch at firstname.lastname@example.org.
Frank Pasquale (Professor of Law, University of Maryland Francis King Carey School of Law), Caitlin Petre (Resident Fellow, Yale Information Society Project), and Valerie Belair-Gagnon (Executive Director and Research Scholar, Yale Information Society Project)