Algorithms Subprojects

Faculty fellows and research assistants of the ISS's Algorithms project have begun research examining the design, understanding, and use of algorithmic systems and big data as they relate to inequality.

Comparing Established Compliance Procedures and Recent Research Proposals for Ensuring Non-Discrimination in Statistical Models

Solon Barocas, Information Science

This project aims to learn what companies using statistical models for employment and credit decisions already do to address concerns about bias and discrimination, how these established procedures compare with more recent proposals from law and computer science focused on fairness, accountability, and transparency in machine learning, and where practice and research could better inform each other.


Computational Due Process: The Agency of Data Subjects between Compliance and Resistance

Malte Ziewitz, Science & Technology Studies

Understanding algorithmic systems has become a key concern for policy-makers, engineers, and academics. But how do ordinary people make sense of something that is said to be inscrutable? What kind of recourse do they have if they feel misrepresented or mistreated? This project maps, examines, and evaluates the different kinds of recourse that are available to so-called ‘data subjects’ in credit scoring and web search.


The Design and Development of Hiring and Productivity Tools

Ifeoma Ajunwa, ILR

Although hiring algorithms and productivity tools have permeated many sectors of the workforce, little is known about the design and development behind such algorithmic tools. I have gained access to two research sites: 1) a developer of hiring algorithms, and 2) a developer of productivity and work surveillance applications. The proposed project involves both ethnographic research and in-depth structured interviews with workers at those two sites. This project will thus inform a deeper understanding of the ethical issues associated with the development of algorithmic hiring and work productivity tools.


Organizing Transparency: Tracing the Regulation of Algorithmic Accountability in NYC

Malte Ziewitz, Science & Technology Studies; Maximilian Heimstädt, Organization Studies, Witten/Herdecke University

On December 18, 2017, the New York City Council unanimously passed a bill that established a task force to examine the city’s ‘automated decision systems’ – systems that significantly impact New Yorkers’ lives by matching students with schools, assessing teacher performance, or detecting Medicaid fraud. In this project, we accompany the legislative process and trace the considerations involved in passing regulation for algorithmic accountability through a mix of interviews and documentary analysis. How do different actors and stakeholders think about the promises and challenges of such regulation? What options are considered, justified, and undermined? What can this process teach us about attempts to ensure accountability in computational systems? We document and analyze this process with the help of recent work in organization studies and science & technology studies.


Strategies of Algorithmic Management among Cultural Workers

Brooke Erin Duffy, Communication; Ifeoma Ajunwa, ILR

While algorithmic systems are radically reshaping the production and distribution of media and cultural content across industrial contexts, their impact on independent cultural workers is less well understood. This project will draw upon in-depth interviews with self-employed cultural workers to better understand how their "algorithmic imaginaries" (Bucher, 2017), particularly those involving social media, shape their work processes and products.