Syllabus
Session 1: What went wrong with this case? - Introduction
Mandatory readings
Session 2: Avoiding tech pitfalls - errors, choices, biases, (justice?)
Mandatory readings
- Corbett-Davies, S., Pierson, E., Feller, A., Goel, S. (2016, October 17). A computer program used for bail and sentencing decisions was labeled biased against blacks. It’s actually not that clear. The Washington Post.
- Dressel, J. and Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. Science Advances.
- Guo, E., Geiger, G., Braun, J.-C. (2025). Inside Amsterdam’s high-stakes experiment to create fair welfare AI. MIT Technology Review.
- Leufer, D. (2020). Myth: AI can be objective or unbiased. AI Myths.
- O’Neil, C. (2016). Introduction and Chapter 1: “Bomb parts: What is a model?”. In Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown Publishing Group.
- Ziosi, M. and Pruss, D. (2024). Evidence of What, for Whom? The Socially Contested Role of Algorithmic Bias in a Predictive Policing Tool. FAccT ‘24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency.
Optional readings
- Bennett, S. H. (2020, 20 August). On A-Levels, Ofqual and Algorithms. Sophie Bennett’s blog.
- D’Ignazio, C. and Klein, L. (2020). 2. Collect, Analyze, Imagine, Teach. In Data Feminism. MIT Press.
- Narayanan, A. (2018). Tutorial: 21 Definitions of Fairness and their politics. Proceedings of the 2018 ACM Conference on Fairness, Accountability, and Transparency.
- Suresh, H. and Guttag, J. (2020). A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle. Proceedings of the 2020 ACM Conference on Fairness, Accountability, and Transparency.
- Wachter, S., Mittelstadt, B., Russell, C. (2021). Bias Preservation in Machine Learning: The Legality of Fairness Metrics Under EU Non-Discrimination Law. West Virginia Law Review, Vol. 123, No 3.
- Wang, A., Kapoor, S., Barocas, S., Narayanan, A. (2023). Against Predictive Optimization: On the Legitimacy of Decision-Making Algorithms that Optimize Predictive Accuracy.
- Documentation on the A-Level algorithm: Ofqual (2020). Executive summary, Student-level equalities analyses for GCSE and A level, Summer 2020. pp. 5-8; Ofqual. (2020, 15 April). Equality Impact Assessment.
Concrete resources
Standards and Laws
Session 3: Assessing the impacts
Mandatory readings
- Ada Lovelace Institute. (2020). Examining the Black Box: Tools for assessing algorithmic systems.
- Ada Lovelace Institute, AI Now Institute and Open Government Partnership. (2021). Executive Summary. Algorithmic Accountability for the Public Sector.
- Constantaras, E., Geiger, G., Braun, J.-C., Mehrotra, D., Aung, H. (2023). Inside the Suspicion Machine. Wired.
- Costanza-Chock, S., Raji, I. D., Buolamwini, J. (2022). Who Audits the Auditors? Recommendations from a field scan of the algorithmic auditing ecosystem. FAccT ‘22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency.
- Groves, L., Metcalf, J., Kennedy, A., Vecchione, B., Strait, A. (2024). Auditing work: Exploring the New York City algorithmic bias audit regime. FAccT ‘24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency.
- Riley, S. (2024). Overriding (in)justice: Pretrial risk assessment administration on the frontlines. FAccT ‘24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency.
Optional readings/listening
- Ada Lovelace Institute. (2025). Going Pro? Considerations for the emerging field of AI assurance. - For an overview of the current state of the field of “AI assurance”.
- Braun, J.-C., Constantaras, E., Aung, H., Geiger, G., Mehrotra, D., Howden, D. (2023). Suspicion Machines Methodology: A detailed explainer on what we did and how we did it. Lighthouse Reports. - For a methodology deep-dive on how data journalists audited a risk-assessment algorithm for bias (here, in the Dutch context).
- Elish, M. C. (2020). Sepsis Watch in Practice: The labor of disruption and repair in healthcare. Data & Society: Points. - For an account of why assessing systems in context is important.
- Gender Shades. How well do IBM, Microsoft, and Face++ AI services guess the gender of a face? - For an example of one of the first third-party technical audits.
- Marda, V. and Narayan, S. (2020). Data in New Delhi’s Predictive Policing System. Proceedings of the 2020 ACM Conference on Fairness, Accountability, and Transparency. - For an example of what is possible even without access to the algorithms themselves (here, in the Indian context).
- Reply All Podcast. (2018). Episodes 127 and 128: The Crime Machine. Gimlet Media. - For an example of how metrics can impact policies.
- Varon, J. and Peña, P. (2022). Not My A.I.: Towards Critical Feminist Frameworks To Resist Oppressive A.I. Systems. Carr Center for Human Rights Policy, Harvard Kennedy School, Harvard University. - For a non-institutional vision of impacts.
Examples of impact assessments
On environmental impacts
Examples of institutional audits
Examples of explainers/primers to familiarize public servants with issues and impacts
An example of impact assessment in context
An example in the law
Session 4: Building Accountability through “AI governance” (transparency, appeals, public procurement)
Mandatory readings
- Ananny, M. and Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society.
- Brandusescu, A., Sieber, R. E. (2025). Design versus reality: assessing the results and compliance of algorithmic impact assessments. Digital Society.
- Green, B., Kak, A. (2021). The False Comfort of Human Oversight as an Antidote to A.I. Harm. Slate.
- Jansen, F., Cath, C. (2021). Just Do It: on the limits of governance through AI registers. In AI Snake Oil, Pseudoscience and Hype, edited by Frederike Kaltheuner. Meat Space Press.
- Kolkman, D. (2020). F**ck the algorithm? What the world can learn from the UK A-level grading algorithm fiasco. LSE Impact Blog.
- Rodelli, C., Chander, S. (2025). One Year On, EU AI Act Collides with New Political Reality. Tech Policy Press.
Optional readings
- Feathers, T. (2023). It takes a small miracle to learn basic facts about government algorithms. The Markup.
- Johnson, N., Silva, E., Leon, H., Eslami, M., Schwanke, B., Dotan, R., Heidari, H. (2025). Legacy Procurement Practices Shape How U.S. Cities Govern AI: Understanding Government Employees’ Practices, Challenges, and Needs. FAccT ‘25: Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency.
- Parmar, T. (2025). Government Documents Show Police Disabling AI Oversight Tools. Mother Jones.
- Pénicaud, S. (2025). Making Algorithm Registers Work for Meaningful Transparency. IA Ciudadana.
- Wright, L., Muenster, R. M., Vecchione, B., Qu, T., Cai, P. (S.), Smith, A., Comm 2450 Student Investigators, Metcalf, J., & Matias, J. N. (2024). Null compliance: NYC Local Law 144 and the challenges of algorithm accountability. FAccT ‘24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency.
- Yew, R., Marino, B., Venkatasubramanian, S. (2025). Red Teaming AI Policy: A Taxonomy of Avoision and the EU AI Act. FAccT ‘25: Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency.
Registers
Procurement practices
Redress
Good examples of appeal and redress processes are rare. This report by Doteveryone, although focused on online services, is a good source of inspiration, especially in its recommendations.
Session 5: Ensuring meaningful participation
Mandatory readings
- Attard-Frost, B. (2023). AI Countergovernance. Midnight Sun.
- Costanza-Chock, S. (2020). Design Practices: “Nothing about Us without Us”. In Design Justice.
- Hu, W. and Singh, R. (2024). Enrolling Citizens: A Primer on Archetypes of Democratic Engagement with AI. Data & Society.
- Robinson, D. G. (2022). “Chapter 2: Democracy on the Drawing Board” and Conclusion. Voices in the Code. Russell Sage Foundation.
- Sloane, M., Moss, E., Awomolo, O., Forlano, L. (2022). Participation is not a Design Fix for Machine Learning. EAAMO ‘22: Proceedings of the 2nd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization.
- Young, M. (2025). Gear Shift: Driving Change in Public Sector Technology through Community Input.
Optional readings
- Carollo, M., Tanen, B. (2023). How a Group of Health Executives Transformed the Liver Transplant System. The Markup.
- Cardullo, P. and Kitchin, R. (2019). Being a ‘citizen’ in the smart city: up and down the scaffold of smart citizen participation in Dublin, Ireland. GeoJournal.
- Office for Statistics Regulation. (2021). Ensuring statistical models command public confidence: Learning lessons from the approach to developing models for awarding grades in the UK in 2020, Executive summary.
- Ofqual. (2020). Analysis of Consultation Responses: Exceptional arrangements for exam grading and assessment in 2020.
- Wylie, B. (2018, 13 August). Searching for the Smart City’s Democratic Future. Centre for International Governance Innovation.
Literacy
Frameworks
Concrete Examples
Session 6: Taking down a system and managing the aftermath - Conclusion
Mandatory readings
- Ehsan, U., Singh, R., Metcalf, J., & Riedl, M. (2022). The algorithmic imprint. FAccT ‘22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency.
- Foxglove. (2020, 17 August). We put a stop to the A Level grading algorithm!
- Leufer, D. (2020). Myth: AI has agency: headline rephraser tool. AI Myths.
- Poole, S. (2020, September 3). Steven Poole’s word of the day: ‘Mutant algorithm’: boring B-movie or another excuse from Boris Johnson?. The Guardian.
- Redden, J. (2022). Government’s use of automated decision-making systems reflects systemic issues of injustice and inequality. The Conversation.
Optional readings
- Green, B. (2019). Chapter 2: The Livable City: The Limits and Dangers of New Technology. In The Smart Enough City: Putting Technology in Its Place to Reclaim Our Urban Future. MIT Press.
- Johnson, N., Moharana, S., Harrington, C., Andalibi, N., Heidari, H., Eslami, M. (2025). The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment. FAccT ‘25: Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency.
- Lulamae, J. (2022). People are still angry about the UK’s 2020 grading algorithm experiment. AlgorithmWatch’s Automated Society.
- Ofqual. (2021). Decisions on how GCSE, AS and A-level grades will be determined in summer 2021.
Examples of resistance
France
The Netherlands
The US
The UK