Report from FAccT 2022


The fifth iteration of FAccT (the ACM Conference on Fairness, Accountability, and Transparency) was held earlier this month (June 21–24) in Seoul, South Korea. More than just a hybrid conference, this was actually a full in-person conference combined with a full online conference. These happened in parallel, with virtual sessions starting before and continuing after the in-person component each day.1 Around 500 people attended in person, with another 500 participating remotely. I was there as part of the team behind a paper on the values encoded in machine learning research, which ended up being awarded one of four distinguished paper awards!

The conference has grown immensely since I first attended in 2019, but it continues to feature impactful interdisciplinary research on topics like fairness, bias, explainability, privacy, governance, and more. Given the growth, it was inevitable that there would be parallel tracks this year, meaning that I only saw a small portion of the work presented in person, not to mention all of the online sessions I missed. As such, this is a highly partial account of the parts of the conference that I experienced.

The core of most conferences is paper presentations, and here those were grouped into sets of three papers on a loose theme, such as Human-AI Collaboration, Responsible Data Management, or Ethical AI. Unlike most conferences, tutorial and CRAFT sessions2 were distributed throughout the conference, rather than being held beforehand, and covered topics like how to communicate across communities, lessons learned within industry, mapping surveillance, and co-designing with workers.

For me, one of the most interesting threads this year was work on auditing and accountability.3 It feels like there is now broad understanding and agreement within the community that algorithmic harm is not something that can be solved with a simple technological fix. As such, much of the community’s attention has turned to investigating, documenting, and holding people responsible for the ways in which algorithmic systems are affecting people’s lives, especially among already marginalized communities.

Along those lines, several papers and sessions touched on the topic of audits. Although auditing has become an increasingly popular approach to holding institutions accountable, there are definite limits to its power, and audits even have the potential to strengthen the legitimacy of questionable systems.4

As an example of why this is tricky, consider a CRAFT session that Evani Radiya-Dixit ran, presenting a framework she developed for auditing the use of facial surveillance by police. This work considered three real-world deployments of such systems and was able to consistently document ways in which they failed to meet quite minimal legal and ethical standards. As was raised during the Q&A, however, developing such a framework entails the risk of legitimating the use of facial surveillance by police; if they are able to meet the auditing standards, then they could potentially claim that a system is in some sense accredited or even ethical, whereas many others would oppose any use of the technology in this context.

A fantastic example of this kind of questionable repurposing of research was presented by Meg Young, from her paper on corporate capture at FAccT. In addition to looking at corporate funding of and participation5 in the conference, her work specifically examined a paper published at FAccT 2021, which performed a “cooperative audit” of Pymetrics, a company that makes a job applicant screening product. That research was funded by the company, and the paper’s authors included both academics and company employees, with the Pymetrics CEO as last author. In addition to pointing out the potential conflicts of interest and the limitations of the reviewing process (in which reviewers would not have been able to see authorship and funding information6), the paper revealed how this work was later used by the company as the basis for marketing claims that it had been “independently audited”. Moreover, the company has been a strong supporter of legislation in New York that would prohibit employers from using such tools unless they have been audited, which would potentially have the effect of creating barriers to competition.

The session in which that paper was presented was one of the most interesting I attended, featuring three papers that were critical of FAccT itself. In addition to the first paper, Ben Gansky presented a paper critiquing “metadata maximalism” – the approach of trying to get researchers to improve transparency and documentation (while potentially ignoring the question of whether this actually helps the subjects of classification) – and Elayne Ruane presented research on how papers published at FAccT conceptualize and operationalize harms and disparities, showing a tendency within the community towards treating these problems in the abstract, rather than focusing on concrete harms to specific groups.7

That issue in itself might sound somewhat abstract, but it takes on greater salience when paired with presentations in other sessions. Without wanting to call out any particular paper, I will just say that I attended multiple presentations in which authors rhetorically positioned themselves firmly within the norms of FAccT, including adopting the framing of sociotechnical systems and systemic change, but which nevertheless seemed to me to fall into the familiar trap of approaching all problems as amenable to being understood using computational models.

Obviously mathematical modeling can have great value and power in some contexts, but I am skeptical of most attempts to apply ideas from fields like dynamical control or complex systems to social problems, no matter how well intentioned. However sophisticated these models are, the real world will inevitably be more complex. Treating social worlds as systems that can be modeled using a few equations seems to work against the more broad-minded thinking that the titles of these papers suggest, and risks falling into the same traps as more simplistic work on bias and fairness in classification.

Similar tensions were present in some of the keynotes. Tino Cuéllar (former Justice of the Supreme Court of California) spoke on a topic I did not expect – the risks posed by large language models.8 His talk characterized the current and/or next generation of such models as being “hyper-social”, in the sense that they will be capable of producing coherent natural language across a wide range of contexts. Drawing on a subtle combination of ideas, he argued that even though we recognize that these language systems might be poor substitutes for human interaction, the demand among certain groups (especially the elderly, who may have fewer other options, and the young, who may creatively explore this space of possibilities), combined with pressure from companies to sell these technologies, is likely to lead to the pervasive deployment of systems that, while not truly competent, may nevertheless be persuasive, dangerous, and yet potentially beneficial in some contexts. I personally think this assessment is likely to be correct, though, as participants rightly pointed out in pushing back, we need to be careful not to reinforce the narrative of hyper-competence when attempting to raise the alarm about these developments.

The question of hype was also raised in William Isaac’s interview with Karen Hao, who has covered AI, technology, and society for the MIT Technology Review and now the Wall Street Journal. For those familiar with the actual research happening in AI, it is easy to see the extent to which this research tends to be exaggerated and hyped in the media. However, this discussion helpfully illuminated some of the tensions involved. Drawing on her own experience, Karen described walking a fine line between wanting to push back against false claims and not wanting to bring additional attention to research that might best be ignored.

Unfortunately, as AI is deployed more widely, more and more journalists are being tasked with covering it without a sufficient understanding of the technology, and as such they are likely to listen to whoever is speaking the loudest. Her advice was that researchers who speak publicly to counteract hype are extremely helpful, as this provides resources that journalists can use to push back against misleading narratives.

The interview also raised a number of issues and questions related to the North American and European bias that tends to exist at FAccT. Part of this is the pervasive Western perspective in describing various aspects of algorithmic systems, such as using the term “ghost work” to describe labour that is all too obvious in the countries where it is taking place. The other part was a discussion of the notable lack of participation by AI researchers from China in FAccT. This seems especially pressing, given the differences in how these technologies are being pursued in China compared to other parts of the world, and the role that China is likely to play in the global deployment of AI.9 As Karen pointed out, Chinese AI researchers are in fact actively following the AI ethics conversations in the West, but do not always feel free to speak about these issues; Western researchers, by contrast, mostly have no idea what’s happening in China.

Finally, I also recommend André Brock’s keynote on what he called “The Libidinal Algorithm or Weak Tie Racism”. This was one of the most philosophically sophisticated talks at the conference, and it is not easily summarized. Building on critical race theory and the idea of race as technology, André framed black people as expert users of technology who nevertheless experience “authorless” racism through this engagement, especially on social media, in part because of the ways in which platforms tend to amplify potentially traumatizing content. It was a complex and subtle presentation, full of humour, developing a rich network of concepts, so it’s worth watching in full.

The conference ended with a town hall, which provided excellent transparency about the challenges, costs, and organizational structure involved in running the conference, along with a summary of some of the tensions within the community.10 As it turns out, the concerns expressed by the participants were largely mirrored by findings reported in a great mixed-method retrospective paper that was also presented at this year’s conference.

In particular, people are delighted that FAccT exists as an interdisciplinary space, but there is some concern that the sub-areas within the conference are too fragmented and not sufficiently in communication with each other, both at the reviewing stage and in presentations. Moreover, several people expressed concern that FAccT remains, or has grown, somewhat insular, with limited participation by people who are not already working on FAccT topics (not to mention those outside academia), as well as limited reach of these ideas beyond this community, especially to more mainstream machine learning researchers. There are also definite tensions around how central CS should be to the conference, how to make this work truly collaborative with those affected, and how to fund the conference without compromising the community’s values (especially when the majority of the sponsorship comes from large tech companies).

Overall, I would say the conference was a huge success, despite somewhat lower-than-expected participation by more senior researchers. Josh Lazar did an exceptional job running things, and seemed to be everywhere at once, both in person and online. However, given the costs and difficulty of trying to run a full in-person and full online conference simultaneously, the tentative plan is to switch to a model of alternating each year between an online conference and a hybrid (i.e., in-person with streaming) conference. If you have strong opinions about this, it seems like the organizers would love to hear from you.11


  1. Pre-recorded videos are now available for all of the papers, as are live recordings of the keynotes. Unfortunately it does not appear that any sessions other than keynotes were recorded, which means there is a great deal that has not been archived, including all paper Q&As and most tutorial and CRAFT sessions. All papers are available on the conference website. ↩︎

  2. CRAFT = Critiquing and Rethinking Accountability, Fairness and Transparency ↩︎

  3. A few of the many papers on this topic include Who Audits the Auditors? Recommendations from a field scan of the algorithmic auditing ecosystem, Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning, and Making the Unaccountable Internet: The Changing Meaning of Accounting in the Early ARPANET. ↩︎

  4. Although it was presented elsewhere, a great reference here is Deb Raji’s paper on the limitations of audits, including when they can work well, and what we can learn from efforts in other sectors. ↩︎

  5. Among the corporate affiliations listed by attendees (in-person and online) in their profiles on this year’s conference platform were: AdeptID, Adobe, Apple, DeepMind, ETS, Elsevier, Google, HireVue, HuggingFace, IBM, Indeed, JP Morgan, Kigumi Group, Korea Electric Power Corporation, LG, Medtronic, Meta, Microsoft, Naver, Oxford Wave Research, Parity, Pinterest, PwC, RELX, Salesforce, Sony, Spotify, Tableau, Twitter, Uber, and Zalando. ↩︎

  6. In fact, this very issue came up for me in reviewing for FAccT, in which a paper I otherwise liked generated concerns about possible conflicts of interest, due to a lack of information about funding, authorship, and positionality. ↩︎

  7. In addition, this work found that among those papers which are explicit about groups that may be impacted, authors overwhelmingly focus on race and gender. ↩︎

  8. I have a Twitter thread summarizing this keynote. ↩︎

  9. During the town hall, the organizers explained that part of the motivation for holding the conference in Korea was the hope that it would encourage and enable more researchers from China to attend. Unfortunately this did not happen, in large part due to the pandemic. ↩︎

  10. I also have a Twitter thread summarizing the main points presented at the Town Hall, along with questions and comments from the audience. ↩︎

  11. For those who attended this year, there is also a post-conference survey where you can provide feedback: https://t.co/jNTEUEinhO ↩︎