Stability and Change

One of the biggest frustrations with software is that things are constantly changing. From operating systems to apps to web interfaces, little stays the same for long, especially for users of Windows or macOS. There are many reasons for this, of course. For decades, hardware has continued to improve at a steady rate, and so software is constantly being rewritten to take advantage of the latest capabilities. Moreover, the incredibly sloppy standard for software quality and reliability (compared to traditional engineering disciplines) means that even the most professional software is shipped with massive numbers of bugs and vulnerabilities, which constantly need to be patched.

Confidence in Science

I’ve been thinking recently about the role of confidence in science, and how long beliefs can persist simply because everyone else seems to believe them. Coincidentally, Andrew Gelman posted about this two days ago, responding to comments from a biologist about how the replication crisis had not been a major problem in biology. Her argument was that this was because biology is a “cumulative science”. By this she meant that when something important gets published, it is often the kind of discovery that people want to use immediately.

AI Dermatology: Part 2

In the last post, I discussed the possible broader implications of Google’s recent foray into making an AI dermatology tool. In this follow-up post, I want to focus on the research behind the product announcement, bringing a slightly critical eye.

AI Dermatology: Part 1

Midway through last year, Google announced a new foray into the medical technology space, sharing that it was developing an “AI-powered dermatology assist tool”—a phone-based app that would allow users to take photos of skin lesions and retrieve information about relevant medical conditions from the web. Similar apps already exist, but it’s fair to say that a comparable effort by Google is likely to have much more significant effects on how people interact with the medical system, their personal data, and even their own bodies.

Vaccine Allocation at Stanford Hospital

In a video that was widely shared last Friday, a representative from Stanford Medical Center spoke to residents protesting how the hospital chose to allocate its first shipment of COVID-19 vaccines. The hospital had around 5,000 initial doses to distribute (and expects to have tens of thousands more within the next few weeks), and came up with an allocation scheme in which only 7 of the approximately 1,300 residents were on the list. Many of these residents deal directly with patients who have COVID-19, yet more senior physicians, as well as other front-line workers such as nurses and food service employees, were given priority. In the video, the spokesperson explains that the algorithm they used to come up with an allocation scheme “clearly didn’t work”, to which protestors respond by shouting “Algorithms suck!” and “Fuck the algorithm!” …

The case for professional critics in science

In many areas of science, there is an increasingly urgent unmet need, a role that could be simultaneously fascinating, rewarding, and potentially remunerative. It is a role that already exists in various forms, but which could be made into something much more potent, especially if forces converged to make it more prominent. I am talking, of course, about the professional science critic. In the popular imagination, science operates something like a priesthood: scientists enter elite institutions as novices and emerge years later as full-fledged representatives of The Truth.

Representational Power

The original plate of “View from the Window at Le Gras”, a heliograph made by Nicéphore Niépce around 1827.

Although it isn’t normally thought of in these terms, taking a photograph involves recording a four-dimensional block of space-time and projecting it down to a two-dimensional representation. With sufficiently sensitive material (and a fast enough shutter), one can produce images more or less instantaneously, but longer exposures reveal the inherent temporality of this process, showing us something that is clearly based on the world, yet quite different from our experience of it. Today, the ability to create images is so commonplace, of course, that we easily take it for granted, but early commentaries on photography reveal just how extraordinary it once was. Indeed, the history of photography provides both a compelling example of the power of representation, and a useful parallel to more recent forms of technological magic, especially that of machine learning.

On the Perils of Automated Face Recognition

Anthropometric data sheet (both sides) of Alphonse Bertillon (1853–1914).

For anyone who has been paying attention, it will not have gone unnoticed that the past year has seen a dramatic expansion in the use of face recognition technology, including at schools, at border crossings, and in interactions with the police. Most recently, Delta announced that some passengers in Atlanta will be able to check in and go through security using only their face as identification. Most news coverage of this announcement emphasized the supposed convenience, efficiency, and technical novelty, while underplaying any potential hazards. In fact, however, the combination of widely available images, the ability to build on existing infrastructure, and a legal landscape that places very few restrictions on recording means that face recognition represents a unique threat to privacy that should concern us greatly.

What everyone needs to know about interpretability in machine learning

For anyone who’s been paying attention, it should be apparent that statistical machine learning systems are being widely deployed for automated decision making in all kinds of areas these days, including criminal justice, medicine, education, employment, policing, and so on. Particularly with the recently enacted GDPR—the new European regulation on data and privacy—there is growing interest in systems that are interpretable, that is, systems for which we can make some sense of why they make the predictions they do. To borrow an example from Been Kim, if a computer tells you that you need surgery, you’re probably going to ask for some sort of explanation.

Privacy in Context

Although it was not the largest of its kind, or the most invasive, or even particularly surprising, the recent Cambridge Analytica scandal produced a remarkable amount of outrage and commentary. If nothing else, it was yet another reminder that we have gradually slipped into a regime where certain aspects of our privacy that could once be taken for granted are now long gone. Are people concerned? Is this something we should be worried about?