Welcome back to Mixtape, the TechCrunch podcast that looks at the human element that drives technology.
For this episode, we talked to Meredith Whittaker, co-founder of the AI Now Institute and Minderoo Research Professor at NYU; Mara Mills, Associate Professor of Media, Culture and Communication at NYU and co-director of the NYU Center for Disability Studies; and Sara Hendren, professor at Olin College of Engineering and author of the recently published What Can a Body Do? How We Meet the Built World.
It was an extensive discussion about artificial intelligence and disability. Hendren started us off by exploring the distinction between the medical and social models of disability:
“So in a medical model, as formulated in disability studies, the idea is just that disability is some kind of condition or weakening, something going on with your body that takes it out of the body’s normative average state: say, something in your sensory makeup or movement is impaired, and therefore the disability lives in the body itself. But the social model of disability is just an invitation to expand the aperture a bit and include not just the body itself and what it does or does not do biologically, but also the interaction between this body and the normative forms of the world.”
When it comes to technology, Mills says, some companies work strictly within the medical model, with the goal of a total cure rather than accommodation, while other companies, technologies and even inventors work more within the social model, with the goal of transforming the world and creating accommodations. But despite this, she says, they still tend to have “fundamentally normative or general ideas about function and participation, rather than disability-forward ideas.”
“The question with AI, and also just with old mechanical things like braillers, I would say, is whether we aim to perceive the world in different ways: in blind ways, in minority ways. Or is the goal of technology, even if it is about making a social, infrastructural change, still something standard or normative or seemingly typical? And there are very few technologies, probably for economic reasons, that really go toward the future design of disability.”
As Whittaker notes, AI is fundamentally normative.
“It draws conclusions from large data sets, and that’s the world it sees, right? And it looks at what is most average in that data and what is an outlier. So it’s something that consistently replicates these norms, right? If it’s trained on the data and then it gets an impression from the world that doesn’t match the data it has already seen, that impression becomes an outlier. It doesn’t recognize it; it doesn’t know how to treat it. Right. And there are many complexities here. But I think it’s something we need to keep in mind as a kind of core of this technology when we talk about its potential uses within and outside of these kinds of capitalist incentives. Like, what is it capable of doing? What does it do? How does it work? And can we think about it, you know, ever possibly in a way that includes the many different, you know, huge number of ways that disability manifests or does not manifest?”
We talked about this and much, much more in the latest episode of Mixtape, so click play above and dig right in. And then subscribe wherever you listen to podcasts.