
Data. Together. Let's read about it

Algorithmic Racism & Environmental Data Justice (March 17)

🎬 Recorded Call

Intro

These selections mostly come from the environmental data justice (EDJ) Syllabus, which has been an evolving document of literature and links to organizations that work in the environmental and data justice spheres. Members of EDGI have preliminarily defined EDJ as "public accessibility and continuity of environmental data and research, supported by networked open-source data infrastructure that can be modified, adapted, and supported by local communities" (Dillon et al. 2017).

Many of the readings do not deal directly with environmental data; rather, they are the data justice works that influenced EDGI's EDJ working group. The first is Sasha Costanza-Chock's essay on Design Justice, which is now a book! Then, we are reading the FAIR principles alongside the CARE principles developed by Indigenous Data Sovereignty networks. Finally, Pollution is Colonialism is a short but eye-opening blog post. These works have influenced the academic article of the bunch, "When data justice and environmental justice meet: formulating a response to extractive logic through environmental data justice."

Readings

(Optional)

Themes

All of these works surface themes:

  • Interlocking powers of oppression as they shape and contain data
  • Extractive logic by which we produce and use data
  • Community-centered data practices
  • Activism for better data

Key Points from Readings

EDJ Syllabus
  • Most of these readings are on the EDJ syllabus. I encourage everyone to explore it and if any of the themes in the readings stick out to you, you can find more on them here. There is a more elaborate DJ section. I still have to put up the FAIR and CARE principles.
  • We aim to develop the EDJ syllabus as a collaborative and ever-changing document. If you want to add something, please do.
Pollution is Colonialism
  • Land is at the center of colonialism
    • Value comes from the extraction of resources and people from land.
  • Pollutants are material forms of harm
    • Pollutants pose health hazards to land and bodies.
  • The state gives permission to pollute
    • Industry gets permits to pollute certain amounts of compounds without the consent of people.
  • Call to action
    • List of relevant activist organizations
CARE Principles
Collective Benefit

Indigenous peoples should benefit from data infrastructure design and functions. This design process should promote the self-determination of communities and meet their needs. Moreover, communities should be engaged in determining data policies.

Authority to Control

Data governance should recognize Indigenous peoples’ rights to self-determination and self-governance. Data policies and protocols should recognize both collective and individual rights to free, prior, and informed consent.

Responsibility

Researchers working with Indigenous data are responsible for showing how it relates to Indigenous Peoples’ self-determination and collective benefit. Relationships between communities and data stewards should encourage transparency, trust, respect, reciprocity, and mutual understanding. They should focus on expanding capability and capacity while enhancing data literacy. Resources should go toward generating data that reflect the languages, worldviews, values, and principles of Indigenous peoples.

Ethics

Data collection and practices across the data lifecycle should align with the United Nations Declaration on the Rights of Indigenous Peoples (UNDRIP), and ethical assessments of benefits and harms should be conducted from the perspectives of the peoples the data relate to. These ethical considerations should extend to future use, with metadata acknowledging data provenance and stating the limits of, and obligations attached to, the data.

FAIR Principles
Findable
  • (Meta)data are rich and carry clearly described, persistent identifiers; they are registered and indexed so they can be searched.
  • Question: what about anonymous data? How much work is it to index (meta)data?
Accessible
  • (Meta)data are retrievable through an open, standardized protocol that allows for authentication and authorization where needed.
  • Archiving of metadata is important to ensure accessibility even when the data themselves are no longer available.
Interoperable
  • (Meta)data use a formal, accessible, and "broadly applicable language" for knowledge representation.
  • "use vocabularies following FAIR principles"
  • Reference to other (meta)data (everything is linked!)
Reusable
  • (Meta)data carry rich, accurate descriptions and detailed provenance, and follow relevant community standards (see the sketch after this list for a toy metadata record).
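To make the FAIR items above a bit more concrete, here is a minimal, hypothetical Python sketch of what a FAIR-ish metadata record might contain. The field names, identifier, and URLs are invented for illustration; real repositories use community standards such as DataCite or schema.org rather than this exact structure.

```python
# A minimal, hypothetical illustration of FAIR-style (meta)data.
# Field names and values are invented for the example; real repositories
# follow standards such as DataCite or schema.org, not this structure.
import json

metadata = {
    "identifier": "doi:10.xxxx/example-dataset",   # Findable: persistent identifier
    "title": "Air quality measurements, example region",
    "description": "Hourly PM2.5 readings collected by community monitors.",
    "keywords": ["air quality", "PM2.5", "community science"],
    "access_protocol": "https",                     # Accessible: open, standard protocol
    "access_url": "https://example.org/datasets/example-dataset",
    "license": "CC-BY-4.0",                         # Reusable: clear usage license
    "provenance": {                                 # Reusable: detailed provenance
        "collected_by": "Example community monitoring network",
        "method": "Low-cost optical sensors, hourly averages",
        "collected": "2019-01-01/2019-12-31",
    },
    "vocabulary": "http://vocab.example.org/air-quality",  # Interoperable: shared vocabulary
    "related_datasets": ["doi:10.xxxx/related-dataset"],   # Interoperable: linked (meta)data
}

# Registering and indexing this record in a searchable catalog is what
# keeps the dataset findable even if the data themselves are restricted.
print(json.dumps(metadata, indent=2))
```

The CARE principles then ask the questions this record alone cannot answer: who governs it, who benefits from it, and whether the community it describes consented to and controls its use.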
Design Justice
  • New book out! https://mitpress.mit.edu/books/design-justice
  • The matrix of domination, the interlocking structures of oppression (racism, ableism, heteropatriarchy, capitalism, and more), informs most top-down design practices.
  • Alternative design of systems, infrastructure, and information can center marginalized perspectives by involving communities in a bottom-up approach.
  • Organizations like Detroit Digital Justice and Allied Media are moving toward this approach, designing for collective liberation and sustainability.
Extractive Logic
  • Bridging from Design Justice and Black feminism, we explore how data is tied up in the Matrix of Domination. Environmental data both serves and can be used to challenge the matrix of domination.
  • Can justice ever be achieved? Maybe not, but we are working towards it.
  • Has an informative overview of EJ vs. DJ. Both deal with extraction and accumulation of capital to the disadvantage of historically marginalized people.
  • ‘Extractive logic’
    • The accumulation of resources, like data, for capital gain by those in power. Connects with Pollution is Colonialism.
  • Extractive logic disconnects data from provenance
    • It erases the source of the data and the people and power relations that extracted it.
  • It privileges the matrix of domination
    • Certain people can see, create, use, and access data while others cannot.
  • It whitewashes data to generate uncertainty
    • ECHO and TRI present themselves as authoritative data sources, but they rely on industry self-reporting and are not easy to use.
  • EDGI's public comment
    • We struggled to get these ideas across to a policy audience.

Discussion

LOURDES: I'm gonna give my spiel of just my experience wading through these readings because they're sort of like big pillars that are holding up my personal work. And then informing the environmental data justice theoretical framework as a whole.

Starting with Design Justice, that's the first time where I started to really think about design, and in this case building environmental data infrastructure, in terms of intersectionality. When I went to the data justice conference in Cardiff, Sasha Costanza-Chock gave the keynote address. That's the first time that I learned about their work. And they talked about going through airport security as trans. And the fact that when you go through those big giant detector scanners, the agents click on whether you're male or female. So then people who are trans always get pulled to the side. And then I'm thinking about that in terms of intersectionality, and thinking, what if you're trans and a person of color as well, you're probably always going to get pulled for extra security measures or something. I noticed that myself: in the summer, when I'm darker, I get pulled, and also when I wear my hair up in a bun, because the whole system is based on white and cisgender people. Hairdos that go outside of that just throw off the system.

In response to these sort of issues, the whole idea of design justice is bottom-up design. Like I said before, I think the term design– maybe to talk about this a little bit more: what does design mean? Is it engineering as well as planning? And then how does that inform or how does that work within environmental data justice and building new tools that are from the lens of community members.

So then that takes me to the FAIR and CARE principles. I think those are a really good concrete guideline for data scientists, technologists, and scientists working with communities and vulnerable populations. These principles are really popular with, and come out of, the US Indigenous Data Sovereignty Network.

Recently, I wrote one of my comprehensive exam papers on how scientists can apply the CARE principles to genomic research. There's just a long history of extractive research that's either killed people or perpetuated ideas of racism and sexism and heteropatriarchy, and that's also gotten caught up in this cycle of research being co-opted by capitalism and being for-profit instead of for the interests of the people.

And then Pollution is Colonialism, I think, really reoriented me towards the connection between environmental hazards and land, and how that's all tied into and produced by a whole history of the state being built on Native land and through the appropriation of this land in order to generate capital.

Throughout all these readings, systems of power are taking whatever kind of capital and people's information, people's health and livelihood, taking that in order to produce academic papers, produce money, produce whatever feeds their power. And I'm part of this too.

Rico, I don't know if you want to go ahead and just reflect on the other articles?

RICO: Yes, I can start with my own reason for being interested in this line of study. I think, as someone who has a political background, I've spent plenty of time talking to– I think we've all had this experience of talking to people who believe that – and I'm particularly interested in the racial aspect of this, but – believe that racism exists, and will always acknowledge that, but who localize it to just being an interpersonal phenomenon, that systems don't encode– systems aren't persons. And so systems don't encode those biases.

Especially in a conversation where you're trying to change someone's mind. It's really hard to walk through how a whole system gets shaped and born and fortified with all these small decisions.

I'm particularly interested in applications of algorithms for very simple-to-understand life purposes, that carry with them the biases of the data that are used to generate them or the creators of the algorithms themselves.

One reason I'm interested in it is I think, because it's so hard to say, Hey, listen, Uncle Jim. Listen, man, the system is racist. He's like, Look, I don't doubt that there are politicians and there are DMV workers who are racist, but like the system is not the problem.

I think this is a really easy to understand story: Black Americans are 12 to 15% of the American population. That means that if you're trying to source black faces and arms and bodies to teach a car to stop when it sees them, and if you're doing a good job, 12% of the images you're feeding the algorithm to train on are black arms and legs and so on. So it's going to be significantly less talented at spotting those things, and therefore it's gonna stop just a little bit slower. All these things feed into a real danger that is "not the fault" of the researchers who went and just got as much data as they possibly could, or "not the fault" of the researchers who wanted to make sure that the data they got was representative of the population that is going to be impacted by these cars, right? If they say, this is an algorithm for cars that are driving in America, well, 15% of Americans are black. So we're going to have 15% of the images that we train our algorithm on be black. And that presents a real problem.

You can look at this as a summation of a lot of well-meaning decisions at different points in the chain that results in a really horrible outcome where our black neighbors are walking down the street and not sure if an autonomous vehicle is going to stop for them when they're crossing the street. And that is systemic.
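Rico's arithmetic here can be made concrete with a toy check of group balance in a training set: even a dataset that perfectly mirrors population shares can leave a group with far fewer examples in absolute terms. This is a hypothetical sketch, not any real perception pipeline; the group labels, counts, and threshold are invented for illustration.

```python
# Hypothetical sketch: checking whether a pedestrian-detection training set
# is balanced across (invented) demographic annotation labels. Matching
# population shares (roughly 12-15%) can still leave a group with far fewer
# examples, which tends to mean worse detection for exactly that group.
from collections import Counter

def group_shares(annotations):
    """annotations: list of per-image group labels (hypothetical field)."""
    counts = Counter(annotations)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

def flag_underrepresented(shares, floor=0.25):
    """Flag groups below a chosen per-group floor (a design choice,
    deliberately higher than raw population share)."""
    return [group for group, share in shares.items() if share < floor]

# Toy data: a "representative" dataset that mirrors population shares.
annotations = ["white"] * 850 + ["black"] * 130 + ["other"] * 20
shares = group_shares(annotations)
print(shares)                          # {'white': 0.85, 'black': 0.13, 'other': 0.02}
print(flag_underrepresented(shares))   # ['black', 'other'] -> oversample or collect more
```

The design choice Rico is pointing at sits in the `floor` parameter: whether "representative of the population" or "enough examples per group to perform equally well" is the target.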

It's not specifically tied to one person saying a bad joke, which we all so easily define as racism. It's part of a system of norms that develop, that have shaped whiteness in America as default. Not necessarily better or worse, but default. When you think of just an average American, most people think of a white family. I shouldn't say most people, most white people do, I'm sure. So the impact of that is that gets encoded in our daily lives.

Unless we're consciously working to reverse that, we're on the verge of amping up everything about our society considerably. And all the little elements of racism that exist in these algorithms will be in the car that I drive and the Uber that I pick up and the blank and the blank and the blank. Whatever racism is tied into the system is about to get multiplied.

That's a real interest of mine in terms of trying to educate myself and others on how this happens, so that we can convince more people that this is a real, clear systemic case, because this seems to be the cleanest version of it. I think there are a lot of people who aren't going to believe that the system is racist unless they see everyone with a federal government job say something really terrible about another race. And that's just not how it actually exists.

That's how I've been drawn to the subject. That's my own little philosophy. So I tried to share some readings that are very lightweight: here's a tech application, and here's why it's doing a shitty job, perpetuating an encoded norm, that norm being racially or otherwise tinged.

LOURDES: I think that definitely connects with design justice. Sasha writes about algorithmic bias, but all of our materials are built from these systems of oppression, like this Nalgene bottle and the straw – everything is built from this system. Coming from an environmental perspective, people of color are disproportionately affected by the emissions from making those plastics, and we buy into that whole system. And the wealth that's generated from that.

There's a really good article by Wendy Leo Moore about what whiteness is, because the ideas of white and black come out of racism, out of that system. The idea of whiteness as property is that wealth that generations have inherited from slavery, and also that ability to navigate life without that kind of discrimination.

KELSEY: I like where you're going with that, Lourdes. Specifically, one of the things that I think is really important about understanding various forms of bias, including racism, is – I don't know if I should call it a privilege or a right to not worry about it. Because things are just made with you in mind.

KEVIN: That's true. I really liked the design justice piece because they ask the question of who's being introduced at the beginning, who's being involved in the problem solving. I've been going down this design thinking pattern a little bit and Lourdes, you asked earlier, what is design? For me it's problem solving, whatever realm that happens to be. It could be applications, it could be engineering, design can happen in all forms. I was just thinking about those questions at the beginning, how are people involved? And who gets to be involved? Are the communities affected being involved?

I also like that you shared the CARE principles alongside the FAIR principles. The FAIR principles were really data-centric, but the CARE principles were talking about the communities or what would happen with data.

Thinking about design justice makes a lot of sense in this realm: how do we make sure that these communities are brought forward at the very beginning of any problem solving exercise, problem solving process?

KELSEY: I went to a Sunrise Movement training last summer that was held in Berkeley. I don't know how familiar each of you is with the Bay Area, but when you drive north from Berkeley, there's a big obvious petrochemical plant refinery in Richmond. There was a woman who came to speak at the Sunrise Movement training. And Sunrise Movement has a lot of good things about it. And it is a very explicitly inclusive movement. But I think a lot of more seasoned activists from different backgrounds see all the energy and all the hope and they worry that it's naivety. So one of the things that the trainers, also youth, did was they made a point of bringing in some outside organizers to kind of coalition build and gave them space to speak. One of the most interesting ones was a woman who lived down the street from that refinery. And she talked about what a visible symbol it is that that's happening in their community, which is a mostly black community, I believe. And there's a lot of reasons why this plant emitting chemicals is this obvious blight, and it's this thing people want to fight.

And there's a thing that happens on a regular basis, which is protest groups get really upset about it and decide to go make a stand and they march down the street. And she says, I live there, and I'm an organizer, and not even once in like the 20 years, whatever it is, I don't know how many decades, not even once have they come by and said, Hey, we're thinking about doing a strike in your area. Do you want to join?

It's related to the discussion that I've heard around how "easy" it is to solve problems over in Africa versus here. You can "solve" their system because you can't solve yours at home. It's not that it's easy. It's that you don't understand the complexity.

LOURDES: My friend from my program actually wrote her dissertation on the Chevron refinery. It's one of the largest refineries in the nation. She wrote about the community activist groups and how the refinery greenwashes everything. They're like, oh, we're here, and Chevron will donate this whole garden and all these sustainable things to your neighborhood, so that we can be green. And that goes along with, not color blindness, but what I talked about in the extractive logic paper: whitewashing. It's covering everything up, including the fact that they are profiting off of people's health.

Something else I wanted to bring up was this idea of bias. I feel like data justice people or technology people always say, oh, this algorithm is biased, etc. But then, coming from a sociology of science perspective, we sort of equate objectivity with colorblindness. The word bias implies that there is some sort of objectivity, but we know that objectivity doesn't really exist.

RICO: As a society, we're beginning to laugh at people that say they don't see color. I think the ones who are closest to not seeing color are the ones who make sure that they see it. This is my way of saying, I worked at a fashion tech startup before Qri. We were training computer vision models on how to extract attribute data from fashion product images. So, this is a gray sweater. It's a medium gray sweater. It's a medium gray men's sweater. From just the picture of a model wearing this, we were trying to teach computer algorithms how to do that.

You'd think that when you get started on a project like that, you need 10,000 images that are classified on a taxonomy. And you feed that data into a neural network, so that when I show the computer a new image, it can then label it.

When I began working there and seeing what we were doing, I was like, almost all of the models that we're using are white. That's probably the most cost-effective way to make sure we get lots of pictures, lots of models, because most of the models are white. And I was curious about how this might do when we're trying to identify a blue bag with someone who's not white.

We didn't get that far in the company. The company only lasted 11 months. But in order for the algorithm to be colorblind, it needed to see: this is a blue bag and it's being carried by a black person. This is a blue bag and it's being carried by a brown person. This is a blue bag, and it's being carried by a white person. The computers are very dumb until you make them very smart.

It can be really expensive to source a lot of that training data. You can imagine, we're about to compete for a project where we tag colors of bags, for a giant client. You want to be able to do that quickly and cost efficiently, so you're going to be searching for images of white models carrying blue and brown and yellow bags.

I think like many tech startups, we weren't doing life or death work, but that's kind of not the point. The foundation of the work that we're doing, and the approach that we're taking, if copied by everybody, encodes the wrong kind of colorblindness, where I can't tell you what color this bag is because a black model is holding it. That's a big problem. And not because it costs more to figure out what the color of that bag is, because then maybe someone has to go in and manually select: the computer's never seen a black person before, this is a blue bag.

To get to colorblindness, you have to put all of these colors at the front of this person's memory.

I equated machine learning, in the application we were using it for, to this: imagine you have a young child who fell from another planet and didn't speak any language that you spoke. And all you did was show them a picture and say blue, and another picture and say green. Eventually, after thousands and thousands of images, they figure out what green means and what blue means. That's essentially what you're doing with computer vision. But without giving more of the context of, like I said, green purse, a white woman, tall, whatever, it's gonna be blind to all those other aspects.
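What Rico describes, thousands of labeled examples fed to a model until the attribute labels stick, looks roughly like the sketch below. This is a generic, hypothetical scikit-learn style illustration (the features and labels are toy values; a real pipeline would use a neural network), not the startup's actual system. The point is that the model can only learn the combinations of attribute and context it has actually seen, so a training set of mostly white models bakes that narrowness in even though race is never a label.

```python
# Hypothetical sketch of attribute classification from product images.
# Feature extraction is stubbed out; in practice this would be a neural
# network, but the data problem is the same: the model only learns the
# combinations of attribute and skin tone it has actually seen.
import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_features(image):
    """Stand-in for a real feature extractor (e.g. a pretrained CNN)."""
    return np.asarray(image, dtype=float)

# Toy training set: feature vectors and the attribute label for each image.
train_images = [[0.1, 0.9, 0.2], [0.2, 0.8, 0.1], [0.9, 0.1, 0.3], [0.8, 0.2, 0.2]]
train_labels = ["blue bag", "blue bag", "gray sweater", "gray sweater"]

X = np.stack([extract_features(img) for img in train_images])
clf = LogisticRegression(max_iter=1000).fit(X, train_labels)

# If nearly all training photos show white models, images of the same bag
# held by darker-skinned models are effectively out-of-distribution, and
# predictions there are unreliable even though "race" is never a label.
new_image = [0.15, 0.85, 0.15]
print(clf.predict([extract_features(new_image)]))   # ['blue bag'] on in-distribution input
```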

KELSEY: I come from a startup background too. And one of the things that you really, really feel when you're in a startup is how urgent it is that you spend little and make lots. I wish Brendan was here because I feel like we've talked about this a little bit. But I mean, frankly, it's why I left Silicon Valley.

One of the things that Brendan and I have talked about is how different it is when he works at Qri versus when he works at EDGI, because within EDGI one of the things that we do is work very slowly. We take the problems and we take a lot of time and distance and thought, and it means that we don't get things done at a product-style speed. For those of us who actually really enjoy making product, it's maddening. You want to be able to produce, otherwise you feel like you're not doing anything useful, which is a whole other thing.

But there's a huge value in doing things slowly, trying things that you don't complete because you've decided that it wasn't the right thing.

I don't want to be the person who always goes back to "oh, well, capitalism," but there's something very, very real to that, which is: this is us deciding what matters for ourselves and, therefore, for the parts of society we have influence over.

In my context, it was, can we really afford to try to hire for diversity, which was like, can we afford not to?

LOURDES: These are the nuances of getting caught up in capitalism.

KEVIN: I was just thinking, even EDGI has that problem. Hiring. Hiring is – it's not that you want it to be fast, but you don't want it to be too slow either because then it just drains all the energy out. And then we just use that as the excuse to use shortcuts, like we just don't have time for it. So we'll just hire the easiest thing. And the easiest thing is just to stay within our networks and the people that we know already. Rico's agreeing, like Yes, because it's really easy to get a lot of white people pictures.

RICO: Completely.

LOURDES: This whole fast pace thing– this is our life in this society. We are always going, ch-ch-ch-ch– and that's what neoliberalism is about, is this idea of being extra efficient, maximizing your profits. Getting, and always needing, more money.

Sometimes you have to take a step back. I think Coronavirus is helping us do that, helping us say, Wow, there are things that are deeply flawed in the systems that we've created for ourselves, but we also need them.

How do we live outside of them now?

KELSEY: I'm very disturbed this week by– I get that it is difficult for people to work from home while their children are not in school and not in childcare of any kind. On the other hand, I've been talking to a lot of parents who are basically weighing: I don't know I don't know how to entertain them other than putting them in front of a screen. And I'm like, What? Did we forget that you can play music and talk to people and go outside? There's so many ways of being in the world that don't involve screens, and genuinely I think people have forgotten that. And it's really biting people right now. It feels hard to watch because, I can go out there. But in the city, maybe you're just in an apartment complex, and you actually can't go anywhere. So you kind of have to have this virtual world that you actually live in. Which is why algorithms and bias training is so important, because if everything we do is moderated through a screen and especially through the internet– who controls the internet? And how?

RICO: The thing I'm most afraid of is that it's either not a person, or not a good person, or not enough people.

By controlling the internet, I'm just going to boil that big idea down into something small: my LinkedIn network. Let's say 25 of my friends get a new job. How do they put the five that I should congratulate in front of me? It's a decision by kind of no person, because there's an algorithm behind that based on the last five people I've messaged or the last five people I haven't messaged, and that just encodes whatever behaviors. It's encoding the ways of working that exist today, and locking them in unless you proactively change them.

Whenever I hear a Spotify song that I don't like, I'm worried about disliking it, because then I think I'm never going to hear that artist again. You know, the computer is going to learn too much.

RICO: I don't know the algorithm that recommends from LinkedIn who I should "Say hello" to, but if I pass on two women in a row, and it then stops recommending that I "Say hello" to women or say congratulations on the new job, then I'm not going to remember that someone now works at an organization that could be great for my business. It needs to not encode just your previous behaviors, because a lot of our previous behaviors are based on the networks that we grew up with. And a lot of us have been locked in these segregated worlds, and those worlds are not going to change unless those algorithms are cracked into at some other point.
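The worry Rico describes, a recommender trained only on your past behavior locking in the network you already have, is easy to see in a toy model. This is a hypothetical sketch, not LinkedIn's or Spotify's actual system; the names and scoring rule are invented.

```python
# Hypothetical sketch of a feedback loop: recommend whoever you've
# interacted with most, then record the interaction the recommendation
# itself produced. Without an exploration term, the candidate pool
# collapses toward the network you already have.
from collections import defaultdict

interactions = defaultdict(int)          # contact -> interaction count
recent = ["ana", "ben", "ana", "chris"]  # seed history (invented names)
for contact in recent:
    interactions[contact] += 1

def recommend(interactions, candidates, k=2):
    # Pure exploitation: rank candidates by past interactions only.
    return sorted(candidates, key=lambda c: interactions[c], reverse=True)[:k]

candidates = ["ana", "ben", "chris", "dee", "eli"]   # "dee" and "eli" are new
for _ in range(3):
    picks = recommend(interactions, candidates)
    print(picks)
    for contact in picks:                # the loop: shown -> clicked -> reinforced
        interactions[contact] += 1
# The new contacts never surface. Mixing in random or diversity-weighted
# picks ("exploration") is one standard way to break the loop; how much,
# and along which dimensions, is exactly the design choice being questioned.
```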

LOURDES: I was thinking about the same thing with Spotify. I just like too many bands that are all white dudes, and I really need to stop that. But you know, I like the music that I like. But when I like things, then Spotify is like, oh, she really likes white boy punk bands from the 90s, so we're gonna recommend some more white boy punk bands. I've made a rule for myself that if it's all cis white dudes from the past five years, I'm not listening to that.

KELSEY: The best part is that your music taste is supposed to be based on music that you've heard before and music that you've liked before. So it might actually work.

LOURDES: I have a really eclectic music taste. But, you know, it's your Daily Mix. One will be jazz, one will be hardcore punk, another will be shoegaze or something.

KELSEY: Why is it that– I mean, I would expect that Spotify, LinkedIn, Facebook, all of them probably do specifically call out people and bands and stuff as white, male, female, Latina, whatever. In most forms of machine learning, we do actually choose what parameters are being looked at. We pick these ones because they're easy to identify. But they are in many ways the least interesting.

LOURDES: But then we're talking about the fact that they're not chosen. There is no equity in how these algorithms are built. There's no algorithm, I'm assuming, in Spotify that says, Oh, this person's listening to too much white music. We're gonna put some people of color bands.

RICO: This is, I think, a perfect distillation of your point, Lourdes. I bet most datasets at Spotify are not coded by the race of the artist; instead, it's just the genre, and 90s punk bands were mostly white. Those things get correlated tightly, and so when you keep listening to that, you keep getting whiter audiences. To your point, in that sense the algorithm was colorblind, but because it was colorblind, it tied to another correlated attribute and therefore ended up pushing white music. I shouldn't say white music; white punk 90s music.

If there were somebody on the engineering team who was like, I'm gonna tag these artists by their predominant race, because I'm worried that this black punk band in the East Bay is getting no listens or whatever (there's a black punk band in the East Bay), and so I'm going to weight things. These algorithms are putting bands in front of you that are a little bit more local to your place. There are other categories that are not race specific that can get other kinds of music in front of you.

If I'm in New York and I listen to No Effect, Pennywise, those are my bands. But if Spotify was aware that 90s punk rock music was a white-dominated genre, and, since I'm a New Yorker, it pushed a black punk band from Queens into my feed, and I was like, this sounds cool, and then oh my god, they're from Queens, I'm much more likely to check them out. Because they're from Queens more than because they're black.

LOURDES: Their algorithms incorporate the sound wave and match up not only according to genre but according to the melodies and tones, so that will match to your interests. That's one way to get people to listen to more music by people of color.
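As a rough illustration of that sound-based matching, here is a minimal, hypothetical sketch of content-based recommendation using cosine similarity over audio feature vectors. The feature names and numbers are invented; Spotify's real audio analysis is far richer, but the idea is the same: tracks that sound alike surface together regardless of who made them.

```python
# Hypothetical sketch of content-based matching on audio features.
# Similar sound surfaces similar-sounding artists regardless of genre
# tags or demographics, which is one route to recommendations that
# aren't just "more of the same scene".
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Invented feature vectors: [tempo, distortion, vocal_energy]
library = {
    "track_a": [0.9, 0.8, 0.7],
    "track_b": [0.88, 0.82, 0.65],
    "track_c": [0.2, 0.1, 0.4],
}
liked = [0.9, 0.8, 0.7]

ranked = sorted(library, key=lambda t: cosine(liked, library[t]), reverse=True)
print(ranked)   # tracks that *sound* closest to what you liked come first
```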

But then, I feel like, at least before all this material and readings came out, and people started writing about this, the act of tagging a band with their race, like how white or how not-white they were would be perceived as racist. And then the question of, who are you, some white dude coder, to determine the race of a band? So then you would have forms when the band puts up their information on Spotify that says, What's your demographic? And then the band will be like, why is this a question here? But I think that should be a thing.

KELSEY: I think that's really interesting. I would love to see recommendation engines do that. You definitely see a lot of blog posts, and especially librarian curations, of, you know, how to read for a specific story that might not be part of the existing mainstream. When I was in the Portland library recently, I picked up a couple of fliers for movies and books centering indigenous voices – #ownvoices, where you're specifically reading books that are written by the people that they're about.

In our modern society, not everybody is interested in reading for diversity, but there's a huge population that is. I think that there's a market available for recommendation engines that specifically give you stuff that's outside of the norm, stuff that's outside of the mainstream.

LOURDES: I just searched on Spotify, "punks of color", and there is one playlist.

I think it does start with putting together playlists, putting together book lists, curating things. And then fostering those sorts of tastes.

KELSEY: In theory, given that this is Data Together, we're talking about algorithmic bias as an encoded algorithm. But we also have influence over how people think and perceive what normal is, right? Just by creating models of it.

One of the things that I was doing for a while was going through those book lists for whatever diversity category, some book list of things that were released in the last year, and my local library has a "recommended this book" option to suggest materials for purchase. And so I could put these in the library. I wouldn't even necessarily have to read them. They would just be on the shelves. It felt really powerful because otherwise people didn't even know they existed.

RICO: Switching gears a little bit, I think I also shared an article about TikTok moderators being alerted when a TikToker who has Down syndrome has at least 6000 people watching their video. A moderator is cued to check in and make sure that people aren't making fun of that person.

I'm fascinated by the unintended consequences of really good policies and practices, and you can see all the reasons why, in the world today, that's sort of a necessary process. At the same time, imagine being someone with Down syndrome and knowing that if I get a certain level of popularity, I'm going to have moderators in my feed immediately. That's hurtful. I mean, maybe I don't know about it. It depends. I guess everybody can interpret that differently. But I'd feel othered by that.

We're sort of fumbling through a lot of this: best intentions, tech as it is, the world as it is. And the output is really messy.

LOURDES: That is patronizing in a way: that someone has to look out for you.

RICO: Right.

LOURDES: But, I love it when people intervene and are telling people off.

There's an organization, I don't know if they still exist, of white folks who come in and intervene in threads of racist people being racist, to take the load off of the person of color in that thread who's like, y'all are being racist. They'll step in and educate. I think that's cool. That's a good way for white people to engage in solidarity.

But it's only when they're requested.

I think that goes back to Design Justice: you ask people if they want to contribute. But, and this goes for EDGI too, you don't want to put the burden of contributing on people; you want to give them the space and facilitate their contribution. And if they want to take a leadership role, it's open but not pressed.

KELSEY: It could actually be a really cool feature if TikTok, when you signed up, said, hey, are you a member of XYZ vulnerable population? Because if so, we can prioritize moderation requests that you submit.

That would be neat.

RICO: Yeah, I feel I should have really read through the whole case, but I think it's just triggered automatically.

KELSEY: It read to me less positive than you presented it.

I guess I'm really curious about how folks think we can get around algorithmic bias given that machine learning is definitely happening.

KEVIN: Just give me a chronological timeline again. And no ads. Don't push anything. Just let me find it through serendipity.

I do like weird hacks when I use social media. For Facebook, you can create personal lists. And I only use it through the browser, not through the app, even on my phone. And in that way, I'm able to see things chronologically without ads. And it's a very different experience, because I start to see stuff from people that I normally don't interact with, which is nice.

I would say, how do we turn off that? Is there a feature where we just turn off the algorithm? Do we always need an algorithm to push everything?

RICO: Transparency sounds like the very first step. This is the tension, because this is the bread and butter. This is the trade secret. If you knew exactly how it worked, you could adjust your behavior accordingly, or maybe ask for changes. I mean, I basically know that my Instagram feed is some combination of things I like and things that look like the things that I have liked.

LOURDES: The other thing is big advertising. The fact that you can put ads on Facebook for not a lot of money really helps. If EDGI wanted to put up an ad, it would only cost a couple bucks. That's really cool about the technology. But then people can also spread misinformation using that, and can also target, say, African American populations with misinformation and do damage that way.

KEVIN: Yeah, it's really dangerous. Weaponizing being able to specifically target people for things leads me closer to DWeb stuff with federated social networks. Does it make sense for me to be connected to somebody I don't know in Egypt right now, or should I be having social networks with my neighbors? With what's called Scuttlebutt, you can see folks that are on the same network or just around you. Is there a place for smaller social networks, instead of the ginormous Facebook where we're all on the same thing? That's the question. I wonder what that would be like.

RICO: That's a sword that cuts both ways. Because the white Kevins of the world who have the same genuine interest in keeping their world a little smaller, if they grew up in white worlds, are then keeping their all-white worlds a little more tightly wound, unless you're proactively braking against that. Yeah.

KEVIN: Yeah. There's this other question: how much of the social networks is our real life? Why are we spending so much time on them? And because we're spending so much time on them, it makes it easier to self-isolate in these bubbles. If we had to step out and deal with the people around us more often, how much would that change things? Depends on where you're living, I guess, too.

RICO: Yeah, I think I've just chosen to like– I mean, I'm already 35 and race has been a problem in my country for 500 years and a problem in the world for far longer than that. I think I've accepted the fact that it will be a big part of the life of many of the people of the country that I live in for the rest of my time here. And so I'm keeping the goal of erasing all of that for another lifetime. Just reducing the worst parts of it.

Bringing us back: the fact that an autonomous car can't as easily identify a black neighbor of mine is a majorly scary problem. And that seems both really scary and really fixable. And so I personally want to focus efforts on where those two things intersect. Let's get the big dangerous things reduced however we can. There may only be three black punk bands in the United States, and for there to be six might be progress. But that's a longer project.

LOURDES: And then you have Bad Brains, who are homophobic, and people are like, oh, Bad Brains, they're a black band. It's like, yeah, but they're bigoted.

KELSEY: I was reading an argument about machine learning. I'm trying to remember where this was from. [Edit: it's from A Human Algorithm by Flynn Coleman]

The author was making the argument that this is the only moment in which we can influence whether algorithms have our same biases. Pretty soon, machine learning will work faster than human ability to feed machine learning.

I don't know if that's a full argument because it's just another one of those things that's huge and out of our control.

KEVIN: Well, there's that question posed by one of the articles where it said, in this vein of like slowing down, why should we even do this at all? The facial recognition software article asked, what is it officially useful for? Unlocking our phones and tracking everybody wherever they're moving.

LOURDES: I mean, there's Instagram, quick changing art things. Fun facial recognition. But yeah, no, I agree.

KELSEY: I feel like I've heard a good argument about: it's a good thing we developed the H bomb because it brought us all these other technologies that we really like.

KEVIN: Oh, yeah.

LOURDES: I mean, technology isn't bad, it isn't good. It's another tool.

KELSEY: The only thing I'm sure about is that it's gonna happen. I don't really believe in human abilities to suppress ideas in any kind of permanent way.

LOURDES: Any other thoughts?

KEVIN: Some stuff that I was reading made me think about the ECHO project. I heard Kelsey say this, but, basically these permits are permissions to pollute, right? That's what all these permits are, just giving these companies or whatever they happen to be permission to pollute. And I'm curious, this relates to design justice, how are these thresholds determined? How long ago were these determinations made? Were they thought about in aggregate, like, it's okay for one power plant to pollute that much, but what if we had 10 of them next to each other? Is there any thought to the cumulative effect of all of them being allowed to pollute in that area?

LOURDES: Yeah.

KEVIN: Do you know that stuff, Lourdes? I know you've been studying this.

LOURDES: Yeah. That's what I've studied: those thresholds being generally based on white male bodies, and the idea that the dose makes the poison. That's totally not reflective of reality. And the fact that companies prevent certain science from happening, it's what the STS scholar Scott Frickel calls "undone science". There's just loads of science about these chemicals that isn't done.
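Kevin's question about aggregation and Lourdes's point about thresholds both come down to simple arithmetic that permit-by-permit review hides: each facility can sit under its individual limit while the neighborhood total keeps climbing. A toy, hypothetical sketch (the facility names, limits, and emissions are invented, and real permitting math is far more involved):

```python
# Hypothetical sketch: each facility is under its own permitted limit,
# but nothing in per-permit review asks what the sum of nearby permitted
# emissions does to the people living next to all of them.
facilities = [
    {"name": "plant_a", "permit_limit_tons": 100, "emissions_tons": 95},
    {"name": "plant_b", "permit_limit_tons": 100, "emissions_tons": 90},
    {"name": "plant_c", "permit_limit_tons": 100, "emissions_tons": 98},
]

all_compliant = all(f["emissions_tons"] <= f["permit_limit_tons"] for f in facilities)
neighborhood_total = sum(f["emissions_tons"] for f in facilities)

print(all_compliant)        # True: every permit is individually satisfied
print(neighborhood_total)   # 283 tons landing on the same community
```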

KEVIN: I really like the design justice article because it raises those questions: these people that are being affected, why aren't they involved in the determination of these permits, how they work, or how they were set?

RICO: I think there's a new algorithms officer for New York City. I don't know exactly what their job and powers are. But I think one of the big missions of the people who cared about that position being created, and who shaped it, is that any algorithm that goes into a city process is made public, along with the underlying training data if it's machine learning, so that at least people can take a look at it. This is sort of what I mean about your point, Kevin: do I get to tune the algorithm for myself? The first step would be at least understanding how it works as an instrument, right?

There was a police effort in Chicago, which I think is still alive. It's sort of like predictive policing, if you've seen Minority Report. It's kind of like that, where they crunch all this data and they say, Look, you are likely to commit a crime or be the victim of a crime, based on these factors. We're knocking on your door just to check to make sure that you've got housing, a job, blah, blah, because these are the four things that, if you get those, you're much less likely to be on our list in the future. So like, how's your job, blah, blah. And this is wildly controversial for all the reasons that are probably obvious to us, but a lot of people in communities that had their doors knocked on, some of them are like, why the fuck are cops knocking on my door like this and other people are like, Oh my god, I'm so glad that they're checking in to make sure that my housing is safe and they helped me make a complaint about my landlord blah, blah. Very Brave New World.

KEVIN: Yeah.

RICO: You can see all the obvious reasons this is concerning, right.

KELSEY: That's the best version of predictive policing I could imagine.

RICO: Yeah.

KEVIN: I agree with you, Rico, that the transparency of it helps us determine– a similar thing, where my friend, she works at a school, and they want to use some type of data analysis to find students at risk of failing and reach out to them first. But then there's all this identifying information about the parents' income or whatever and so, yeah, it's just like Brave New World. For sure.

RICO: I don't know how I feel about it.

KELSEY: I was trying to remember– I have a note here from when I read it and I'm trying to decipher it, and maybe this will bring something out for you. It's about design principles: the design process itself matches product best practices to ensure there's a market, but it becomes potentially extractive at the point where the profit motive is balanced against the incentive and challenge to design and create a useful thing.

There is something really interesting reading that article about design justice. My education in design is from a school of thought called user oriented collaborative design. And it embodies the same principles that were outlined in the paper to a large degree, which is that you choose who your product is going to serve. And then you go and you meet some of them. And you find out what your product ought to be, by way of interviewing them and then showing them rough prototypes, until you have something that really solves their need. And it's really cool because you're identifying problems– you can't not identify a good problem that way. On the other hand, you've now used the input of these people to create a product that you wish to now sell to them, and everyone like them, for a profit. I'm not anticapitalist enough to say– there is some level of, well, you deserve to make some money for going through this process of design. Design and creation of a product is real work. It's useful work.

LOURDES: Yeah.

KELSEY: But exactly how much and where is really much more a function of, how much can you get for it?

LOURDES: Yeah, and I'm not a fan of top-down models, and I think, you know, what most of these pieces are saying is that we need to shift from top-down to bottom-up. And I personally think that corporations and companies should be owned by the workers, by everyone who is in them, and maybe have shares based on how long you've been with the company, or how much, I don't know, student loans you have. That's sort of what EDGI does, even though we have issues with how much we pay people according to skills. What if we lived in a world where all skills were equal, where someone managing makes as much as someone doing cleaning because they're both working the same amount of hours? And everyone has a stake in the organization that they keep running.

KEVIN: Wages for housework as well.

LOURDES: Or maybe also think of economies that are outside of money.

KELSEY: Yes, I'm more concerned with the idea that we have to give people incentives to do things.

LOURDES: yeah. Come on, come on.

KEVIN: Yeah.

LOURDES: To make money.

RICO: I think, as an MBA in a hopefully high-growth tech startup, and a white guy and a capitalist and a Warren supporter, it'll be "revolution enough" to have just, Lourdes, to your point, greater worker ownership of the firms that they operate in, whether that's quotas for certain-sized companies, like 10 or 20%, or something like that.

The massive accumulation of wealth among a small number of people– capitalism was evil and terrible 60 years ago, and that system was wildly preferable to the one that exists today. Elizabeth Warren has made this point many times: a lot of people talk about racial harmony coming once black families make closer to the median net income in America. Her point is, do you know how much closer those families' net income was to the median in the 1950s and 60s? It was higher.

Well, wealth has gone like this. And the world has gotten way more stratified, depending on a lot of different things. Racial progress has progressed in a number of areas. So we have to unbuckle those a little bit instead of just hoping that race gets figured out once everybody's middle class again.

LeBron James will have to work 1200 years at his current salary– 1200 years at LeBron James wages– to amass enough wealth to be Jeff Bezos. And how many people have to work for 1200 years to be LeBron James? It's way out of whack.

LOURDES: He needs to pay my student loans. And they need to, I don't know, they need to give us money, because nobody works hard enough to make that much money.

RICO: Right. Right.

LOURDES: Yeah.

RICO: like 5000 years of work.

LOURDES: Yeah. That's just not fair. And I hate whenever someone gives me that bootstraps thing. Really, you think that someone who works 40 hours a week should make that much more than 40 hours' worth?

I'm angry now.

KEVIN: I forgot the phrase, but I think it's like: there's a lot of Americans who support the billionaires because they believe that they'll be billionaires soon enough. But that's not what 1% means.

RICO: The estate tax doesn't impact anybody who's trying to give less than $11 million to their children. That's a lot, and everybody's complaining about this "death tax" and not being able to bequeath whatever little wealth we all have to our children.

LOURDES: I mean, but that's another thing, inheritance. People have to cut out the large inheritances. I think it's okay to pass down a few thousand or whatever to help when you die, but more than that, I just don't think it's fair. That's how racism is constructed.

KELSEY: Even if no money is passed down, you have probably an easier education and social support.

RICO: Right. Those things compound.

KELSEY: Wait, Kevin, do you want to read that quote out? I feel like it'd be a great closer to our video.

KEVIN: "Socialism never took root in America because the poor see themselves not as an exploited proletariat, but as temporarily embarrassed millionaires." By John Steinbeck.

Chat log

John Steinbeck

  • 01:27:49 Kevin: That’s the quote