
Algorithmic Justice

Hey all!

We’re more than halfway through the semester (I think), and we’ve been deliberately taking our time. Please DM me if you need any support, feel lost, or just want to say hello. I’m here for you and hope this asynchronous space can still feel human.

Let’s jump back into thinking critically about the fields within engineering. This goes for everyone, but especially for Computer Science majors: have you considered the ways in which your field carries bias? The ways your field has a profound impact on how society is shaped?

I’m not sure whether these questions are being raised in your Grove courses (I hope they are! Tell me if they are!), but since we’re considering both rhetoric and composition, they must be part of our work here.

For this week, I would like you to watch this 13-minute talk by Dr. Joy Buolamwini about facial recognition and what happens when the sample set skews white and male.

For the module comment, I would like you to consider the following:

Take note of 2-3 rhetorical issues Dr. Buolamwini raises that speak to you. For me, it was her reframing of the “under-sampled majority” as a way to think about who is represented in most technological spaces and who is erased. So often we say “minority” when speaking about the people of the global majority who are not white, and that default standard creates an intentional bias with real implications (think policing, think community funding, think incarceration rates).

Have you ever considered algorithmic bias when using your devices?

What are some ways we can shift the dominant data set?

If you have an experience of algorithmic bias that you want to share, I welcome it in this space but it is not required.

Thanks everyone for staying engaged and enjoy the rest of your week!


3 Comments

  1. One issue that stood out from Dr. Buolamwini’s talk was how 1 in 2 American adults has their face in facial recognition networks that can be searched without a warrant and haven’t been checked for accuracy, leading to people being misidentified as criminal suspects. The problem here is that there is no consent in how one’s face is being used, and moreover, the inaccuracies of facial recognition wrongfully lead to innocent people being convicted of crimes they did not commit. Another issue of interest was the concept of power shadows. Given that so much of the tech space is composed of ‘pale males’, the datasets used to train facial recognition AI are representative of that. This leads to a lack of accuracy when we want to identify darker-skinned individuals or differentiate between men and women (it was also baffling to see AI struggle to identify a person’s sex even when it was restricted to the binary). I agree with Dr. Buolamwini when she says that intersectionality matters in this field, because attending to it prevents algorithmic bias toward “pale males” in these facial recognition technologies.

    Some of the devices I often notice algorithmic bias with are automatic hand dryers and soap dispensers. I’m not sure if this technology uses the same mechanisms as facial recognition AI, but I notice that for my lighter-skinned peers, the dispensers and dryers usually respond instantly. Another place I’ve seen this is when I try to upload a picture of my ID for verification on certain apps; it takes many tries because the apps flag the scans as “too dark” or “unclear”.

    The main course of action is to address why there aren’t more faces from the “underrepresented majority” in the spaces that create and develop these datasets. An overall lack of resources when it comes to school and professional development? Racial biases from employers (which would clash with Title VII of the Civil Rights Act)? Circumstances disproportionately preventing communities of color from entering these spaces? The solution goes back to the resource gap within Black and brown communities, which often leaves them out of scientific discovery and research, as we previously saw with Science Under the Scope. Addressing these systemic issues is what can shift the dominant data set. It can look like having more STEM camps in underfunded neighborhoods. It can look like organizations putting together career fairs at state colleges to inform students of their options. It can also involve holding companies accountable through fines and regulations if their workforce and data sampling come overwhelmingly from one group.

  2. One of the rhetorical issues that Dr. Buolamwini raised was the fact that inclusivity has to be attained through intention. It is interesting to think about how easy it is to follow along with pre-existing data and assume it was developed with positive, unbiased intentions. However, since racism is so deeply rooted in society, developers must start from ground zero in order to achieve technological inclusivity, by being cautious about what data is used.

    Another notable rhetorical issue is her acknowledgment of a counterargument that the laws of physics limit AI’s capacity to recognize darker skin tones and women. She pointed to an instance with IBM to show that it is possible for AI to be more inclusive, and that a lack of urgency is why companies choose not to be. Additionally, the comparison of the various company audits makes it evident that all AI technology is capable of inclusive recognition.

    Previously, I hadn’t considered algorithmic bias when using my devices, because I was under the assumption that technology, being technical, is innately unbiased. However, bias from the technology’s creator is reflected in the final product, especially with AI.

    We can shift the dominant data set by creating urgency, which can start from something as small as an email and extend into a larger movement with boycotts if there is no company response. Losing profit is the greatest motivator for a company.

    Although I did not learn about the biases within AI on my own, doing a project on AI bias with my co-workers and watching the documentary Coded Bias exposed me to such a prevalent issue.

  3. One of the rhetorical issues mentioned by Dr. Buolamwini was the need to achieve inclusivity through intention. Consider how simple it is to follow along with pre-existing material and imagine it was created with neutral purposes. However, because discrimination is so deeply embedded in history, developers must start from the beginning and be careful about the data they utilize in order to achieve technical inclusion.

    I have considered algorithmic bias when using a device, most specifically a smartphone. When I had a Samsung phone in ninth grade, I was able to download a third-party source that gave me access to entertainment without paying. I did not develop the algorithm, but the developers had created what some people might call a bias.

    We can shift the dominant data set by boycotting large corporations that have taken advantage of the working class and made large profits by using cheap labor. Some ways we could do this are cutting off spending at these companies and protesting to politicians about these corporations.


Course Info

Professor: Andréa Stella (she/her/hers)

Email: astella@ccny.cuny.edu

Zoom: 4208050203

Slack: engl21007spring22.slack.com/