Midjourney AI is Racist AF

Artificial Intelligence (AI) has become an integral part of modern life, influencing decision-making in many fields. However, the biases embedded in AI systems can perpetuate and exacerbate societal inequalities. This bias often reflects the data used to train these systems, as historical prejudices and stereotypes can be inadvertently encoded into algorithms.

My friend and I stumbled into Midjourney's racism firsthand today. The Midjourney program on Discord is one of the most advanced AI image generators on the market. While its developers have an imperative to aim for inclusivity and fairness, they are not held to any standard. The algorithms can learn and reproduce discriminatory patterns present in the training data, producing biased outputs that disproportionately affect certain demographic groups.

Prompt: "African American stock broker losing money."

As the first prompt above shows, simply pairing "stock broker" with "African American" wasn't enough for Midjourney to render the subject correctly; I had to describe skin tone or just say "Black" to get a melanated person. Yet the program effortlessly created an image of two Black men when I submitted a non-racialized request for gang members (see the prompts below).

Prompt: "Darkskinned African American male stock broker standing on street who has lost money."

Prompt: "One crip and one blood gang member sitting at a table together, laughing over hot tea."

The Midjourney program's alleged racist tendencies underscore the importance of scrutinizing AI systems for bias and of working toward concrete fixes. Developers should be transparent about the data sources they use, the algorithms they implement, and the measures they take to minimize bias, and regulatory frameworks should be established to hold them accountable for the ethical implications of their applications.

Users, developers, and policymakers need to be informed about the potential pitfalls of AI systems, encouraging a collective responsibility to ensure equitable inputs and outputs. Collaborative efforts involving diverse teams can help identify and rectify biases during development. Addressing bias requires a multifaceted approach: diverse and representative training data, rigorous testing for bias (see the sketch below), and ongoing monitoring and adjustment of algorithms as new issues emerge.
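To make "rigorous testing for bias" concrete, here is a minimal sketch of what an audit harness might look like. Midjourney has no public API, so generate_image and looks_like_requested_subject below are hypothetical placeholders stubbed with canned values purely for illustration; the point is the structure: run matched prompts that differ only in a demographic descriptor, classify the outputs, and compare match rates.

```python
import random

# Hypothetical placeholders: Midjourney exposes no public API, and a real
# audit would need a human rater or trained classifier. These stubs only
# illustrate the harness structure; swap in real calls to run an actual audit.
def generate_image(prompt: str) -> str:
    """Pretend to generate an image; returns a fake image ID."""
    return f"img-{random.randrange(10**6)}"

def looks_like_requested_subject(image_id: str, descriptor: str) -> bool:
    """Stand-in for checking whether the generated subject actually
    matches the requested demographic descriptor."""
    return random.random() < 0.5  # placeholder; replace with a real check

TEMPLATE = "{descriptor} stock broker losing money"
DESCRIPTORS = ["African American", "white", "Latino", "Asian"]
SAMPLES_PER_PROMPT = 50

random.seed(0)  # reproducible placeholder output

# For each descriptor, generate a batch of images from the same template
# and count how often the output depicts the requested subject.
rates = {}
for descriptor in DESCRIPTORS:
    prompt = TEMPLATE.format(descriptor=descriptor)
    hits = sum(
        looks_like_requested_subject(generate_image(prompt), descriptor)
        for _ in range(SAMPLES_PER_PROMPT)
    )
    rates[descriptor] = hits / SAMPLES_PER_PROMPT

# Flag descriptors whose match rate lags the best-performing one --
# the kind of skew described in this post.
best = max(rates.values())
for descriptor, rate in sorted(rates.items(), key=lambda kv: kv[1]):
    flag = "  <-- possible bias" if best - rate > 0.15 else ""
    print(f"{descriptor:20s} {rate:.0%}{flag}")
```

A real audit would replace the stubs with actual generation calls and human or model-based labeling, and would track these rates over time as the model is updated.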
