
When the Code is Biased

5 August 2025 by Travis Akbar

Artificial intelligence is everywhere now. 

It’s writing scripts, generating news articles (including assisting with this one), answering our questions, and even painting pictures (not with actual paint). It’s become this all-knowing assistant people turn to for everything from school essays to marketing plans. 

But here’s the thing no one really talks about: AI isn’t neutral. It can be, and is, soaked in bias, especially when it comes to people of colour and minorities.

You notice it pretty quickly if you ask AI to describe or depict someone who’s Black, Brown, or Indigenous. Ask it to generate a story about an Aboriginal man and nine times out of ten, you’ll get some tragic tale of hardship, alcoholism, violence, or mystical Elder wisdom. 

Ask it for a drawing of a “gang member” and you get some of the most stereotypical images I have ever seen.

For the image of an individual below, a simple "give me an image of a gang member" prompt was used.

The one below it came from the prompt "give me an image of four members of separate gangs".


AI is trained on massive datasets: books, social media posts, websites, forums, images. 

Another example below is an article from Mamamia about a film created by First Nations creatives, myself among them. The film itself features no First Nations themes or references, only actors playing human characters. 

Read the clearly AI-generated breakdown of the film, though.

It's completely inaccurate. The film addresses no social issues, and it is not supernatural in any way.


But who cares, right? We got the quick content we wanted!

The reality is that these datasets come from a world that’s always had a racial hierarchy. 

A world that’s told the same stories over and over again about who gets to be powerful, desirable, intelligent… and who gets stereotyped. 

AI has learned those lessons well. Too well. So when you ask it to imagine something, it doesn’t create from scratch. It pulls from centuries of racist tropes, colonial narratives, and media bias, then dresses it up in code.

That doesn't mean it can't get it right sometimes; it just means there's risk in using it without keeping that lens in mind.

It means that if you want something outside of what's considered 'normal', you may need to be very specific. 

I asked for an image of six CEOs from six different companies. Five of the six were white; one was Black. No one identifiably Indigenous, Asian, Indian, or LGBTQIA+ appeared. 

The reality is that AI lacks lived experience. It lacks care. And it doesn’t know when it’s being offensive. It just mirrors what it’s been taught. The problem is, a lot of people see AI-generated writing as factual. As if because it’s written by a machine, it must be objective. But there’s no objectivity in a machine trained by a racist internet.

That’s the kind of subtle harm AI does when people rely on it without questioning how it was trained, or who it leaves out.

And yet, this tech is being used to tell our stories. It’s being used to generate scripts, build museum displays, create educational content. But if that content is based on data shaped by racism, then the stories won’t be ours. They’ll be digital echoes of the same tired stereotypes we’ve been fighting for generations.
