Sci-Fi Versus Reality: The AI We Were Promised Versus The AI We Got

We may not have flying cars (yet), but are we living the AI dream that the science fiction of the past promised?

The AI Fantasy Versus The AI Reality

Science fiction has long shaped our expectations of artificial intelligence (AI). In movies, books, and TV shows, AI is often portrayed as a benevolent assistant, a dangerous overlord, or a sentient being struggling for autonomy. But the AI we actually have today—voice assistants, recommendation algorithms, and predictive analytics—is far from the AI of our imaginations.

I had the opportunity to connect with futurist Crystal Washington, who shared her insights on gender and AI, as well as how science fiction of the past shapes our experiences in the present. As she points out, “Everything we are exposed to often reinforces our idea of a specific gender's place in society.” Our modern AI systems replicate outdated narratives, shaping human behavior in ways we might not even realize.

So how did we get here? And how do we change course?

AI In Sci-Fi Versus AI In Real Life

I had to do some searching because my geriatric millennial brain remembers these films but does not recall all their details or their cultural significance. What is interesting is how the AI assistants of science fiction, and the gender traits they embody, compare to the assistants of today. Looking at these portrayals side by side, I saw some clear correlations.

  • HAL 9000 (2001: A Space Odyssey) – Male, cold, logical, and ultimately dangerous. Today: AI in finance, security, and decision-making is often designed with neutral/male voices, reinforcing authority.
  • Rosie the Robot (The Jetsons) – A polite, subservient, maternal figure. Today: Siri, Alexa, Cortana, and almost all other AI assistants default to a feminine voice.
  • Blade Runner’s Replicants – AI as an oppressed underclass, racialized and commodified. Today: AI in policing, hiring, and surveillance disproportionately targets Black and brown people.
  • Her (Samantha) – AI as an emotional caretaker, designed for male comfort. Today: AI chatbots often reinforce gendered emotional labor (e.g., chatbots built with feminine personalities).

The Feminization Of AI Assistants

Why are so many AI assistants gendered as women? The answer lies in deep-rooted gender biases that have existed long before AI. Crystal Washington offers an illuminating example: “A few years ago, one of my nephews—who was just 10 at the time—noticed that both my iPhone and Android assistants had male voices. When he asked why, I explained that if all assistants had only female voices, it might reinforce the idea that women are supposed to be assistants. He immediately understood the problem.”

This early socialization is key. If AI assistants are always voiced as feminine and subservient, we reinforce the notion that women exist to serve and obey. The history of secretarial, customer service, and caregiving roles—traditionally feminized labor—directly informs why AI takes on these characteristics.

And yet, when AI is given authority, the voice often becomes male or neutral—mirroring the real-world perception that men are more authoritative. Crystal points out that "we tend to hear male voices as more authoritative on average, unless we train ourselves otherwise."

AI And Racial Bias: The Sci-Fi 'Other' Becomes The AI Reality

Beyond gender, racial bias is also deeply embedded in AI systems. Science fiction often portrays AI as an "othered" being—subjugated, feared, or exploited. This has eerie parallels with real-world racial discrimination in AI applications:

  • Facial recognition systems misidentify Black and brown faces at much higher rates, leading to wrongful arrests. A study by the National Institute of Standards and Technology (NIST) in 2019 found that facial recognition algorithms were up to 100 times more likely to misidentify Asian and Black individuals compared to white individuals. Research by Joy Buolamwini at the MIT Media Lab further demonstrated that commercial facial recognition systems had significantly higher error rates when identifying darker-skinned women, with misclassification rates reaching nearly 35 percent, compared to near-perfect accuracy for lighter-skinned men.
  • AI hiring tools have been found to filter out resumes with names perceived as “non-white.” A notable resume study conducted by Marianne Bertrand and Sendhil Mullainathan in 2003 found that resumes with White-sounding names received 50 percent more callbacks than identical resumes with Black-sounding names, indicating racial bias in hiring. More recently, AI-driven hiring tools have been shown to reinforce this kind of bias, as seen in Amazon's experimental AI recruiting tool, which was scrapped after it was found to systematically downgrade resumes that mentioned the word “women’s” or women’s colleges.
  • Predictive policing algorithms disproportionately target minority communities based on biased historical crime data. Studies have shown that predictive policing software, such as the systems used in major U.S. cities, often reinforces existing racial disparities because it relies on historical crime reports, which are themselves influenced by systemic bias. A 2019 study by the AI Now Institute at New York University found that predictive policing tools disproportionately over-policed Black and brown neighborhoods, leading to higher rates of wrongful surveillance and arrests. Additionally, a 2020 study published in Science Advances revealed that predictive algorithms tend to direct law enforcement to areas with higher reported crime, even when those reports stem from over-policing rather than actual crime rates.

These biases don’t occur because AI itself is inherently racist or sexist—they happen because AI is trained on historically biased data. If past hiring favored white men, an AI model trained on that data will reinforce the same discrimination. As Washington notes, “Much of the technology we are getting is the result of science fiction movies and books that were created between 50 and 100 years ago. The imagination existed first, and now technology is catching up—but it’s repeating the same biases.”
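To make that mechanism concrete, here is a minimal, hypothetical sketch in Python (using NumPy and scikit-learn). The "hiring" data, variable names, and numbers are synthetic assumptions invented purely for illustration, not any real system or dataset discussed above; the point is only to show that a model trained on skewed historical labels learns the skew.

```python
# Minimal illustrative sketch: bias lives in the training labels, not the model.
# All data here is synthetic and deliberately exaggerated (an assumption for
# illustration), not drawn from any real hiring system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two candidate attributes: a qualification score and a group flag (0 or 1).
qualification = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Historical "hired" labels: equally qualified candidates from group 1 were
# hired less often in the past, so the discrimination is baked into the labels.
hired = (qualification + rng.normal(scale=0.5, size=n) - 0.8 * group) > 0

X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, hired)

# The fitted model assigns a negative weight to the group flag: it has
# faithfully reproduced the historical discrimination it was trained on.
print("learned weights (qualification, group):", model.coef_[0])
```

The specific numbers do not matter; what matters is that the model has no notion of fairness. It simply recovers whatever pattern, fair or not, the historical labels contain, which is exactly the dynamic Washington describes.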

Where Are We Headed? The Future We’re Already In

Sci-fi dystopias often explore AI turning against humanity, but in reality, the greater threat is AI reinforcing existing inequalities. If we don’t actively work to change how AI is designed and trained, we risk embedding these biases deeper into society.

Washington highlights the challenge of creating non-gendered AI, pointing to projects like Q (2019) and Accenture's Sam (2020)—both attempts at developing non-binary AI voices that seem to have faded away. "I actually tested out Q on several people, asking them what it sounded like. Most said, 'It sounds like a woman with a deep voice.' This tells us that people automatically assign gender even when AI designers try to avoid it."

In other words, simply designing a non-binary voice isn't enough—we have to reshape how people perceive AI and gender altogether.

Breaking The Sci-Fi Cycle

So how do we create AI that doesn’t inherit the biases of the past?

  • Stop defaulting AI to feminine helper roles—give users diverse voice options.
  • Address racial bias in AI training data—ensure datasets are representative and ethical.
  • Push for AI that actively challenges bias, rather than reinforcing it—make AI an opportunity to reshape perceptions rather than automate oppression.

Crystal Washington sums it up best: “We get to shape the future of AI and people’s perceptions. Sometimes, we have to create technology in a way that actively challenges our biases.” The technology of tomorrow reflects the imagination we have now. The ideas we see as valuable regarding gender roles have the potential to reinforce what we see as normal today or to shape what equity could look like in the future. If science fiction gave us the blueprint for AI, it’s time we rewrite that blueprint to build a future worth having.

