
    Bias in AI: Lessons from Elma Glasgow on Why Inclusive Design Matters

    by Melda Findikli



Imagine uploading a simple selfie to enhance your lips with AI—only to get back an image that looks nothing like you. This was the experience of Elma Glasgow FRSA, an award-winning cultural producer and inclusive engagement expert. What she expected to be a small, fun experiment became a striking reminder: AI is not neutral.

Elma’s story is both humorous and alarming. Using Wix AI to lightly enhance her selfie, she discovered her skin tone, hair, and facial features had been altered—producing a white version of herself. What’s more, the AI slimmed her body. She hadn’t asked for any of this; she just wanted a subtle lip colour change.

This is not just about vanity—it’s a real-world example of bias baked into AI systems, and it has consequences far beyond selfies.


    Why AI Can Be Biased

    AI models learn from the data they are trained on. If datasets are not diverse—or if the developers building the AI lack diverse perspectives—the results reflect those biases.
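The mechanics are easy to see even in a deliberately tiny sketch. The model and dataset below are hypothetical, invented purely for illustration: a trivial "model" that learns nothing but the most frequent label in its training data. When the dataset is skewed toward one group, the under-represented group effectively disappears from the model’s output.

```python
from collections import Counter

def train_majority_model(labels):
    """A trivially simple 'model': always predicts the most frequent
    label seen during training."""
    return Counter(labels).most_common(1)[0][0]

# Hypothetical skewed dataset: 9 of 10 training examples come from one group.
training_labels = ["light"] * 9 + ["dark"]

prediction = train_majority_model(training_labels)
print(prediction)  # → "light": the under-represented group is never predicted
```

Real AI systems are vastly more complex, but the principle scales: whatever is scarce or absent in the training data tends to be scarce or absent in the model’s behaviour.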

Elma points out that much of AI development is dominated by white male perspectives, often using datasets that treat whiteness as the “default.” Bias in AI can shape decisions based on skin tone and other characteristics protected under the Equality Act 2010, with impacts across healthcare, criminal justice, finance, and employment.

    For instance, studies show AI programs trained on images of lighter skin can misdiagnose conditions on darker skin tones. Dr. David Wen from Oxford University Hospitals explains that such bias may lead AI in healthcare to miss rashes or skin lesions in darker-skinned patients. The consequences are not minor—they can result in false positives, unnecessary biopsies, and health inequities.


    What Needs to Change

    Elma advocates for urgent action: fairness, transparency, and community engagement at every stage of AI development. Equity should be a foundational requirement, not an afterthought.

    At Kimolian.ai, we echo this approach. Inclusive design and testing are not optional—they’re essential. AI systems must be trained on diverse datasets, reviewed with community input, and continuously audited for bias. Only then can we ensure AI benefits everyone, not just the majority.
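One concrete form such an audit can take is disaggregated evaluation: instead of reporting a single overall accuracy, measure performance separately for each demographic group and compare. The function and data below are a minimal sketch with made-up numbers, not any particular auditing tool.

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Compute prediction accuracy separately for each group,
    so disparities hidden by an overall average become visible."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical audit data: the model is right on group A, wrong on group B.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 1, 1, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(accuracy_by_group(y_true, y_pred, groups))
# → {'A': 1.0, 'B': 0.0} — a 50% overall accuracy would mask this gap entirely
```

An audit like this, run continuously as models and data change, is one practical way to make the equity requirement measurable rather than aspirational.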


    Elma’s experience reminds us that AI is a human creation—and humans make mistakes. But with inclusive design, transparency, and ethical engagement, we can create AI systems that serve all communities fairly.

    We invite you to read Elma Glasgow’s full article on LinkedIn and join the conversation: How can AI be designed for everyone, not just some?

    🔗 Read Elma Glasgow’s article

    Melda Findikli


Melda Findikli is a recognised figure in technology marketing, known in particular for her contributions to digital and social media marketing. She holds a degree in Project Management from a prestigious university, where she developed her expertise in Strategy for Leaders and product marketing.

