Future of Work with Catalyst

We Must Ensure AI Drives Gender Equity

by Nicole Jackson

There’s no way around the increasingly pervasive influence of artificial intelligence (AI) in nearly all aspects of our lives. As companies continue to embrace AI and machine learning, business leaders need to ensure this technology is used to level the playing field, not reinforce overt or hidden bias. Responsible AI is a practice and framework that helps organizations mitigate risk by incorporating strategies, protocols, and procedures before, during, and after AI is deployed.

As more tasks are automated and workplaces shift toward human-machine collaboration, we must remember that AI cannot eliminate human error or bias. If left unchecked, AI can perpetuate and even amplify that bias. For companies trying to build better, fairer, and more inclusive teams and workplaces, as well as those seeking to deliver their products, solutions, or services to diverse populations, unidentified bias in AI is one of the biggest reputational risks.

At the height of the pandemic, Catalyst researchers saw the potential dangers of utilizing AI in hiring practices and found that hidden bias may creep into the hiring process at nearly every phase. In the job-posting phase, researchers found that a platform’s AI uses predictive technology to target those deemed most likely to click—a practice that can create measurement bias.

Likewise, algorithms that assess candidates based on historical data use patterns from previous inputs and may unintentionally perpetuate bias. The data-labeling process used to tag trends in historical data may create poor proxies that cause arbitrary attributes to be ranked favorably. In candidate selection, a system trained on inputs such as the names or backgrounds of existing employees could advance a candidate named Connor while rejecting candidates named Jamal, without system owners ever being aware.
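The proxy effect can be sketched in a few lines of Python. This is a hypothetical illustration, not Catalyst's methodology: the names, historical outcomes, and scoring rule are all invented to show how a model that scores candidates against biased past decisions simply replays that bias.

```python
# Hypothetical historical hiring outcomes -- the skew in this data
# reflects past human bias, not candidate quality.
history = [
    ("Connor", 1), ("Connor", 1), ("Emily", 1), ("Emily", 1),
    ("Jamal", 0), ("Jamal", 0), ("Lakisha", 0),
]

def hire_rate(name):
    """Score a candidate by the historical hire rate of their name.
    The name is a poor proxy: the score encodes whatever bias is
    already in the data."""
    outcomes = [hired for n, hired in history if n == name]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

# The "model" ranks Connor and Emily above Jamal purely via the name proxy.
ranked = sorted(["Connor", "Emily", "Jamal"], key=hire_rate, reverse=True)
print(ranked)  # ['Connor', 'Emily', 'Jamal']
```

A real hiring model would use many features, but the failure mode is the same: any feature correlated with a protected attribute can smuggle historical bias into the rankings.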

Leaders should ensure processes and teams are in place to evaluate system, organizational, and supplier maturity and readiness before implementing AI. Each evaluation phase should be tied to a use case with specific intent, and implementation teams should confirm that the data and datasets are representative of that use case, with review processes and quality-assurance measures in place to evaluate impact before deployment.

Companies should also consider the inherent risks in less obvious scenarios, such as unconscious bias or inattentional blindness on implementation teams, and create mitigation strategies, including training, to lessen potential harm.

It’s important as well to incorporate specific constraints by using human-in-the-loop quality assurance while solutions are being developed, tested, deployed, measured, and monitored. Companies should also establish operational governance, including usage guidelines and best practices, auditing procedures, and accountability measures. While many of these strategies show promise in mitigating bias, they will require continual, iterative refinement as the systems and the data used to create and sustain them rapidly evolve.
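One common way to put human-in-the-loop quality assurance into practice is to let the system act automatically only on high-confidence outputs and escalate ambiguous cases to a human reviewer. A minimal sketch, with illustrative thresholds that are assumptions rather than recommended values:

```python
def route_decision(score: float, low: float = 0.3, high: float = 0.7) -> str:
    """Human-in-the-loop routing for a model's candidate score.

    Only confident scores are acted on automatically; anything in the
    uncertain middle band is escalated to a human reviewer. The 0.3/0.7
    thresholds are hypothetical and would be tuned per use case.
    """
    if score >= high:
        return "advance"       # model is confident: move candidate forward
    if score <= low:
        return "reject"        # model is confident: screen out
    return "human_review"      # uncertain: a person makes the call

print(route_decision(0.9))  # advance
print(route_decision(0.5))  # human_review
```

The design choice here is to treat the model as a triage tool rather than a decision-maker: widening the middle band sends more cases to humans, which costs time but reduces the risk of automated decisions going unreviewed.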

Ultimately, building a more equitable workforce in an AI-powered world requires intentional action and collaboration across sectors. Employers that mitigate bias in their hiring practices and recognize the unique value that teams with diverse backgrounds bring to the table are at a strategic advantage, as their employees also reflect the growing diversity of consumers. DW

Nicole Jackson is a technologist and head of digital transformation at Catalyst.
