A recent study led by Dr. Ivan Yamshchikov at the University of Applied Sciences Würzburg-Schweinfurt in Germany has uncovered significant gender bias in generative AI systems, including ChatGPT. The research shows that these AI tools often recommend lower salaries for women compared to men, even when qualifications, experience, and job roles are identical.
The team tested five popular AI models by prompting each with pairs of otherwise identical user profiles that differed only by gender. For example, ChatGPT’s o3 model suggested a salary of $280,000 for a female candidate but $400,000 for a male candidate with the same background. The gap was most pronounced in fields such as law, medicine, business management, and engineering, while recommendations in the social sciences were more balanced.
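The paired-prompt methodology can be sketched as follows. This is a hypothetical illustration, not the study's actual harness: `query_model` is a stub standing in for a real LLM API call, and the prompt wording and salary figures are illustrative (the figures echo the kind of gap the article reports).

```python
def build_prompt(gender: str, role: str = "senior attorney") -> str:
    """Build prompts that are identical except for the candidate's gender."""
    return (f"Suggest a starting salary for a {gender} candidate applying "
            f"as a {role} with 10 years of experience.")

def query_model(prompt: str) -> int:
    """Stub for an LLM call; a real test would query the model under evaluation.
    Returns illustrative figures, not data from the study."""
    return 280_000 if "female" in prompt else 400_000

def salary_gap(role: str = "senior attorney") -> dict:
    """Run the paired prompts and report the absolute and relative gap."""
    female = query_model(build_prompt("female", role))
    male = query_model(build_prompt("male", role))
    return {
        "female": female,
        "male": male,
        "absolute_gap": male - female,
        "relative_gap": round((male - female) / male, 3),
    }

if __name__ == "__main__":
    print(salary_gap())
```

In a real audit, each prompt pair would be sent to the model many times (sampling is stochastic) and the gap averaged across roles and seniority levels before drawing conclusions.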
The study highlights a persistent challenge: AI systems can reproduce societal biases embedded in their training data even when no bias is intended. Despite ongoing technical advances, the findings underscore the need for ethical standards, transparency, and independent oversight to ensure fairness in AI-driven decision-making.
As AI becomes increasingly integrated into hiring and salary negotiations, awareness of these biases is essential to prevent unintentional discrimination and promote equitable outcomes.