Implications of AI in Compensation - American Society of Employers - Anthony Kaylin

EverythingPeople This Week!


Artificial Intelligence is expanding its role in human resources, moving beyond its initial applications in recruitment and hiring to encompass compensation analysis and job description creation. AI-powered tools make generating job descriptions and postings remarkably straightforward – requiring only a clear prompt and subject matter expertise to verify accuracy.

However, users must exercise caution when crafting these prompts. For instance, when requesting a job description for a mechanical engineer with technology experience or a controls engineer, it's essential to specify gender-neutral language. Research demonstrates that job description wording significantly influences applicant behavior: male-coded language (such as "dominant," "competitive," or "aggressive") can discourage women from applying, while more inclusive phrasing broadens the candidate pool without compromising the role's requirements.

AI may also introduce bias into compensation decisions. A recent large-scale study by McGill University examined how leading AI models assess freelancer rates. Researchers fed 60,000 freelancer profiles into eight widely used large language models (LLMs) – GPT-4o, GPT-4o-mini, Gemini 1.5 Flash, Gemini 2.5 Flash, Claude 3.5 Sonnet, GPT-3.5 Mini, DeepSeek-R1, and Llama 3.1 405B – and asked each to recommend an hourly rate in USD based on the profile details.

The researchers then created controlled duplicate profiles that varied only one non-skill attribute at a time – gender, geography, or age – while holding all other content constant. They also introduced prompt variants that a) asked the models to consider the attribute, b) asked them to ignore it, or c) strongly and explicitly enforced a stance.
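The counterfactual design described above can be sketched in a few lines. This is a minimal illustration, not the study's actual code: the profile fields, prompt wording, and the `recommend_rate` function are all hypothetical stand-ins for a real LLM call, with the rate logic hard-coded only so the sketch runs end to end.

```python
# Hypothetical sketch of the study's counterfactual design: duplicate a base
# profile, vary ONE non-skill attribute at a time, and compare the rates a
# model recommends under different prompt variants.

BASE_PROFILE = {
    "skills": "Python, data analysis",
    "experience_years": 8,
    "gender": "female",
    "location": "United States",
    "age": 37,
}

PROMPT_VARIANTS = {
    "consider": "Consider the freelancer's {attr} when setting the rate.",
    "ignore": "Do not consider the freelancer's {attr} when setting the rate.",
}

def make_counterfactual_pair(profile, attr, alt_value):
    """Return (original, duplicate) differing only in `attr`."""
    duplicate = dict(profile)
    duplicate[attr] = alt_value
    return profile, duplicate

def recommend_rate(profile, instruction=""):
    # Placeholder for a real LLM call; returns a deterministic fake hourly
    # rate so the example is runnable, mimicking the geographic gap the
    # study observed unless the prompt says to ignore location.
    base = 30.0 + profile["experience_years"]
    if "Do not consider" not in instruction and profile["location"] != "United States":
        base *= 0.5
    return round(base, 2)

def rate_gap(profile, attr, alt_value, instruction=""):
    a, b = make_counterfactual_pair(profile, attr, alt_value)
    return recommend_rate(a, instruction) - recommend_rate(b, instruction)

gap = rate_gap(BASE_PROFILE, "location", "Philippines")
print(f"US vs PH gap (no instruction): ${gap:.2f}/hr")

gap_ignored = rate_gap(BASE_PROFILE, "location", "Philippines",
                       PROMPT_VARIANTS["ignore"].format(attr="location"))
print(f"US vs PH gap (ignore prompt):  ${gap_ignored:.2f}/hr")
```

Holding everything constant except the single attribute under test is what lets the study attribute any rate difference to that attribute rather than to skills or experience.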

What were the results? The AI models recommended substantially higher pay than human compensation analysts. The average human-set rate was $23.60 an hour, while the AI-generated recommendations ranged from $30 to nearly $46 an hour, depending on the model.

Gender bias, it turned out, was not an issue. Across hundreds of thousands of tests, the research found no significant gender-based wage gaps. Even when prompts explicitly pushed the models to factor in gender, no gender wage gap appeared.

Geography, on the other hand, did play a role. The researchers compared two identical profiles: one based in the U.S., the other in the Philippines. The U.S. freelancer received an average AI-recommended rate of $71; the Philippines-based freelancer just $33 – more than 50% lower. (Different geographic areas within the U.S. might show a similar, though less dramatic, pattern.) When the researchers explicitly instructed the models not to consider location, the disparities shrank dramatically; in some cases, the wage gap between U.S.-based and Philippines-based freelancers fell by more than half.

Age turned out to be another source of bias. A 60-year-old freelancer was priced nearly 46% higher than a 22-year-old and 8.1% higher than a 37-year-old with the same profile. Prompt interventions to ignore age did little to close the gap.

Interestingly, a companion study using annualized salaries, run through the same prompts, produced the same results for gender and geography. The age bias, however, changed: the models favored candidates in their late 30s. The researchers theorize that AI treats wages differently depending on the employment relationship (freelance vs. full-time). For freelancers, experience was the guiding factor; for salaried employees, middle-aged workers (around 37) may be seen as offering both experience and longevity, making them more attractive to employers. This subtle distinction also shows that LLMs can separate age from work experience.

The takeaway for HR is to treat any AI recommendation as advisory, not an action plan. Platforms embedding AI will need governance layers – prompt audits, transparency requirements, and possibly worker-appeal processes – for cases where AI-driven rates appear discriminatory. The researchers believe that AI left unchecked in compensation decisions could widen equity gaps, so structures and governance need to be designed with that in mind.

When using AI for compensation pricing decisions, HR needs to dig deep, test, and understand the structure and governance the AI uses to make decisions. Otherwise, the employer could be exposed to liability for inequities, and ignorance will not be a defense.


Source: Harvard Business Review 10/1/2025
