
Libs Horrified as Powerful Amazon 'AI' Produces 'Sexist' Results


A secret machine learning tool that tech giant Amazon used to evaluate job applications has been scrapped after it “learned” to downgrade applications from women, Reuters reports.

“The team had been building computer programs since 2014 to review job applicants’ resumes with the aim of mechanizing the search for top talent, five people familiar with the effort told Reuters,” the Wednesday report from Jeffrey Dastin read.

“Automation has been key to Amazon’s e-commerce dominance, be it inside warehouses or driving pricing decisions,” Dastin wrote. “The company’s experimental hiring tool used artificial intelligence to give job candidates scores ranging from one to five stars — much like shoppers rate products on Amazon, some of the people said.”

While most media reports called it artificial intelligence, the program was actually based on machine learning. Given that “AI” has become shorthand for programs like this, however, we can let that slide. The important thing is that it was meant to allow Amazon to sort resumes without a hiring manager even looking at them.

One source contacted about the program called it the “holy grail” of recruitment technology.


“They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those,” the person told Reuters.

The problem is that machine learning programs tend to learn patterns. As at most tech companies, Amazon’s applicant pool is predominantly male, largely because more men earn STEM degrees and work in the tech industry. That also means that when the program analyzed 10 years’ worth of resumes submitted to the company, it came to some rather stark conclusions about gender.

“In effect, Amazon’s system taught itself that male candidates were preferable,” Reuters reported.

“It penalized resumes that included the word ‘women’s,’ as in ‘women’s chess club captain.’ And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter. They did not specify the names of the schools.”
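To make the mechanism Reuters describes concrete, here is a minimal sketch using entirely made-up resumes and outcomes, not anything from Amazon’s actual model or data: a plain bag-of-words classifier trained on historical hiring decisions that skew male ends up assigning a negative weight to the token “women,” even though gender itself is never an explicit feature.

```python
# Hypothetical illustration only: synthetic data, not Amazon's model or data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy "historical" resumes and hiring outcomes (1 = hired, 0 = rejected).
# The outcomes skew male, mirroring the decade of real submissions Reuters describes.
resumes = [
    "software engineer chess club captain",            # hired
    "software engineer robotics team lead",            # hired
    "backend developer hackathon winner",              # hired
    "software engineer women's chess club captain",    # rejected
    "data analyst women's coding society president",   # rejected
    "frontend developer debate team member",           # rejected
]
hired = [1, 1, 1, 0, 0, 0]

# Bag-of-words features over the resume text.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)

# Fit a simple linear classifier on the historical outcomes.
model = LogisticRegression()
model.fit(X, hired)

# Inspect the weight the model learned for the token "women"
# (CountVectorizer lowercases and drops the possessive).
idx = vectorizer.vocabulary_["women"]
print("weight for 'women':", model.coef_[0][idx])
# On this toy data the weight comes out negative: resumes containing the word
# score lower, even though gender was never an explicit feature.
```

Nothing in the algorithm targets women; it simply reproduces whatever regularities exist in the data it was fed.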


Amazon put safeguards into the program that stopped it from drawing conclusions from permutations of the words “woman” and “man.” However, the company couldn’t guarantee the program wouldn’t find other ways of sorting candidates that proved discriminatory, so it was scrapped.
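A rough sketch of the kind of safeguard described, again hypothetical and assuming a simple word-based model like the one above: strip explicitly gendered terms before the text ever reaches the scorer. The catch, as the report suggests, is that correlated words the filter never touches can serve as proxies.

```python
# Hypothetical sketch of a term-neutralizing safeguard, not Amazon's actual fix.
import re

GENDERED_TERMS = {"women", "women's", "woman", "men", "man", "female", "male"}

def neutralize(resume_text: str) -> str:
    """Drop explicitly gendered words before the text reaches the scoring model."""
    tokens = re.findall(r"[\w']+", resume_text.lower())
    return " ".join(t for t in tokens if t not in GENDERED_TERMS)

# The college name here is purely illustrative; Reuters did not name the schools.
print(neutralize("Captain of the women's chess club at Wellesley College"))
# -> "captain of the chess club at wellesley college"
# The explicit term is gone, but a proxy (the name of an all-women's college)
# survives, so the model can still learn the same pattern indirectly.
```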

It’s worth noting that the program was used only in trials, and the algorithm proved so unreliable that it was “returning results almost at random,” according to Reuters. However, you can probably predict the reaction on social media:

https://twitter.com/EricaJoy/status/1050028003126697985

Rachel Kraus of Mashable argued that the program represented deeper problems than just the fact that it learned to prefer men over women. Rather, she said, the more serious issue was “the creation of hiring algorithms themselves — not just at Amazon, but across many companies,” which she said “still speaks to another sort of gender bias: the devaluing of female-dominated Human Resources roles and skills.”


“According to the U.S. Department of Labor (via the workforce analytics provider company Visier), women occupy nearly three fourths of H.R. managerial roles,” Kraus wrote. “This is great news for overall female representation in the workplace. But the disparity exists thanks to another sort of gender bias. …

“Amazon and other companies that pursued AI integrations in hiring wanted to streamline the process, yes. But automating a people-based process shows a disregard for people-based skills that are less easy to mechanically reproduce, like intuition or rapport. Reuters reported that Amazon’s AI identified attractive applicants through a five-star rating system, ‘much like shoppers rate products on Amazon’; who needs empathy when you’ve got five stars?”

I find it curious that a writer at Mashable seems to know Amazon’s hiring needs better than Amazon does. She also seems to know a lot more about the soft skills baked into the program than anyone else, which is certainly convenient for the article but not necessarily supported by any facts. That’s not even addressing the question of how applicable those soft skills might be to the positions in question, which Kraus seems to take for granted without explaining why they’re important. In fact, the only argument she seems to present is that there are more women than men in human resources — which she promptly undermines by noting “(t)here is a perception that H.R. jobs are feminine roles,” a sexist notion in itself.

The other arguments I’ve seen thus far are somehow less thorough than that, merely presupposing that machine learning itself is sexist. Except it isn’t; machine learning just is. It makes its judgments based on patterns, patterns rooted in disparities in career choices between men and women. If those patterns changed, so would the results. They didn’t “buil(d) an AI recruitment system which taught itself to downgrade applications that included the word ‘women’s’” or anything of the sort. They experimented with machine learning to evaluate resumes, and it failed. There’s a marked difference.
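As a quick illustration of that point, reusing the same kind of toy setup as the earlier sketch (hypothetical data throughout): train the identical model on outcomes that no longer track gendered wording, and the offending weight all but disappears.

```python
# Hypothetical follow-on to the earlier sketch: retrain on outcomes that no
# longer correlate with gendered wording and the "women" weight fades.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "software engineer chess club captain",
    "software engineer robotics team lead",
    "backend developer hackathon winner",
    "software engineer women's chess club captain",
    "data analyst women's coding society president",
    "frontend developer debate team member",
]
balanced_hired = [1, 0, 1, 1, 0, 1]  # outcomes no longer track the wording

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, balanced_hired)

idx = vectorizer.vocabulary_["women"]
print("weight for 'women':", model.coef_[0][idx])
# On this toy data the weight sits near zero instead of sharply negative:
# change the underlying pattern and the model's "conclusions" change with it.
```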

As it stands, Amazon won’t be using machine learning to vet applicants anytime soon. Given how this experiment turned out, that’s probably a wise move. However, it doesn’t prove that endemic sexism is somehow responsible for individual career choices, or that a program which picked up on those choices is itself proof of bigotry.

Yet again, this is a presupposition that’s horribly unsupported by the facts. How many column inches in the past few years — in both the annals of higher education and the lay press — have been dedicated to the lack of women in STEM fields and how the disparity might be addressed? The hurdle doesn’t seem to be one of entry barriers but instead one of personal preference.

On the other hand, it is humorous to see the company founded by Jeff Bezos — also owner of The Washington Post — raked over the coals for inveterate misogyny by the “Democracy Dies in Darkness” crowd. Not that they bothered with the particulars of the program, or with the fact that it had never been put into actual use, mind you. They didn’t care that Amazon caught the problem and corrected it. They just saw the headlines and they were off to the races.

In other words, liberals assumed that a company with a relatively liberal culture was chauvinist because a program produced undesirable results and was shut down. Irony, it seems, is a dead animal in these woods.


C. Douglas Golden is a writer who splits his time between the United States and Southeast Asia. Specializing in political commentary and world affairs, he's written for Conservative Tribune and The Western Journal since 2014. Aside from politics, he enjoys spending time with his wife, literature (especially British comic novels and modern Japanese lit), indie rock, coffee, Formula One and football (of both American and world varieties).