Algorithmic HR: Taking the 'Human' Out of 'Human Resources'?
How closely do you follow your guidelines? Let’s say that if someone doesn’t hit 3/5 of their monthly KPIs, you give them a strike. Three strikes in six months, and it’s an instant termination. It’s a concrete and structured way to handle poor performance. Still, some HR professionals might introduce a bit of wiggle room – what if, in one of those months, the employee in question was bereaved? Or that third strike was only just below the line?
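That rigid policy can be written down in a few lines of logic, which is exactly the point: nothing in it leaves room for context. A minimal sketch, using the thresholds from the example above (the record structure and field names are hypothetical):

```python
from dataclasses import dataclass, field

STRIKE_THRESHOLD = 3 / 5   # must hit at least 3 of 5 monthly KPIs
STRIKES_TO_TERMINATE = 3   # three strikes in the rolling window
WINDOW_MONTHS = 6

@dataclass
class EmployeeRecord:
    name: str
    # One entry per month: fraction of KPIs hit (0.0 to 1.0).
    monthly_kpi_hits: list = field(default_factory=list)

def review(record: EmployeeRecord) -> str:
    """Apply the three-strikes rule over the last six months.

    Note what is missing: bereavement, near-misses, any context at all.
    """
    recent = record.monthly_kpi_hits[-WINDOW_MONTHS:]
    strikes = sum(1 for hit_rate in recent if hit_rate < STRIKE_THRESHOLD)
    return "terminate" if strikes >= STRIKES_TO_TERMINATE else "retain"
```

An employee whose third strike came in at 0.59 – only just below the 0.6 line – gets exactly the same verdict as one who missed every target.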
Well, Algorithmic HR may take you (and your wiggle room) out of the equation – quite literally.
What Is Algorithmic HR?
Algorithmic HR as a concept is quite self-explanatory: rather than making decisions based on subjective opinions and human observation, HR departments can use people analytics to inform their choices. That way, they can be more confident in their decisions, knowing that they’re based on objective data.
One way this can be used is in employee monitoring. If someone exceeds their targets or has a particularly good week, the system might reward them with a lunch voucher or an afternoon off. Conversely, if a salesperson, for example, is consistently late or the amount of time they spend on the phone drops, this might trigger a sequence that sets up a meeting with their line manager to discuss their performance. While this may sound a little bleak, it also takes human bias out of the equation; everyone gets fair treatment.
The removal of bias is especially good in the hiring phase. When a person’s skills and experience are assessed by a computer system, key characteristics such as gender, race or disability never have a chance to hold them back.
Once Artificial Intelligence enters the room, the possibilities expand. Predictive modelling can pick out employees who, according to the relationship between certain data points, may be likely to leave the business. Based on this, businesses can then figure out what they need to do to retain these staff members.
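In practice, such a model is often a logistic regression (or something fancier) trained on historical leaver data. A toy sketch of the idea – every feature name and weight here is hypothetical and hand-picked purely to illustrate the mechanism, where a real system would learn them from data:

```python
import math

# Hypothetical attrition-risk model. In reality these weights would be
# learned from historical data on who actually left, not hand-written.
WEIGHTS = {
    "months_since_promotion": 0.04,
    "overtime_hours_per_week": 0.08,
    "engagement_survey_score": -0.6,   # higher engagement -> lower risk
}
BIAS = -1.5

def attrition_risk(employee: dict) -> float:
    """Return a 0-1 'likely to leave' score via a logistic function."""
    z = BIAS + sum(WEIGHTS[k] * employee[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def flag_for_retention(employees: list[dict], threshold: float = 0.5) -> list[str]:
    """List the names of employees whose risk score crosses the threshold."""
    return [e["name"] for e in employees if attrition_risk(e) > threshold]
```

The output is a ranked watch-list, not a verdict – which is exactly how it should be used: a prompt for a human conversation about retention, not a trigger for automated action.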
Algorithms are only as good as the data they’re given. A pocket calculator can give you the square root of pi (1.7725, by the way), but it can’t tell you how many fingers you’re holding up. In much the same way, Algorithmic HR’s summation of the ‘what’ is unparalleled, but its analysis of the ‘why’ is limited at best. Take the example at the beginning. If your algorithm is not given the full context of a person’s poor performance – maybe they’ve just had a baby and are tired out of their mind – it might recommend you chuck the new parent out on the street.
Of course, it could be worse. It could chuck them out for you.
Amazon contracts many of its drivers through Flex, which means they are not granted the same protections as full-time staff members. Recently, Amazon came under fire because these workers were being let go without a human ever making the decision. Instead, Amazon uses an app that monitors drivers on their speed, safety and ability to meet quotas. If a driver falls below the bar, they don't get called into an office to talk about it. Instead, they get a message on their phone saying their services are no longer required. This system has been criticised as cold at best and, at worst, full-on dystopian. It's hard to disagree when a person's livelihood depends on the decisions of a collection of 1s and 0s.
What are the implications of a system with the power to terminate employees? Some would argue that it's absolute fairness – you set a standard, and those who fall behind are let go. It does away with human bias; someone won't be more or less likely to get fired depending on how much their manager likes them. Machine bias, however, is still a relevant concern: computer systems aren't perfect and they rarely – if ever – account for their own mistakes. This, naturally, makes them very difficult to argue with.
AI complicates things further, as it tends to follow the patterns on which it was trained. This, ironically, implants it with the same unconscious biases that many humans carry around.
Take AI image generators like Dall-E and Midjourney, for example, which were trained on existing art to be able to create images from prompts. When asked to draw a ‘woman’, these programmes tend to default to a Caucasian woman unless the prompt specifies a different ethnicity. This isn’t an intentional feature. Rather, it’s down to the fact that descriptions of other art on the internet will rarely specify a woman’s race when she’s Caucasian. If the art depicts an Asian or Hispanic woman, however, this characteristic is more likely to be pointed out.
A small problem in art generation becomes a big problem in hiring. What if you ask an AI to provide you with candidates 'similar' to the team you have already? You may have wanted a similar set of skills, failing to notice that your small team happens to consist entirely of men under 40. The AI, then, may unwittingly exclude candidates who don't fit that superficial description. Just like that, you have a feedback loop that could quietly narrow the diversity of your workforce in any number of ways.
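The mechanism is easy to demonstrate. In this deliberately simplified sketch (all candidate data is made up), a 'similar to the current team' filter never looks at gender at all, yet still reproduces the team's demographics because age sits among the features it matches on:

```python
from statistics import mean

# Existing team: all men under 40 -- the pattern nobody intended to encode.
team = [
    {"years_experience": 4, "age": 29, "gender": "M"},
    {"years_experience": 6, "age": 34, "gender": "M"},
    {"years_experience": 5, "age": 31, "gender": "M"},
]

def similarity(candidate: dict, team: list[dict]) -> float:
    """Naive 'similarity': closeness to the team's average profile.

    Gender never appears in the calculation, but age does -- so the
    filter still favours candidates who look like the existing team.
    """
    avg_exp = mean(m["years_experience"] for m in team)
    avg_age = mean(m["age"] for m in team)
    return -(abs(candidate["years_experience"] - avg_exp)
             + abs(candidate["age"] - avg_age) / 10)

def shortlist(candidates: list[dict], team: list[dict], k: int = 2) -> list[str]:
    """Rank candidates by similarity to the team and keep the top k."""
    ranked = sorted(candidates, key=lambda c: similarity(c, team), reverse=True)
    return [c["name"] for c in ranked[:k]]
```

Given two candidates with identical experience, the 30-year-old will always outrank the 52-year-old – and every hire made this way pulls the team average further toward the pattern, tightening the loop.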
So, What Do We Do?
Algorithmic HR is a tool. Much like a hammer, it is a force neither for good nor evil, though it may be used for both. In fact, it’s best to view these systems as supporting members of the team: listen to what they tell you, trust them to do their job, but keep the decision-making power for yourself. Keep in mind that for every minute detail they’re able to pick up on that you’d never notice, there are a thousand pieces of context they couldn’t possibly comprehend.
In short: use Algorithmic HR as a sat nav. Don’t let it drive the car.