Artificial Intelligence and the Future of Ethics in HR.

Fred Gulliford

11 Jun, 2019

On the eve of London Tech Week, Qlearsite hosted our second breakfast briefing of the year. We welcomed a community of Human Resources (HR) and technology professionals to Old Street to talk about Ethics and Artificial Intelligence (AI) in their organisations.

We were fortunate enough to be joined by three experts on this topic: Charlotte Murray (Qlearsite), Dr Caitlin McDonald and Ben Gilburt.

They shared their thoughts on the following questions:


What do we mean by ethics in HR?

Who do we hire? Who do we fire? Who do we promote? Who do we praise? Who do we develop?

These are the difficult questions that HR teams face every day. HR is a branch of management where ethics really matters, because its decisions can have a significant impact on the lives of employees.

There are a multitude of different pressures that can create the conditions for potential ethical dilemmas and conflicts in HR. It is therefore paramount to have a set of moral principles that dictate an organisation’s behaviour, especially within the HR team.


Why do we need to be mindful of ethics when it comes to the use of AI in HR?

Qlearsite consultant Charlotte Murray recently wrote a blog on AI and Ethics – with great power comes great responsibility. In it, Charlotte frames the ethical dilemmas that the implementation of AI poses to society and organisations, which provides great context for the broader arguments.

Traditionally in HR, recruitment assessments, employee engagement analysis and appraisal processes were carried out by people, but AI can now perform these tasks with significant time and cost savings. AI applications in this space have grown rapidly over the last decade, helping management automate their processes and become far more strategic in their decision making.

However, Charlotte warns that while the possible applications of AI are endless, the unethical application of this technology risks exaggerating existing biases and inequalities within organisations. It is therefore imperative that HR leaders have a framework by which to assess the use of new technologies and embed ethical thinking in their evaluations.

In her talk, she highlighted examples demonstrating how a lack of ethical thinking in AI’s design process can lead to unfair, sometimes harmful outcomes. Now that AI can significantly alter the course of our lives, we need to act before it is too late.

“AI needs to benefit everyone in society and in organisations, not just those who control it. To avoid falling into the disappointment trap of an unethical AI, we need humans from a multitude of disciplines to appreciate the social, societal, cultural, financial and economic context (and more!) that AI operates in and inject ethical thinking at every stage of the process, from design to implementation.”

You can watch Charlotte’s entire talk below:


How can we ensure that we use AI ethically?

One of the big problems we face is that there are a lot of ethical codes and frameworks out there. Which one do we abide by?

Dr Caitlin McDonald believes this is because there are different views on what ethics means. There are also a multitude of different ethical questions that apply at different times, depending on the maturity of the technology you are implementing. If a particular vendor or technology is still in its proof-of-concept phase, you must be far more rigorous in your analysis of its potential outcomes than with a more mature offering.

Despite the plethora of different frameworks out there, Caitlin believed they generally all boil down to four key areas that should make up an effective ethical framework:

  • Fairness
  • Transparency
  • Explainability
  • Accountability

These areas are not mutually exclusive: you can’t have one without the rest. Ultimately, what you are striving for is accountability. For your ethical framework to have impact, there needs to be accountability at various stages in the process, ensuring a chain of responsibility within your organisation.


I want to build an ethical framework, where do I start?

Ben Gilburt shared some useful tools that you can use in your assessment of ‘trustworthy AI’. For HR practitioners looking to assess their current or prospective vendors, have a look at the Algorithmic Impact Assessment (AI Now Institute) and the new Ethics Guidelines for Trustworthy AI (European Commission). You’ll be able to pick and choose from a list of questions depending upon the needs of your organisation.

It is important to be aware at this point that “there won’t necessarily be a perfectly right or wrong option in your selection of a vendor”, but Ben emphasised that it is more important to “make sure you’re making the decision using a variety of decision makers, and based on data, not simply gut instinct.”

Finally, Artificial Intelligence can often be an intimidating prospect, particularly if you are not familiar with its inner workings. Charlotte Murray offered some pragmatic advice: don’t be afraid to “ask questions if you don’t understand – people get put off by AI because it sounds very inaccessible. But if you ask the providers to detail what is going on ‘under the hood’ and they are not willing to go into much detail and make that information accessible, I think that is a bit of a red flag.”

Our next breakfast briefing will take place in early September, and we look forward to sharing the details with you soon.

Ready to find out more?

Our organisational scientists can show you our employee survey and analysis platform, or you can watch a 3-minute demo video.
