Risks of AI


As artificial intelligence has evolved, so too have concerns about the risks posed by its advance, not least in recruitment.

Over the past 12 months, humankind has been racing against the machines to combat the existential threat of artificial intelligence (AI). In May came the clarion call from eminent global scientists and experts in the form of the Statement on AI Risk, which said: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The Centre for AI Safety in the US said it can sometimes be difficult to voice concerns about some of advanced AI’s most severe risks, and that the “succinct statement” aims to overcome this obstacle and open up discussion. It certainly achieved its aim.

In the final quarter of 2023, President Biden issued the Executive Order on Safe, Secure and Trustworthy Artificial Intelligence, which the White House said represents “the most significant actions ever taken by any government to advance the field of AI safety”. In the European Union (EU), after three days of “marathon” talks, the Council presidency and the European Parliament’s negotiators reached a provisional agreement on the proposal on harmonised rules on AI. The Council describes the AI act as a flagship legislative initiative with the potential to foster the development and uptake of safe and trustworthy AI across the EU’s single market by both private and public actors.

Meanwhile in the UK, Prime Minister Rishi Sunak hosted the two-day summit on AI safety at Bletchley Park, attended by 150 representatives from around the world. The UK also established the AI Safety Institute, tasked with testing emerging types of AI before and after they are released. And, led by GCHQ’s National Cyber Security Centre, it has published what it claims are the first global guidelines to ensure the secure development of AI technology. Agencies from 18 countries, including the US, have confirmed they will endorse and co-seal the new guidelines.

So: what does this flurry of activity mean for recruiters and employers?

In the US, the Executive Order directs actions in a number of relevant areas, including addressing job displacement, labour standards, workplace health and data collection. It says that principles and best practices in this area will benefit workers by providing guidance to prevent employers from “undercompensating workers”, “evaluating job applications unfairly” and “impinging on workers’ ability to organise”.


Regulations and AI

The EU AI act takes a “risk-based” approach: the higher the risk, “the stricter the rules”. Work will continue at a technical level to finalise the detail of new regulations, but the provisional agreement provides for increased transparency regarding the use of “high-risk AI systems”. Under the EU’s Regulatory framework proposal on AI, systems identified as high-risk include AI technology used in:

  • educational or vocational training, which may determine access to education and the professional course of someone’s life (for example, scoring of exams)
  • employment, management of workers and access to self-employment (eg CV-sorting software for recruitment procedures).

Ahead of the regulations’ finalisation, agencies and employers should, in any case, be exercising due diligence when implementing any systems that utilise AI and machine learning.

Hayfa Mohdzaini, senior research adviser for data, technology and AI at the Chartered Institute of Personnel and Development (CIPD), said summaries from the UK AI summit roundtables were “high-level”, and that recruitment and HR “didn’t appear to be the focus of discussions”. However, she welcomed the international co-operation the summit brought.

She adds: “The summit did achieve a non-binding declaration signed by 28 nations, which was a positive step forward and signals that political leaders are aware of AI’s challenges and opportunities.”


Next steps

Has Britain got enough AI talent?

While there is no doubting the UK’s ambition to be a world leader in AI, organisations from all sectors still face challenges when it comes to ensuring these aims are fulfilled. The CIPD’s Hayfa Mohdzaini says the UK has many start-ups and large companies working on AI-embedded applications, as well as a strong research base, with top universities researching AI.


The UK government also invests in research and development and provides funding for start-ups. “However, the UK market is not as large as others – for example, the US or China – and there is a shortage of people with AI skills,” she says. “People professionals can support in acquiring and developing the talent that organisations need, and in creating the right organisational culture and environment for innovation in this area.”

Kelly Gauthier, an AI and data science recruitment expert, says the UK has produced many of the world’s leading researchers and innovative minds in the field of AI. “Some of the most exciting AI research groups and companies in the world are here. The enduring problem is whether UK-based companies can compete with US companies on the salary front. So far, this hasn’t proven to be the case,” she says.

Gauthier adds: “The other big question mark is whether the UK can integrate tech into education across the board (not just at private schools) from a young age so that students can learn the requisite skills and adopt the right mindset to thrive in the emerging economy.”

The TUC’s Nicola Smith believes the UK can become a global leader in AI without “riding roughshod over workers’ rights” but says it is vital that working people have a seat at the table and a genuine say in how AI is developed and used in the workplace.

“Over the last 13 years we have seen chronic underinvestment in UK skills,” she says. “It is essential that workers get the necessary training as the world of work changes and that people whose jobs are at risk from advancements in AI are not just thrown on to the scrap heap.”


AI and workers

At its AI conference in April last year (see Recruiter July-August 2023), the Trades Union Congress (TUC) said employers must disclose to workers how AI is being used in the workplace to make decisions about them, and that every worker should be entitled to a human review of decisions made by AI systems, including on job applications, so they can challenge decisions that are unfair or discriminatory.

TUC head of rights Nicola Smith said it was “very disappointing” that unions and many from wider civil society were not invited to the AI summit.

“This was a huge missed opportunity,” she says. “Union members are at the coal face and deserve to have a say in how technology is rolled out and implemented at work. The summit was also guilty of focusing too much on existential future risks instead of the current challenges many are facing over the use of AI at work, like being hired and fired by algorithm.

“AI is already making life-changing decisions about the way we work, like how people are hired, fired and performance managed. But UK employment law is not keeping pace ... leaving workers vulnerable to potential discrimination and exploitation.

“While the likes of the EU and US are actively legislating to protect workers, our government is taking a worryingly laissez-faire approach to regulating AI,” Smith adds.

She explains that the TUC has brought together a taskforce of unions, academics, lawyers and tech leaders to develop a legal framework for AI at work. “Over the coming months we will be lobbying all parties to support our new AI employment bill. This needs to be introduced onto the statute book to protect workers’ rights and ensure everyone benefits from advances in technology.”

Smith says the chief mission of the new TUC AI taskforce will be to fill the current gaps in UK employment law by drafting new legal protections to ensure AI is regulated fairly at work for the benefit of employees and employers.

Mohdzaini said the CIPD seeks to inform the development of any future legislation in this area. “It’s important that both employers and policy makers understand any potential risks around the use of AI at work. We will be using insights from this forum to build on our existing guidance, and work to understand the policies employers need to optimise the benefits of AI for both the business and workers.”

The CIPD’s collated AI resources include guidance, podcasts, thought leadership and webinars, which are available on the CIPD website. Mohdzaini adds: “The ‘technology and people’ core knowledge in our Profession Map also now sets the expectation for people professionals to be aware of AI-embedded technologies and to use them responsibly at work.”


For further reading

The National Cyber Security Centre https://www.ncsc.gov.uk/

AI Safety Institute policy paper https://shorturl.at/pU258

Fact sheet on US Executive Order https://shorturl.at/htER6

EU AI act provisional details and more information on its risk-based approach https://shorturl.at/dfFKN and https://shorturl.at/asyG9

Centre for AI Safety (US) https://www.safe.ai/

CIPD resources https://shorturl.at/pqMR4

TUC AI Taskforce https://rb.gy/hro1bs


AI and responsibilities

Mohdzaini advises that AI-embedded HR and recruitment tools need to be audited periodically to ensure that they are working as intended and not unfairly disadvantaging underrepresented groups of people. “People professionals need to think about which AI-embedded tools can help them be more efficient and create value for their organisations, while ensuring a great employee experience. This means that as AI and other technologies we use at work evolve, so will the nature of our work and the roles of people professionals.”

Keith Rosser, director of group risk and Reed Screening, says participation by industry bodies is essential. “And they don’t just need to be thinking about how work is done today but about the wave of changes that could be coming in the future.”

Rosser sits on the All-Party Parliamentary Group (APPG) on Modernising Employment, which wants to make UK hiring the fastest in the world. In discussions over the past 12 months, he says MPs have been quick to bring up the subject of AI. He recognises the potential of AI to bring efficiencies to the hiring process, as well as to address many of its emerging challenges, such as helping to verify who people are and detecting suspicious profiles. “For example, if a candidate regularly changes their profile with contradictory information, it could detect this,” he says. “Also, with the sector embroiled in scams and fake jobs, I think AI could have the ability to, for instance, scan job adverts to see if they are genuine.”

But, like many, he has concerns about AI bringing more discrimination into the process, and says the big question is over the part played by humans in the future. “Are we really talking about entire hiring and recruitment processes with, for example, automated video interviews and that type of thing?” he says. “Or are we talking about using AI for what people see as the waste activity but still having a human involved in the key stages? And I think that’s the big unanswered question at the moment.”

“What stands out with AI is the sheer speed with which it learns and develops, which in itself will drive a new pace of change. In short, the industry needs to be all over this.”
