10 Apr 2024
by Jonti Dalal-Small

Cultivating psychological safety in AI decision making

The potential of Artificial Intelligence (AI) and automated decision-making tools to create efficiency and drive pace is exciting. But as organisations move to incorporate AI into their day-to-day work, the challenge is to balance the pace of innovation with the quality of the results it delivers. Organisations must consider the human side of the equation by nurturing a positive and adaptive culture. The key to that is psychological safety.

Effective decision-making is at the core of organisational success, and AI is already delivering benefits in this space, offering much-needed efficiency improvements. By harnessing the power of machine learning and data analysis, AI can revolutionise how organisations process information, identify patterns, and ultimately make decisions. But as the Post Office scandal has brought into sharp focus, automated decision-making must be backed up by human intelligence and accountability.

AI alone isn’t the answer  

Decision-making efficiencies are not just about speed. Often the aspiration is that decisions are tailored yet consistent, empathetic yet fair - a feat that requires an understanding of human behaviour, values and motivation. While AI promises to be quicker, it comes with its own challenges. Biases can seep into algorithms, leading to undesirable results and exacerbating societal inequalities. For instance, the 2020 attempt to grade A-Level and GCSE exams with an algorithm resulted in nearly 40% of students receiving lower grades than their teachers had predicted. This led to public uproar and legal action, especially as the downgrades occurred more frequently in inner-city state schools.
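As an illustration of how such disparities can be surfaced before they cause harm, the short Python sketch below compares downgrade rates across school types and flags a large gap for human review. It is a minimal sketch only: the dataset, column names and review threshold are hypothetical, not taken from the 2020 case.

```python
# Minimal, illustrative sketch: data, column names and threshold are hypothetical.
import pandas as pd

def downgrade_gap(df: pd.DataFrame, group_col: str, flag_col: str,
                  max_gap: float = 0.05) -> pd.Series:
    """Compare the rate of a negative outcome (e.g. a downgrade) across groups
    and flag the result for human review if the gap exceeds a threshold."""
    rates = df.groupby(group_col)[flag_col].mean()
    gap = rates.max() - rates.min()
    if gap > max_gap:
        print(f"Review needed: {gap:.1%} gap in outcome rates across '{group_col}'.")
    return rates

# Made-up example: 1 = grade lowered relative to the teacher's prediction.
results = pd.DataFrame({
    "school_type": ["state", "state", "state", "state", "independent", "independent"],
    "downgraded":  [1,       1,       0,       1,       0,             1],
})
print(downgrade_gap(results, "school_type", "downgraded"))
```

A check like this does not remove bias on its own; it simply creates a prompt for people to question the system - which is exactly where psychological safety matters.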

Even with these challenges, it’s possible to get it right. Combining the strengths of AI with human judgement will optimise the process. By integrating insights from behavioural science and ethical AI, we can enhance decision-making, making it fairer, better informed and more empathetic. While AI is not the sole answer to decision-making challenges, it can contribute significantly when used in tandem with human intelligence and ethical considerations. We need humans to be part of the process. More than that, we need humans to feel psychologically safe in contributing to that process.

Culture and behaviours of an organisation  

In a time of rapid, complex and unpredictable change, psychological safety is a critical factor when implementing artificial intelligence. The term ‘psychological safety’ refers to an individual's sense of security in taking risks, voicing opinions, and making mistakes without fear of punishment or retribution.

Psychological safety is vital to the successful adoption of AI. Individuals need to see AI as an opportunity rather than a threat, and that can only happen if they feel psychologically safe. Without it, people may resist AI tools and view them with suspicion, hampering the successful integration of AI into an organisation's operations.

Fostering psychological safety  

To foster psychological safety, organisations need to promote a culture where open communication is encouraged, and concerns about AI can be openly discussed. Google's Project Aristotle found that the biggest determinant of a successful team was whether individuals felt safe to speak up and share ideas. This highlights the importance of psychological safety in promoting success within an organisation. 

The importance of psychological safety extends to the development and deployment of ethical AI. When individuals feel psychologically safe, they are more likely to trust AI and participate in implementing it responsibly. This helps organisations spot and mitigate biases being built into AI algorithms, resulting in more effective and efficient systems.

Poor implementation of AI can damage psychological safety within an organisation. However, if organisations focus on ensuring their employees feel psychologically safe, the introduction of AI is likely to be smoother. This will enhance the employee experience, improve decision-making, and lead to better overall outcomes. 

In our work with clients – and within our own organisation – we are building a psychologically safe culture. We’re passionate about this approach, because we know that the organisations that thrive with AI won’t just have the right tech, data and governance. They will be defined by collaboration, an ethical approach and cultures of psychological safety.



Authors

Jonti Dalal-Small

Organisation Psychologist & Behavioural Science Lead, Sopra Steria