Cultivating psychological safety in AI decision making
The potential of Artificial Intelligence (AI) and automated decision-making tools to create efficiency and drive pace is exciting. But as organisations move to incorporate AI into their day-to-day work, the challenge is to balance innovation with responsible implementation. Organisations must consider the human side of the equation by nurturing a positive and adaptive culture. The key to that is psychological safety.
Effective decision-making is at the core of organisational success, and AI is already delivering benefits in this space. AI has the potential to provide much-needed efficiency improvements in organisational decision-making. By harnessing the power of machine learning and data analysis, AI can revolutionise how organisations process information, identify patterns, and ultimately make decisions. But as the Post Office scandal has brought into sharp focus, automated decision-making must be backed up by human intelligence and accountability.
AI alone isn’t the answer
Decision-making efficiencies are not just about speed. Often the aspiration is that decisions are tailored yet consistent, empathetic yet fair - a feat that requires an understanding of human behaviour, values and motivation. While AI promises to be quicker, it comes with its own challenges. Biases can seep into algorithms, leading to undesirable results and exacerbating societal inequalities. For instance, the 2020 attempt to grade A-Level and GCSE exams with a machine learning algorithm resulted in nearly 40% of students receiving lower grades than their teachers had predicted. This led to public uproar and legal action, especially as the downgrades fell disproportionately on inner-city state schools.
Even with these challenges, it’s possible to get it right. Combining the strengths of AI with human elements will optimise the process. By integrating insights from behavioural science and ethical AI, we can enhance decision-making, ensuring it becomes fairer, more informed, and empathetic. While AI is not the sole answer to decision-making challenges, it can contribute significantly when used in tandem with human intelligence and ethical considerations. We need humans to be part of the process. More than that, we need humans to feel psychologically safe in contributing to that process.
Culture and behaviours of an organisation
In a time of rapid, complex and unpredictable change, focusing on psychological safety is a critical factor when implementing artificial intelligence. The term ‘psychological safety’ refers to an individual's sense of security in taking risks, voicing opinions, and making mistakes without fear of punishment or retribution.
Psychological safety is vital to the successful adoption of AI. Individuals need to see AI as an opportunity rather than a threat, and this can only happen if they feel psychologically safe. Without that safety, individuals may resist AI tools altogether, hampering their successful integration into an organisation's operations.
Fostering psychological safety
To foster psychological safety, organisations need to promote a culture where open communication is encouraged, and concerns about AI can be openly discussed. Google's Project Aristotle found that the biggest determinant of a successful team was whether individuals felt safe to speak up and share ideas. This highlights the importance of psychological safety in promoting success within an organisation.
The importance of psychological safety extends to the development and deployment of ethical AI. When individuals feel psychologically safe, they are more likely to trust AI and participate in implementing it responsibly. This can help organisations spot and mitigate biases being built into AI algorithms, resulting in more effective and efficient systems.
Poor implementation of AI can damage psychological safety within an organisation. However, if organisations focus on ensuring their employees feel psychologically safe, the introduction of AI is likely to be smoother. This will enhance the employee experience, improve decision-making, and lead to better overall outcomes.
In our work with clients – and within our own organisation – we are building a psychologically safe culture. We're passionate about this approach, because we know that organisations that thrive with AI won't just have the right tech, data and governance. They will be defined by collaboration, an ethical approach and cultures of psychological safety.
Heather Cover-Kus
Heather is Head of Central Government Programme at techUK, working to represent the supplier community of tech products and services to Central Government.
Ellie Huckle
Ellie joined techUK in March 2018 as a Programme Assistant to the Public Sector team and now works as a Programme Manager for the Central Government Programme.
Annie Collings
Annie joined techUK as the Programme Manager for Cyber Security and Central Government in September 2023. In this role, she supports the Cyber Security SME Forum, engaging regularly with key government and industry stakeholders to advance the growth and development of SMEs in the cyber sector.
Austin Earl
Austin joined techUK’s Central Government team in March 2024 to launch a workstream within Education and EdTech.
Ella Gago-Brookes
Ella joined techUK in November 2023 as a Markets Team Assistant, supporting the Justice and Emergency Services, Central Government and Financial Services Programmes.