One year on from the Online Safety Act - techUK members are introducing new features and product changes to create a safer internet
In the year since the Online Safety Act received Royal Assent, techUK members have made wide-ranging product updates and introduced new safety features to improve their services and better serve users.
The passing of the Online Safety Act after years of debate, both inside Parliament and out, represented a significant moment for the regulation of online services.
The Act gave broad new powers to Ofcom and created new duties for online service providers with the aim of creating a safer online experience for UK internet users.
The Act is an incredibly complex piece of legislation, and one that is the product of significant time and effort on the part of Government, officials, and regulators. These efforts have continued at pace as implementation proceeds.
The tech industry, however, is not simply waiting for the Act's implementation milestones before getting to work, and has already moved to innovate, developing new technologies and making changes to services to better protect users.
Industry action:
techUK members are already taking their own steps to deliver on the ultimate goal of the legislation: to make the online world safer, more user-friendly, and more reliable.
As well as the changes being made by platforms, techUK members are also developing a raft of new safety tech solutions. A recent report from the Department for Science, Innovation and Technology noted that the UK safety tech sector could reach over £1 billion in turnover by 2025/26. With the UK already a global leader in this space, techUK's members are playing a significant role in the development of these new technologies.
In a range of areas from online safety to fraud, misinformation and deepfakes, techUK members are operating at the forefront of online safety regulation, often grappling with some of the most difficult and sensitive issues as they develop new ways to protect users and build trust online.
Staying safe online
Over the past year, YouTube has collaborated with the independent experts on its Youth and Families Advisory Committee to enhance content recommendation safeguards for teen users. This includes limiting repeated recommendations of sensitive videos, such as those comparing physical features, idealising body types, or displaying intimidation.
YouTube has also expanded its crisis resource panels to interrupt users searching for topics related to suicide, self-harm, or eating disorders. These panels prompt users to pause and slow down in moments of distress, redirecting them towards helpful resources.
To incorporate the experience of younger users, TikTok has also launched its Youth Council, a new initiative in partnership with specialist online safety agency Praesidio Safeguarding, made up of young people aged 15 to 18 representing a range of communities and countries.
The platform also makes accounts private by default for younger users, prohibits under-18s from livestreaming, and prevents under-16s from sending or receiving direct messages (DMs). It has previously introduced a 60-minute daily screen time limit for young users and has committed to investing $2 billion in trust and safety in 2024.
Earlier this year, Meta also announced Teen Accounts for young Instagram users, featuring built-in protections that limit the content they can see and restrict who can contact them. All teens using Instagram in the UK are being automatically placed into Teen Accounts, which are private by default, and teens under 16 will need a parent's permission to make any of these settings less strict.
New tech innovations
To keep pace with other requirements in the Act, age verification providers such as Yoti have been partnering closely with social media platforms and other organisations to ensure user safety. Yoti has partnered with Meta to help Instagram verify the ages of its users, using trained AI to estimate ages whilst protecting privacy. Yoti's facial age estimation technology has performed over 570 million checks worldwide and is used by businesses across a range of industries, including social media, gaming and age-restricted e-commerce.
The same technology has been used by Avakin Life, a 3D life-simulation game, to protect users. Players who verify that they are over 18 using Yoti's age assurance technology can unlock additional content, including chats with fewer restrictions and in-game spaces not accessible to unverified users. This ensures that players over the age of 18 can play with confidence with other adult players, while also enhancing the safety and player experience for younger audiences.
TikTok has launched a dedicated STEM feed, featuring content from selected experts in their fields. Users under the age of 18 have the feed enabled by default, but older users can also opt in. The feed is designed to be educational, helping encourage learning and the discovery of new topics.
Tackling misleading information
Aware of the sensitivities surrounding the UK General Election, TikTok was the first platform to launch a dedicated election centre, connecting TikTok users to trusted information from the Electoral Commission and working alongside fact-checking partner Logically Facts to provide helpful advice on media literacy.
X has developed and expanded Community Notes, which aims to empower users to collaborate on adding context to potentially misleading posts. Contributors who sign up to write and rate notes can leave them on any post, and if enough contributors from different points of view rate a note as helpful, it will be shown publicly on the post.
There are now over 750,000 contributors in 197 countries adding helpful context to posts on X, including ads, and especially on highly engaged content such as key news events. A recent study found that, across the political spectrum, Community Notes were perceived as significantly more trustworthy than traditional, simple misinformation flags. It also found that Community Notes had a greater effect on improving people's identification of misleading posts. Importantly, independent studies show that posts with notes are shared 50-61% less and deleted 80% more often.
X has also enabled Community Notes to be shown automatically on posts that feature AI-generated images and other out-of-context media. The approximately 7,800 media notes written so far are now showing on over 600,000 distinct posts and have been seen over 2.5 billion times. Anyone can now request a Community Note, and with enough requests, top contributors will be alerted and can propose notes. The programme is built on transparency: the Community Notes algorithm is open source and publicly available on GitHub, along with the data that feeds it, so that anyone can audit, analyse or suggest improvements.
Twenty-five leading technology companies also came together to sign the AI Elections Accord, a set of commitments intended to combat the deceptive use of AI in 2024 elections. These measures include proactively mitigating risks from deceptive AI election content and detecting its distribution across platforms.
Addressing deepfakes and the impact of generative AI
TikTok has likewise started to automatically label AI-generated content on its platform, ensuring that the context behind a video is clear to viewers. The tool has since been expanded to cover content created on some other platforms and subsequently re-shared, and has been used by over 37 million creators.
Google has enhanced its search functions to remove non-consensual, sexually-explicit deepfakes and demote websites hosting high volumes of removed content, whilst also making it easier for victims to request the removal of sexually explicit deepfake content and working to reduce the prominence of such content in search results. Updates made in the past year have reduced exposure to explicit image results on these types of queries by over 70%.
YouTube has implemented a tool requiring creators to disclose when realistic content has been altered using AI, and X's Synthetic and Manipulated Media Policy ensures that media that could deceive or mislead users and cause harm is labelled.
Meta has changed the way it handles manipulated media based on feedback from the independent Oversight Board and a policy review process involving public opinion surveys and expert consultations. It now adds "AI info" labels to a wider range of video, audio and image content when it detects industry-standard AI image indicators or when people disclose that they are uploading AI-generated content. Transparency and additional context are increasingly seen as the better way to address manipulated media while avoiding the risk of unnecessarily restricting freedom of speech, so this approach keeps content on Meta's platforms with labels and context added.
Combatting fraud
To combat fraud, Google has partnered with the Global Anti-Scam Alliance and the DNS Research Federation to launch Global Signal Exchange (GSE), a new project with the ambition to become a “global clearinghouse for online scams”, with Google as its first Founding Member. By combining forces, the GSE aims to make it easier to share key signals of fraud, enabling faster identification and disruption of malicious activity. This includes online shopping scams, with the initial pilot of the project enabling Google to share over 100,000 URLs of suspicious or fraudulent merchants.
Meta has expanded a first-of-its-kind information sharing partnership with banks. The Fraud Intelligence Reciprocal Exchange (FIRE) programme allows banks to share intelligence directly with Meta to combat scams on its platforms. The early stage of this pilot has already led Meta to take action against thousands of accounts run by scammers, with approximately 20,000 accounts removed based on data shared. Meta is continuing to onboard more banks and strengthen its fraud detection capabilities, creating a safer digital environment for users in the UK and globally.
What comes next:
The clear roadmap set out by Ofcom and the commitment from Government to implementing the Act, as passed, have given welcome certainty to the industry.
This has allowed members to move forward and begin making the changes to their products and services that will help make the internet a safer place. It has also created good market conditions for providers of safety tech. As the Government and Ofcom complete the rollout of the Act, we encourage them to continue to provide as much certainty as possible to industry.
The Online Safety Act took too long to make its way through Parliament, and we believe it is everyone’s shared objective to ensure it is fully implemented on schedule, while creating the space to allow online service providers to move early where they can.
Once the Act is fully in force, Government, industry and the regulator can assess its effectiveness based on evidence and then work collaboratively to improve the regime over time.
techUK and our members are committed to this process and look forward to continuing to work with our partners as the UK’s Online Safety regime is fully established.
Author:
Edward Emerson
Head of Digital Regulation, techUK
Edward leads the Digital Regulation programme at techUK, which includes our work on online safety, fraud, and regulation for growth initiatives.
He has prior experience working for the Department for Digital, Culture, Media and Sport and has previously worked for a number of public affairs consultancies specialising in research and strategy, working with leading clients in the technology and financial services sectors.