3 key trends driving Artificial Intelligence (AI) in 2021
For many years, Ocado Technology has been at the forefront of applying artificial intelligence (AI) to real-world problems:
- AI powers personalised experiences in our webshop - learning what you like to order and tailoring relevant offers to you, with accurate stock balancing taking place behind the scenes.
- AI also powers our forecasting engines across over 50,000 products, which helps us to better predict demand and provide fresh groceries for customers - contributing to the lowest rate of food waste to landfill in the grocery industry.
- And it is AI again that manages the real-time control and health of the robot swarms that fulfil customer orders with speed and accuracy within our Customer Fulfilment Centres (CFCs).
- Applications of AI permeate our last mile too, for example optimising the routes we use for deliveries - taking into account factors such as the time of day and roadworks, to maintain delivery slots and reduce fuel emissions.
As serial innovators and disruptors, we are constantly learning and improving our craft, looking for the next way we can use technology to disrupt the grocery industry.
With our team’s expertise and experience in this area, a question I’m often asked is ‘Where is AI going, and how can we prepare for it?’
So what AI trends are on the horizon?
The answer to this question isn’t clear cut.
For me, it’s about identifying the key trends in the AI space and then understanding their impact on your organisation.
1. The focus will continue to move away from accuracy and onto other measures and aspects of machine learning
For a long time, the key metric in most academic papers and discussions has been accuracy, but this has recently started to shift. We expect this trend to continue.
Rather than trying to ‘just’ create the most accurate algorithm, more emphasis will be placed on other metrics and attributes of the system.
There will be more focus on creating systems that allow quick development and deployment (and easy management) of models, so that results can be reproduced easily.
Shipping early and shipping often will become more and more important, and as a result MLOps skills will become more valuable than raw algorithm-tuning skills.
At the same time, responsibility and trust will become more important as algorithms take over a bigger part of the decision-making process on behalf of customers and colleagues.
The legal framework to underpin this trend started with the General Data Protection Regulation (GDPR) in 2018 and has become a more global phenomenon with the introduction of the California Consumer Privacy Act (CCPA) in 2020.
Being able to explain a model will become more important, even at the expense of the raw performance offered by ‘black-box’ models. In other words, AI systems need to be ready to explain how and why they came to a decision - in a way that can be understood by those affected by it.
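To make this concrete, here is a minimal sketch - not our production tooling, and the data and model are purely illustrative - of one simple, model-agnostic way to surface which inputs drove a model’s predictions, using permutation importance:

```python
# Minimal sketch: model-agnostic explainability via permutation importance.
# The synthetic dataset and model are illustrative stand-ins only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a tabular decision problem.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# the features whose shuffling hurts most contributed most to the decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```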
With AI taking over more and more decision-making on behalf of customers and colleagues, it is important to make sure that the AI is trusted and works in the interests of those affected.
Developments in AI will only be accepted if they preserve agency for users, and this can only be achieved if users are able to follow the decision-making process.
2. Developments in hardware will create a tale of two worlds
We will start to see two opposing hardware-linked trends speed up. A lot of state-of-the-art models (e.g. GPT-3) require so many resources to train that only a small minority of organisations will ever be able to afford to do so at scale.
We will therefore continue to see increased emphasis on lightweight libraries (e.g. TensorFlow Lite) and techniques that reduce the computational cost of advanced methods (e.g. quantization).
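As a rough illustration of the ‘lighter’ end of this trend, TensorFlow Lite supports post-training quantization when converting a trained model. A minimal sketch follows - the toy model is purely a placeholder, and training is omitted:

```python
# Illustrative sketch: post-training quantization with TensorFlow Lite.
# The toy model is a placeholder; any trained Keras model converts the same way.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Convert to TensorFlow Lite, asking the converter to quantize the weights
# (float32 down to int8 where possible), shrinking the model for edge devices.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```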
This trend will be further accelerated by the growth in Internet of Things (IoT) devices that often don’t have access to huge compute resources and don’t always have high-bandwidth connections that would let them rely purely on models hosted in the cloud. Concerns about the carbon emissions generated by data centres will also contribute.
At the same time, we will also continue to see an explosion of AI-specific chips and hardware, such as optimised Graphics Processing Units (GPUs), Tensor Processing Units (TPUs) and Field-Programmable Gate Arrays (FPGAs) - maybe even quantum computing over time. These will be used by organisations with the required financial resources to develop models that can deal with more complexity and take into account much more training data than the ‘lighter’ systems mentioned before.
This can potentially result in a tale of two worlds - where organisations with access to the latest and greatest hardware will be able to build systems that significantly outperform their less wealthy competitors, while everyone else focuses on developing more hardware-light approaches.
3. Data will remain key to success
With much of the latest machine learning research available to the community, a big part of the value differentiation for organisations lies in access to proprietary data. Organisations will therefore need to become even better at extracting meaning out of the data they have.
Multimodal learning (i.e. learning from different data types) will gain in popularity as organisations get better at combining their different data sources. This will require new ways of thinking about dataflows and data warehouses.
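As a very simple flavour of this idea - the data and column names below are made up - a single model can learn from two data types at once by routing each through a modality-appropriate transformer:

```python
# Simple illustrative sketch: one model learning from two data types at once
# (free text + numeric features). All data and column names are invented.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

df = pd.DataFrame({
    "product_description": ["organic whole milk", "frozen pepperoni pizza",
                            "fresh basil", "chocolate ice cream"],
    "price": [1.20, 3.50, 0.80, 2.10],
    "reordered": [1, 0, 1, 1],  # toy target
})

# Route each column through a transformer suited to its modality,
# then feed the combined representation into a single classifier.
features = ColumnTransformer([
    ("text", TfidfVectorizer(), "product_description"),
    ("numeric", "passthrough", ["price"]),
])
model = Pipeline([("features", features), ("clf", LogisticRegression())])
model.fit(df[["product_description", "price"]], df["reordered"])
```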
In addition, we will see more use of active learning. While this technique is traditionally used to prioritise which data to label, we can expect the concepts behind it to gain wider traction and to be used in applications such as recommendations, where there might be multi-turn interactions with the user.
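The core loop is easy to sketch. Below is a minimal, illustrative example of pool-based active learning via uncertainty sampling, with synthetic data standing in for a real labelling workflow:

```python
# Minimal sketch of pool-based active learning via uncertainty sampling.
# Synthetic data stands in for a real pool of items awaiting human labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=0)
labelled = list(range(10))       # indices we already have labels for
pool = list(range(10, 500))      # unlabelled pool

for _ in range(5):
    model = LogisticRegression().fit(X[labelled], y[labelled])

    # Uncertainty = how close the predicted probability is to 0.5;
    # the most uncertain pool item is the most informative to label next.
    probs = model.predict_proba(X[pool])[:, 1]
    most_uncertain = pool[int(np.argmin(np.abs(probs - 0.5)))]

    # In a real system a human would label this item; here we simply
    # reveal the synthetic ground truth by moving the index across.
    labelled.append(most_uncertain)
    pool.remove(most_uncertain)
```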
This means that product teams will need to shift their thinking from a pure ‘what can machine learning do for me?’ to a more complex ‘what can I do for machine learning?’
Technology waits for no one
If there is one thing 2020 has shown us, it's that no one can predict the future. We are writing the story as we go.
Yet these trends are already giving rise to new ways of working at Ocado Technology.
For instance, we’re increasing our investment into the creation of a machine learning platform to make the development and deployment of AI at Ocado as simple as shopping online. This will allow us to create new machine learning products faster and with less effort.
We’re bringing together data science, engineering, analytics, UX and product into multi-disciplinary teams to make sure each team is equipped to solve real problems and provide value to our customers and partners.
What’s more, we are investigating how bespoke hardware can help us better extract meaning from our data - for example, by developing machine learning applications around video for use in our Customer Fulfilment Centres.
The key feature of AI is that it learns. And learning is an ongoing process. We’ve only scratched the surface of its potential.
So while we don't yet know what the future holds, one thing is for sure – AI is here to stay.
Author:
Gabriel Straub - Chief Data Scientist, Ocado Technology