Lee Sedol, a world-class Go champion, was flummoxed by the 37th move DeepMind’s AlphaGo made in the second game of the famous 2016 series. So flummoxed that it took him nearly 15 minutes to formulate a response. The move was strange to other experienced Go players as well, with one commentator suggesting it was a mistake. In fact, it was a canonical example of an artificial intelligence algorithm learning something that seemed to go beyond just pattern recognition in data — learning something strategic and even creative. Indeed, beyond just feeding the algorithm past examples of Go champions playing games, DeepMind developers trained AlphaGo by having it play many millions of matches against itself. During these matches, the system had the chance to explore new moves and strategies, and then evaluate if they improved performance. Through all this trial and error, it discovered a way to play the game that surprised even the best players in the world.
If this kind of AI with creative capabilities seems different from the chatbots and predictive models most businesses end up with when they apply machine learning, that’s because it is. Instead of machine learning that uses historical data to generate predictions, game-playing systems like AlphaGo use reinforcement learning — a mature machine learning technology that’s good at optimizing tasks. To do so, an agent takes a series of actions over time, and each action is informed by the outcome of the previous ones. Put simply, it works by trying different approaches and latching onto — reinforcing — the ones that seem to work better than the others. With enough trials, you can reinforce your way to beating your current best approach and discover a new best way to accomplish your task.
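If you’re curious what “reinforcing what works” looks like in code, here is a deliberately minimal sketch of that trial-and-error loop, with invented approaches and payoffs (an epsilon-greedy strategy over a few candidate approaches). It is an illustration, not anyone’s production system.

```python
import random

# Toy illustration of "try approaches, reinforce what works."
# Each "approach" could be a pricing rule, a routing choice, a Go move, etc.
approaches = ["A", "B", "C"]
value_estimate = {a: 0.0 for a in approaches}   # learned estimate of how well each works
trial_count = {a: 0 for a in approaches}
epsilon = 0.1                                   # fraction of trials spent exploring

def observed_reward(approach):
    # Stand-in for real feedback (profit, time saved, a game won...). Numbers are made up.
    return random.gauss({"A": 1.0, "B": 1.5, "C": 0.5}[approach], 1.0)

for trial in range(10_000):
    if random.random() < epsilon:               # occasionally try something new
        choice = random.choice(approaches)
    else:                                       # otherwise exploit the current best
        choice = max(approaches, key=value_estimate.get)
    reward = observed_reward(choice)
    trial_count[choice] += 1
    # Nudge the estimate toward what we just observed: the "reinforcing" step.
    value_estimate[choice] += (reward - value_estimate[choice]) / trial_count[choice]

print(max(approaches, key=value_estimate.get))  # the discovered best approach
```

With enough trials, the value estimates settle and the loop latches onto whichever approach has the best average payoff, which is exactly the “beat your current best” dynamic described above.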
Despite its demonstrated usefulness, however, reinforcement learning is mostly used in academia and niche areas like video games and robotics. Companies such as Netflix, Spotify, and Google have started using it, but most businesses lag behind. Yet opportunities are everywhere. In fact, any time you have to make decisions in sequence — what AI practitioners call sequential decision tasks — there’s a chance to deploy reinforcement learning.
Consider the many real-world problems that require deciding how to act over time, where there is something to maximize (or minimize), and where you’re never explicitly given the correct solution. For example:
- How should you route data traffic to different servers or decide what servers to power down in a data center?
- When building a molecule in simulation to develop a breakthrough drug, how do you determine which reagent to add next?
- If you want to sell a large amount of stock, how do you break it into smaller orders throughout the day to minimize how far the stock price drops?
If you’re a company leader, there are likely many processes you’d like to automate or optimize but that are too dynamic, or have too many exceptions and edge cases, to program into software. Through trial and error, reinforcement learning algorithms can learn to solve even the most dynamic optimization problems — opening up new avenues for automation and personalization in quickly changing environments.
What Reinforcement Learning Can Do
Many businesses think of machine learning systems as “prediction machines” and apply algorithms to forecast things like cash flow or customer attrition based on data such as transaction patterns or website analytics behavior. These systems tend to use what’s called supervised machine learning. With supervised learning, you typically make a prediction: the stock will likely go up by four points in the next six hours. Then, after you make that prediction, you’re given the actual answer: the stock actually went up by three points. The system learns by updating its mapping between input data — like past prices of the same stock and perhaps of other equities and indicators — and output prediction to better match the actual answer, which is called the ground truth.
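For contrast, here is a toy sketch of that supervised loop, with made-up numbers and features: the model makes a prediction, is shown the ground truth, and nudges its mapping from inputs to outputs to shrink the error.

```python
import numpy as np

# Toy supervised learner: predict a price move from a few indicators.
# The data and features are invented purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))            # inputs: e.g., recent returns, index move, volume signal
true_weights = np.array([2.0, -1.0, 0.5])
y = X @ true_weights + rng.normal(scale=0.1, size=500)   # ground truth: actual price moves

weights = np.zeros(3)                    # the model's mapping from inputs to prediction
learning_rate = 0.1
for _ in range(200):
    prediction = X @ weights             # "the stock will go up by four points"
    error = prediction - y               # "it actually went up by three"
    weights -= learning_rate * X.T @ error / len(y)   # update the mapping toward the truth

print(weights)  # approximately recovers the underlying relationship after training
```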
With reinforcement learning, however, there’s no correct answer to learn from. Reinforcement learning systems produce actions, not predictions — they’ll suggest the action most likely to maximize (or minimize) a metric. You can only observe how well you did on a particular task and whether it was done faster or more efficiently than before. Because these systems learn through trial and error, they work best when they can rapidly try an action (or sequence of actions) and get feedback — a stock market algorithm that takes hundreds of actions per day is a good use case; optimizing customer lifetime value over the course of five years, with only irregular interaction points, is not. Significantly, because of how they learn, they don’t need mountains of historical data — they’ll experiment and create their own data along the way.
They can therefore be used to automate a process, like placing items into a shipping container with a robotic arm, or to optimize a process, like deciding when and through which channel to contact a client who missed a payment so as to recoup the most revenue with the least effort. In either case, designing the inputs, actions, and rewards the system uses is the key: it will optimize exactly what you encode it to optimize, and it doesn’t handle ambiguity well.
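In practice, that design work takes the form of a small, explicit contract between the business process and the learning algorithm. The sketch below is hypothetical, using the missed-payment example with invented action costs, but it shows the point: the inputs, the allowed actions, and the reward are all spelled out, and the system will optimize exactly that number.

```python
from dataclasses import dataclass

# Hypothetical environment for the "contact a client who missed a payment" example.
# The learning algorithm only ever sees what is encoded here: inputs, actions, reward.

ACTIONS = ["wait", "email", "sms", "phone_call"]   # the flexibility we grant the system

@dataclass
class ClientState:
    days_overdue: int       # inputs: what the system observes about the client
    balance_owed: float
    contacts_so_far: int

def reward(state: ClientState, action: str, amount_recouped: float) -> float:
    # What we told the system to care about: recouped revenue minus expended effort.
    effort_cost = {"wait": 0.0, "email": 0.5, "sms": 0.5, "phone_call": 5.0}[action]
    return amount_recouped - effort_cost

# A reinforcement learning agent would repeatedly observe a ClientState, pick one of
# ACTIONS, receive this reward, and adjust its policy. It optimizes exactly this number,
# including any loopholes accidentally left in it.
print(reward(ClientState(days_overdue=30, balance_owed=1200.0, contacts_so_far=2),
             "email", amount_recouped=200.0))
```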
Google’s use of reinforcement learning to help cool its data centers is a good example of how this technology can be applied. Servers in data centers generate a lot of heat, especially when they’re in close proximity to one another, and overheating can lead to IT performance issues or equipment damage. In this use case, the input data consists of various measurements of the environment, like air pressure and temperature. The actions are fan speed (which controls air flow) and valve opening (the amount of water used) in air-handling units. The system includes rules to follow safe operating guidelines, and it sequences how air flows through the center to keep the temperature at a specified level while minimizing energy usage. The physical dynamics of a data center are complex and constantly changing; a shift in the weather affects temperature and humidity, and each physical location often has a unique architecture and setup. Reinforcement learning algorithms are able to pick up on nuances that would be too hard to describe with formulas and rules.
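To make that concrete, here is a hedged sketch of what the reward and safety rules for such a cooling controller might look like. The setpoint, weights, and bounds below are invented for illustration and are not Google’s actual formulation.

```python
# Illustrative only: invented setpoint, weights, and bounds, not Google's actual logic.

SETPOINT_C = 24.0      # target cold-aisle temperature
ENERGY_WEIGHT = 0.1    # how much energy use matters relative to temperature error

def clamp_to_safe_range(fan_speed, valve_opening):
    # Safety rules sit outside the learned policy: never exceed operating guidelines.
    return min(max(fan_speed, 0.2), 1.0), min(max(valve_opening, 0.0), 0.8)

def cooling_reward(measured_temp_c, energy_kwh):
    # Penalize drifting from the setpoint and penalize energy use; higher is better.
    return -(abs(measured_temp_c - SETPOINT_C) + ENERGY_WEIGHT * energy_kwh)

# Each control step: read sensors (inputs), pick a fan speed and valve opening (actions),
# clamp them to safe bounds, apply them, then score the outcome with cooling_reward.
print(clamp_to_safe_range(1.4, 0.9))     # an unsafe proposal gets clamped to (1.0, 0.8)
print(cooling_reward(25.5, 12.0))        # -(1.5 + 1.2) = -2.7
```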
Reinforcement Learning at Work

How top companies are using this breed of AI to solve tough problems.

| Company | Application | Sector | Inputs | Actions | Objective |
| --- | --- | --- | --- | --- | --- |
| Royal Bank of Canada | Trade execution platform for multiple strategies | Financial services | 200-plus market-related data inputs | Buy, sell, or hold stocks | Trade as close as possible to VWAP, a common price benchmark |
| Netflix | Test schedules for business partner devices | Technology | Historical test and device performance information | Which test to run next | Minimize device failure |
| Spotify | Recommendation engine | Entertainment | Previous songs liked/disliked/not played | Which songs to put in a playlist | Maximize user listening time |
| JPMorgan Chase | Financial derivatives risk and pricing calculations | Financial services | Historical market data | Price and sell a financial product | Maximize future cash flows of an investment portfolio |
| Google | Data center cooling | Technology | Temperature and air pressure readings | Turn on fans; add water to air-handling units | Control temperature and reduce energy usage |
| DiDi | Order dispatching | Ride hailing | Number of idle vehicles, number of orders, locations, destinations | Match driver to passenger | Minimize pickup time and maximize revenue |

Note: Details for these use cases can be found in published papers, but we could not verify whether they are used in production applications. © HBR.org
Here at Borealis AI, we partnered with Royal Bank of Canada’s Capital Markets business to develop a reinforcement learning-based trade execution system called Aiden. Aiden’s objective is to execute a customer’s stock order (to buy or sell a certain number of shares) within a specified time window, seeking prices that minimize loss relative to a specified benchmark. This becomes a sequential decision task because of the detrimental market impact of buying or selling too many shares at once: the task is to sequence actions throughout the day to minimize price impact.
The stock market is dynamic, and the performance of traditional algorithms (the rules-based algorithms traders have used for years) can vary when today’s market conditions differ from yesterday’s. We felt this was a good reinforcement learning opportunity: it had the right balance between clarity and dynamic complexity. We could clearly enumerate the different actions Aiden could take and the reward we wanted to optimize (minimizing the difference between the prices Aiden achieved and the market volume-weighted average price, or VWAP, benchmark). The stock market moves fast and generates a lot of data, giving the algorithm quick iterations to learn.
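For readers who want to see what “staying close to VWAP” means concretely, here is a simplified, hypothetical calculation of that kind of reward for a sell order broken into smaller child orders. It shows the general shape of the idea, not Aiden’s actual implementation.

```python
# Simplified illustration of a VWAP-tracking reward for a sell order.
# Not Aiden's actual reward; the prices and volumes below are made up.

def vwap(prices, volumes):
    # Volume-weighted average price over the trading window.
    return sum(p * v for p, v in zip(prices, volumes)) / sum(volumes)

def execution_reward(child_order_prices, child_order_sizes, market_prices, market_volumes):
    # How well our sequence of small sell orders did against the market benchmark.
    our_avg_price = vwap(child_order_prices, child_order_sizes)
    benchmark = vwap(market_prices, market_volumes)
    return our_avg_price - benchmark   # positive: sold above VWAP; negative: slippage

# Example: three child orders spread through the day versus the day's market tape.
print(execution_reward(
    child_order_prices=[101.2, 100.8, 100.5],
    child_order_sizes=[1_000, 2_000, 1_000],
    market_prices=[101.0, 100.9, 100.6, 100.4],
    market_volumes=[50_000, 80_000, 60_000, 40_000],
))
```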
We let the algorithm learn through countless simulations before launching the system live to the market. Ultimately, Aiden proved able to perform well during some of the more volatile market periods at the beginning of the Covid-19 pandemic, conditions that are particularly tough for predictive AIs. It was able to adapt to the changing environment while continuing to stay close to its benchmark target.
How to Spot an Opportunity for Reinforcement Learning
How can you tell if you’re overlooking a problem that reinforcement learning might be able to fix? Here’s where to start:
Make a list.
Create an inventory of business processes that involve a sequence of steps, and clearly state what you want to maximize or minimize. Focus on processes with dense, frequent actions and opportunities for feedback, and avoid those where actions are infrequent or where it’s hard to observe which choices worked best. Getting the objective right will likely require iteration.
Consider other options.
Don’t start with reinforcement learning if you can tackle a problem with other machine learning or optimization techniques. Reinforcement learning is most helpful when you lack sufficient historical data to train an algorithm and need to explore options, creating your own data along the way.
Be careful what you wish for.
If you do want to move ahead, domain experts should collaborate closely with technical teams to help design the inputs, actions, and rewards. For inputs, seek the smallest set of information the system could use to make a good decision. For actions, ask how much flexibility you want to give the system; start simple and expand the range of actions later. For rewards, think carefully about the outcomes, and avoid the traps of optimizing one variable in isolation or chasing short-term gains that come with long-term pains.
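One way to guard against the short-term-gains trap is to score whole sequences of outcomes rather than each step in isolation. The toy comparison below, with made-up numbers from the missed-payment example, shows how a discounted cumulative reward exposes a policy that looks good immediately but costs more later.

```python
# Toy illustration (made-up numbers): why rewards should look beyond the next step.

def discounted_return(rewards_over_time, discount=0.95):
    # Sum of rewards over time, with later outcomes weighted slightly less than immediate ones.
    return sum(r * discount**t for t, r in enumerate(rewards_over_time))

# Policy A: aggressive phone calls recoup revenue now, but annoyed clients churn later.
policy_a = [10, 8, -5, -5, -5, -5]
# Policy B: gentler contact recoups less up front but keeps the relationship healthy.
policy_b = [4, 4, 4, 4, 4, 4]

print(discounted_return(policy_a))   # looks worse once long-term costs are counted
print(discounted_return(policy_b))
```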
Ask whether it’s worth it.
Will the possible gains justify the cost of development? Many companies need to make digital transformation investments to have the systems, and the dense, data-generating business processes, in place to really make reinforcement learning systems useful. To answer whether the investment will pay off, technical teams should take stock of computational resources to ensure you have the compute power required to support trials and allow the system to explore and identify the optimal sequence. (They may want to create a simulation environment to test the algorithm before releasing it live.) On the software front, if you’re planning to use a learning system for customer engagement, you need a system that can support A/B testing. This is critical to the learning process, as the algorithm needs to explore different options before it can latch onto the one that works best. Finally, if your technology stack can only release features universally, you likely need to upgrade before you start optimizing.
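As a hypothetical illustration of why that testing infrastructure matters, the sketch below shows one common way a learning system can allocate traffic across two options, shifting toward the better performer as evidence accumulates (Thompson sampling over two invented email variants). Real systems differ, but the explore-then-exploit pattern is the same.

```python
import random

# Hypothetical: choosing between two email variants per customer, shifting traffic
# toward whichever variant converts better as evidence accumulates.
true_conversion = {"variant_a": 0.05, "variant_b": 0.08}   # unknown to the system
successes = {"variant_a": 1, "variant_b": 1}
failures = {"variant_a": 1, "variant_b": 1}

for customer in range(20_000):
    # Thompson sampling: draw a plausible conversion rate for each variant, pick the higher draw.
    draws = {v: random.betavariate(successes[v], failures[v]) for v in true_conversion}
    chosen = max(draws, key=draws.get)
    converted = random.random() < true_conversion[chosen]
    successes[chosen] += converted
    failures[chosen] += not converted

print(successes, failures)   # most traffic ends up on the better-performing variant
```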
Prepare to be patient.
And last but not least, as with many learning algorithms, you have to be open to errors early on while the system learns. It won’t find the optimal path from day one, but it will get there in time — and potentially find surprising, creative solutions beyond human imagination when it does.
While reinforcement learning is a mature technology, it’s only now starting to be applied in business settings. The technology shines when used to automate or optimize business processes that generate dense data, and where there could be unanticipated changes you couldn’t capture with formulas or rules. If you can spot an opportunity, and either lean on an in-house technical team or partner with experts in the space, there’s a window to apply this technology to outpace your competition.