I had an interview with Mr. Paul Thies from Thomson Reuters on September 21, 2018; below is the transcript of the interview (with some minor changes).
Understanding people better, thanks to AI
For Chuan Sun, artificial intelligence is a means to understand people better and to run business operations more efficiently
We recently spoke with Chuan Sun, vice president and data scientist at JPMC, who provided us with his views on how data science and machine learning are valuable as tools for financial professionals, the challenge of overcoming bias in programming, and how artificial intelligence can be used in the fight against money laundering.
QUOTE: “AI can replace humans for error-prone tasks and liberate us to do more creative things. If you train AI well, it can learn for itself … it is possible for us to use self-learning AIs to achieve precision-crafted performance with reduced operational and human error.”
– Chuan Sun
PAUL THIES: What do you see as the top competitive advantages that artificial intelligence (AI) can bring to organizations?
CHUAN SUN: I think there are two major advantages that AI can bring to organizations. One is that it improves our ability to understand customers. For an organization to be truly customer-centric, it has to understand its customers deeply. We are all living in an ocean of data, where every millisecond tons of data (whether organizational or customer-related) are generated and fed into data pools. When that data is properly ingested into the machine learning workflow, decision makers can gain a deep and rapid understanding of customer behavior, which creates a competitive edge.
The second major advantage is that it really helps organizations become more efficient. Currently, most work still requires humans to handle and process information, but we know that humans are prone to making mistakes. Our brains and hands are not built to beat machines at computationally intensive tasks.
Consider Murphy’s Law, which says that anything that can go wrong will go wrong. For example, scanning thousands of compliance files requires thousands of human hours; an AI system, however, can finish in seconds – that is one way JPMorgan Chase deployed AI last year. AI can replace humans for error-prone tasks and liberate us to do more creative things. If you train AI well, it can learn for itself; a well-known example is AlphaZero, which leveraged self-play and reinforcement learning to master the game of Go without human knowledge of the game. To this end, it is possible for us to use self-learning AIs to achieve precision-crafted performance with reduced operational and human error.
PAUL THIES: What are some use cases of data science that are particularly valuable for financial professionals?
CHUAN SUN: I think there are many use cases. Depending on the category of the financial services, products, or applications, machine learning can put many different tools and algorithms at the disposal of data scientists and financial professionals. An example is traditional statistics-based analysis – it’s very simple, but it’s also very powerful and useful in many scenarios. Simple algorithms like linear regression, generalized linear models, and logistic regression are still widely used in many cases, such as credit card applications, mortgage applications, loan applications, and even in high-frequency trading. I’ve talked to many people in the trading area, and those simple methods still work very well.
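To make the logistic regression use case concrete, here is a minimal sketch of a classifier trained by plain gradient descent on made-up applicant data. The features (income, debt-to-income ratio), labels, and learning rate are all hypothetical illustrations, not any institution's actual credit model:

```python
import math

def train_logistic_regression(X, y, lr=0.1, epochs=1000):
    """Fit a logistic regression classifier with stochastic gradient descent.

    X: list of feature vectors, y: list of 0/1 labels.
    Returns (weights, bias)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))        # sigmoid probability
            err = p - yi                          # gradient of the log-loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Return 1 (approve) if the predicted probability is at least 0.5."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Toy applicants: (income in $10k, debt-to-income ratio) -> approved?
X = [[3.0, 0.6], [8.0, 0.2], [2.5, 0.7], [9.0, 0.1], [4.0, 0.5], [7.5, 0.3]]
y = [0, 1, 0, 1, 0, 1]
w, b = train_logistic_regression(X, y)
```

The same fitted model can then score a new, unseen applicant with `predict(w, b, features)`.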
To obtain high-quality features there are many things to try, such as data sampling. These techniques are very simple, but if they are not applied in the right way they cannot give you a good result. Feature selection and feature embedding are two useful techniques; they can be used in pretty much any machine learning workflow as a standard pre-processing step. Feature normalization and standardization are also things we must do for nearly all machine learning algorithms. There are some exceptions (for example, tree-based algorithms or certain neural network-based algorithms), but generally you need to standardize or normalize.
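A minimal sketch of the standardization step described above, assuming plain z-scoring of a single feature column (the income values are fabricated for illustration):

```python
import math

def standardize(values):
    """Z-score one feature column: subtract the mean, divide by the std dev."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = math.sqrt(var) or 1.0   # guard against a zero-variance column
    return [(v - mean) / std for v in values]

incomes = [30_000, 45_000, 60_000, 75_000, 90_000]
scaled = standardize(incomes)   # now mean 0, variance 1
```

After this step the feature contributes on the same scale as any other standardized feature, which is the point of doing it before gradient-based or distance-based algorithms.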
Many financial services generate a lot of time-series data – for example, credit card transaction histories or stock prices. Some companies also scrape daily news articles from different sources. It could be very valuable for financial professionals to understand algorithms for extracting insights from time-series data to do forecasting or trend analysis. Whether it is as simple as an AutoRegressive Integrated Moving Average (ARIMA) model or as sophisticated as a recurrent neural network-based model like Long Short-Term Memory (LSTM), I recommend financial professionals try using those algorithms.
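To make the simple end of that spectrum concrete, here is a sketch of fitting a first-order autoregressive model, AR(1), the simplest relative of ARIMA, by ordinary least squares. The series values below are fabricated so that they follow x_t = 2 + 0.5·x_{t-1} exactly:

```python
def fit_ar1(series):
    """Fit x_t = c + phi * x_{t-1} by ordinary least squares on lagged pairs."""
    xs, ys = series[:-1], series[1:]        # (previous value, next value) pairs
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    phi = cov / var                          # autoregressive coefficient
    c = my - phi * mx                        # intercept
    return c, phi

def forecast(series, c, phi, steps):
    """Roll the fitted recurrence forward from the last observed value."""
    out, last = [], series[-1]
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out

series = [10.0, 7.0, 5.5, 4.75, 4.375]      # generated by x_t = 2 + 0.5*x_{t-1}
c, phi = fit_ar1(series)
```

On this noise-free toy series the fit recovers c = 2 and phi = 0.5 exactly; real financial series are noisy, so in practice one would also difference the data and validate the fit, which is what the full ARIMA machinery handles.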
One thing I believe is that whenever there is a target metric or KPI we really care about, it is possible to apply certain statistical or machine learning models to improve that metric, as long as we have already collected a reasonable amount of high-quality data.
PAUL THIES: How can machine learning assist with credit decisions, and how do we keep bias from creeping in to the programming?
CHUAN SUN: I think first of all we need to understand where bias comes from. Let’s consider a very simple example. Suppose there is a start-up company that wants to build autonomous cars. In one project, they want a test car to drive on the street, understand traffic, and detect whether there are any other cars in front of it.
At a very early offline data collection stage, this start-up company likely needs to collect millions of images. It is possible that at this stage the data collection rules introduce bias, either intentionally or unintentionally – for example, if a data collector only collected images on very sunny days, the learned model will not generalize well to rainy or snowy days, because it only learned from sunny weather. In this case, the right approach would be to make sure the collected images in the training dataset are representative enough to cover all possible weather conditions. You can imagine this is one source of bias.
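The representativeness check described above can be sketched as a simple coverage report over the collected labels. The weather tags, counts, and list of required conditions here are all hypothetical:

```python
from collections import Counter

def condition_coverage(labels, required):
    """Report each required condition's share of the dataset and flag gaps.

    labels: one weather tag per collected image;
    required: the conditions the deployed model must handle."""
    counts = Counter(labels)
    total = len(labels)
    shares = {c: counts.get(c, 0) / total for c in required}
    missing = [c for c, s in shares.items() if s == 0]
    return shares, missing

# A skewed collection: mostly sunny images, no foggy images at all.
labels = ["sunny"] * 800 + ["rainy"] * 150 + ["snowy"] * 50
shares, missing = condition_coverage(labels, ["sunny", "rainy", "snowy", "foggy"])
```

A report like this, run before any training, makes the sunny-day skew visible while it is still cheap to fix by collecting more data.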
Let’s imagine that the data collection is fair and very comprehensive, and the data collectors use traffic images from all weather conditions (sunny, rainy, snowy, foggy, etc.). Due to the hierarchical nature and spatial dependency of the elements of a traffic image (the road, the lights, the pedestrians, the other cars, and so on), it is very important to use a machine learning model with enough capacity to explain all the variance in the traffic images.
Now, if the decision makers in that start-up company make the very simplified assumption that a linear model can fit the traffic space well, and they build a shallow model like a multinomial logistic regression with regularizers, then they have essentially assumed that hyperplanes can cleanly split all types of traffic under different weather conditions. That assumption does not hold, and in this particular case the engineers have introduced algorithmic bias. Using shallow models with small capacity, no matter how many regularizers are used and no matter how cleverly you penalize the model, is not the right solution for this problem.
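A classic way to see that limited model capacity is itself a source of bias is the XOR problem: no hyperplane separates it, no matter how the linear model is regularized. The brute-force grid search below is only a crude illustrative stand-in for "try to fit a linear model", not a real training procedure:

```python
from itertools import product

def linearly_separable(points, labels, grid=21, lim=2.0):
    """Brute-force search for a separating rule w1*x + w2*y + b > 0.

    Tries every weight combination on a coarse grid and returns True
    if any of them classifies all the points correctly."""
    vals = [lim * (2 * i / (grid - 1) - 1) for i in range(grid)]
    for w1, w2, b in product(vals, repeat=3):
        preds = [1 if w1 * x + w2 * y + b > 0 else 0 for x, y in points]
        if preds == labels:
            return True
    return False

pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
and_ok = linearly_separable(pts, [0, 0, 0, 1])   # AND: a hyperplane exists
xor_ok = linearly_separable(pts, [0, 1, 1, 0])   # XOR: no hyperplane exists
```

AND is found to be separable while XOR is not; if the true decision boundary is XOR-like, a linear model is the wrong hypothesis class, which is exactly the algorithmic bias described above.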
To sum up, I think data bias and algorithmic bias are the two major sources of bias. Beyond those, there is human-made bias, because humans made the machines and the algorithms, and humans directed how those machines and algorithms collect and interpret data. I believe that if humans are involved in the decision process, then bias might always exist.
Any time a decision needs to be made in AI or machine learning, that decision should be made by a group of people rather than by just one person. Anytime there’s confusion, just vote and make the decision process democratic, because this is exactly what we learn from several of the most successful machine learning algorithms. Those algorithms are very powerful. For example, a random forest is an ensemble of decision trees; gradient boosting is an ensemble of weak learners; a deep neural network is an ensemble of logistic regression units with rectifiers; and long short-term memory (LSTM) is an ensemble of vanilla neural networks with temporal dependencies. As long as we use an ensemble and make the decision process democratic, we can remove bias. We may not remove 100% of the bias, but we can remove it to some extent.
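The democratic-voting idea above can be sketched as a plain majority vote over the predictions of several models; the three 0/1 votes below stand in for hypothetical weak learners scoring the same input:

```python
from collections import Counter

def majority_vote(predictions):
    """Return the most common label among the models' predictions.

    With binary 0/1 votes, use an odd number of voters to avoid ties."""
    return Counter(predictions).most_common(1)[0][0]

votes = [1, 0, 1]            # two models say 1, one says 0
decision = majority_vote(votes)
```

This is the same aggregation step a random forest applies to its trees: one overconfident (or biased) voter is outweighed by the rest.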
PAUL THIES: How do you see AI and machine learning employed in the fight against money laundering?
CHUAN SUN: To my knowledge, the adoption of very sophisticated machine learning and artificial intelligence algorithms in the anti-money laundering effort has been slow, both because of the difficulty of the problem and because of the rigorous scrutiny of regulatory compliance.
Money laundering involves three steps: placement, layering, and integration. Traditionally, the major method used is called rule-based modeling (for example, decision tree-based or random forest-based). This essentially means you just write a bunch of “if/else” checks: if A then B, else C. This is not surprising at all, because rule-based logic is very easy to establish and integrate. The rule-based method, however, is not perfect, because we cannot hire people to write rules covering all possible money laundering scenarios; technology evolves and the bad guys can also learn. Also remember that with the recent movement toward the so-called democratization of AI by some of the giant tech companies, bad actors can also use AI in the cloud to upgrade their arsenals for illegal activity, including money laundering and cybercrime.
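A toy sketch of the rule-based "if/else" approach described above. The thresholds, rule names, and country codes are illustrative placeholders, not real compliance logic:

```python
def flag_transaction(txn):
    """Run a hand-written rule list over one transaction dict and
    return the names of any rules it trips."""
    flags = []
    if txn["amount"] >= 10_000:
        flags.append("large_amount")            # illustrative reporting threshold
    if 9_000 <= txn["amount"] < 10_000:
        flags.append("possible_structuring")    # suspiciously just under it
    if txn["country"] in {"XX", "YY"}:          # placeholder high-risk codes
        flags.append("high_risk_jurisdiction")
    return flags

hit = flag_transaction({"amount": 9_500, "country": "US"})
```

The brittleness is visible immediately: anyone who learns the thresholds can route transactions around them, which is why the rules alone cannot cover all scenarios.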
To really adopt machine learning and AI in anti-money laundering efforts, we need to understand that there are two main parts. The first is monitoring financial and non-financial transactions to identify suspicious activities; technically, we can use time-series modeling (such as LSTMs) and trend analysis. The second is Know Your Customer (KYC); we can do things like customer segmentation through clustering, anomaly detection, novelty detection, or even advanced techniques like autoencoders. Many teams are already exploring the use of reinforcement learning to tackle this problem. Essentially, though the problem is hard, I think we should be optimistic, as there are so many different tools at our disposal. Over time, I think we will find a way to use explainable AI to help solve the problem of money laundering as well.
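As a bare-bones stand-in for the anomaly-detection step mentioned above, here is a simple z-score outlier check on a fabricated transaction history; real systems would use far richer features and models than a single amount column:

```python
import math

def zscore_outliers(amounts, threshold=3.0):
    """Flag amounts more than `threshold` standard deviations from the mean."""
    mean = sum(amounts) / len(amounts)
    std = math.sqrt(sum((a - mean) ** 2 for a in amounts) / len(amounts))
    if std == 0:
        return []                       # no variation, nothing stands out
    return [a for a in amounts if abs(a - mean) / std > threshold]

# Small everyday amounts, plus one transfer wildly out of pattern.
history = [120, 80, 95, 110, 105, 90, 100, 9_500]
suspicious = zscore_outliers(history, threshold=2.0)
```

Unlike the hand-written rules, this check adapts to each customer's own history, which is the basic appeal of learning-based monitoring over fixed thresholds.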