"It is dangerous how much of AI work is done in silicon valley"

As Artificial Intelligence (AI) progresses and disrupts the way we work, the companies building it have a demon to deal with: if only a tiny segment of the population trains these machines, biases will creep in. IBM says it is making a lot of progress on what it calls 'responsible stewardship', which is about being ethical and transparent, and about understanding biases. David Kenny, Senior Vice President, IBM Watson and Cloud Platform, tells Goutam Das that there is a strong case for diversity in AI. Edited excerpts:
Q: How have AI for professional use and AI for the consumer evolved in the past few years?
A: The core of AI is machine learning, and the field of learning continues to advance. In Watson, a lot of our advancements have been about learning more from past data. We are more focused on professional use, so the ability to learn a specific domain - around regulation, health, law and others - is happening at a much faster rate. On the other end, the consumer players are focused on general intelligence, which is also growing. The consumer field is closer to the way search works - the ability to look across a broad range of fields. It works for simple tasks; there is breadth. Then there is depth of learning in a place like Watson, which is learning specific professional fields.
The machines are learning at a much faster rate than they were a year ago. In the professional field, the training is done by professionals, not consumers. The oncology model is trained by oncologists; the accounting models are trained by accountants. There is more peer review. Second, for professional AI to work, it needs to be embedded into the workflow. If a hospital is using Watson for oncology, it has to fit into the way the hospital works; you are not creating another layer. There is also a lot more security around professional AI. For good reason, there are rules, regulations and structures, and Watson respects them rather than disrupting them. It is critical to make sure it is usable in extending professions, as opposed to consumer AI, which is more willing to replace things.
Q: You have said earlier that you would be careful not to let biases seep into Watson. What sort of biases are there, and what happens when biases get into an AI machine?
A: All AI models are trained. A model predicts that this is a human face, this is a cancer tumour, this is a bad loan. If all your trainers are white men who live in Western countries, then chances are that your models are going to extend that bias into the system. We are working with accounting firms for mortgages. If you say 'these kinds of people are risks' and 'those kinds of people are not', the model will learn that. The more you have a diversity of trainers, a diverse population, the more you double-check whether that is true. Did the training produce a truth about a bad loan? Did it produce the truth about facial recognition? Early image recognition, because of the volume of data, tended to do a better job of reading the faces of Caucasian men than it did of dark-skinned women. It is really important that AI models are trained globally. It is very dangerous how much of the work is done in Silicon Valley. It is a very small place. We need a global perspective if we are doing things like health or law or ethics. You need to understand global traditions and perspectives from around the world; you need to make sure it serves the whole world and is not just exporting Western bias to the world.
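Kenny's point about skewed training data can be illustrated with a minimal sketch. The data, group labels and model below are synthetic assumptions for illustration only, not Watson code: a classifier trained mostly on one group can look accurate overall while quietly performing worse on an underrepresented group, which is why per-group evaluation matters.

```python
# Illustrative sketch (synthetic data, hypothetical groups - not IBM Watson code).
# Shows how a skewed training set can yield uneven accuracy across groups.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two-feature synthetic data; the true decision rule differs slightly by group.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] > shift).astype(int)
    return X, y

# Group A dominates the training set; group B is underrepresented.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluating per group (rather than only in aggregate) exposes the gap.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(2000, shift)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name} accuracy: {acc:.3f}")
```

Run as-is, the model scores well on the well-represented group and noticeably worse on the underrepresented one, even though nothing in the aggregate training accuracy flags a problem.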
Q: Many see AI as a threat more than an opportunity, and much of the conversation is about displacing the workforce rather than about productivity gains. Heading Watson, you would be an optimist. However, if you were to step outside Watson and look more objectively at what AI can and cannot do, where would you stand?
A: There is going to be a transition. At the end of the day, AI tends to do the simpler tasks. But AI will drive more autonomous vehicles, so there will be less need for chauffeurs and truck drivers. It will free up customer service to add more value and solve more problems. You can do more preventive maintenance. But it does not take humans away; human investment can be redeployed into improving the software. That is why I am optimistic. It does make businesses more productive; it does make products more useful. However, there isn't a job that AI isn't going to create leverage for. There is a real obligation on training. There is a real obligation on helping people build more skills. For centuries, technology has automated manual tasks, and we have to continue to advance human capacity to do new things. That is why we are investing in schools, training and more of a culture of continuous learning. Your learning can't stop at university because technology keeps advancing. Yes, some things will be disrupted, but new opportunities and more interesting work will be created.
Q: How long would this period of transition be?
A: We are engaging with other companies, consortia and governments on what we can do to help. Those who get smart about it are going to get to the front of it. Developing countries can leapfrog by shifting their education programmes. Some Western countries are going to feel the most disruption because they like the way things are. Education policy and skill development will differentiate countries, and they will differentiate companies as well. I think the disruption is totally manageable, but only with skill development. Over time, there have been technological advances that caused disruption; this one is big and it is happening quickly. I am also encouraged because I find a lot of prime ministers, presidents, mayors, governors and CEOs getting behind it. We need more. We need to really focus on this.
Q: Tell us about the progress Watson has made - what works well and what doesn't.
A: There are over 16,000 Watson engagements now. One area of progress is our approach of crawl, walk, run. We have done a much better job of breaking the components of Watson into microservices, allowing clients to start with a simple task - customer service, regulatory compliance or reading medical journals - and then do something more advanced. This ability to have an agenda that changes every three to six months is key. At the beginning of Watson, there were a couple of moonshots, like curing cancer and stopping climate change. We put a lot of money into getting to the end point, and we didn't break that journey into pieces. We are still going to get there, but we are going to get there step by step.
The second thing we are doing a better job of is making Watson a part of the existing workflow, as opposed to a separate new thing. Increasingly, the client of IBM is the general manager, the person who runs the business; it is not the IT department. That is why Watson is partnering with Salesforce and SAP, and KPMG has been using Watson for accounting. AI can't create the new work on its own - it will make the work better.
The third thing we are making a lot of progress on is what I call 'responsible stewardship'. If I am going to recommend these treatments for a tumour to a doctor, I have to be transparent about how the decision got made. We also understand data ownership and privacy. This is setting us on a much better path.