
How the Centre for the Fourth Industrial Revolution is helping invent an AI wheel for all

If AI, IoT, blockchain or any other emerging technology has a place in your business, then this will be some valuable food for thought.

The World Economic Forum is a not-for-profit foundation that works to shape global, regional and industry agendas by engaging with leaders across politics, business, culture and more. Within it sits the Centre for the Fourth Industrial Revolution – a hub for global, multi-stakeholder cooperation that helps develop governance around emerging technologies like artificial intelligence.

The policies, norms, standards and incentives created aim to shape the development and deployment of these technologies for the benefit of everyone. They also help build consensus on the way they’re used in our lives and businesses.

We spoke to Kay Firth-Butterfield, Head of AI and Machine Learning, to find out what some of the big questions around AI are, what brands should be doing to responsibly prepare themselves to use it and why it’s better to help shape the wheel:

 

Kay Firth-Butterfield, Head of AI and Machine Learning, World Economic Forum

 

Could you give us a brief overview of the World Economic Forum and the work it does?

At the Centre for the Fourth Industrial Revolution Network we deal with the governance of technology and particularly technologies that are emerging like artificial intelligence (AI), IoT, autonomous vehicles, drones, blockchain etc.

I work out of San Francisco, which is the headquarters of this network of centres. But we also have centres in Japan, China, India, the UAE, Colombia, Israel, Norway, South Africa and Saudi Arabia.

When we talk about governance of technology, we want to make sure that we get the very best out of these technologies. I believe AI is a fundamental technology that will change the world. What we want to do is make sure it changes the world for the better and that we minimise some of the negatives that we’re seeing in AI around privacy, safety, accountability, transparency and bias.

Our work is governance with a small g – we’re not necessarily talking about regulation and government. We look at regulation by government through to self-regulation by companies and everything in between, which might be norms or certification or standard setting. It’s a broad approach to ensure that these technologies give humans and the planet the very best opportunities.

Our colleagues in Geneva and New York tend to deal with equally important issues like the future of work, gender and the environment, but they’re not doing this particular piece around the governance of technology.

For example, we have a project within my team on bias in human resources, run by a professor from McGill University. She will be producing a toolkit to help everyone who wants to use these AI tools for human resources to really think through the AI implications. Our colleagues in Geneva, by contrast, are thinking about the use of these tools in a much broader fashion, so they will use the toolkit to inform the general thinking about what the future of human resources looks like.

 

Kay Firth-Butterfield, Project Head, Artificial Intelligence and Machine Learning, World Economic Forum; Lee Kai-Fu, Chairman and Chief Executive Officer, Sinovation Ventures, People’s Republic of China; Erica Kochi, Co-Founder, UNICEF Innovation, United Nations Children’s Fund (UNICEF), New York; Ronald Dahl, Director, Institute of Human Development, University of California, Berkeley, USA; Carlos Edmar Pereira, Chief Executive Officer and Founder, Livox, Brazil; and Wanuri Kahiu, Film Director, AFROBUBBLEGUM, Kenya, during the session “Generation AI” at the Annual Meeting 2018 of the World Economic Forum in Davos, 26 January 2018. Copyright by World Economic Forum / Boris Baldinger

 

 

How do you typically approach projects?

The methodology is that we find a core community of people who want to work on a specific project and then we test that by piloting it. All of our projects have a multi-stakeholder community of governments, companies, academia and so on. The idea at the end of any project is to scale what was created globally.

We’re working on a project with the UK government around how governments purchase AI for their own use. That’s really important because it allows governments to govern the technology that is produced simply by using their buying power.

What we have done so far is to create high-level guidelines for the procurement of AI across government, which were adopted by the UK government back in September. Beyond that, we’re working on a workbook which will help procurement officers actually use those high-level guidelines.

We have now piloted this with four different UK industries. We have also piloted in the UAE and Bahrain and we are now looking to pilot in India and Colombia.

Once we’ve learnt everything we can from the pilots, we expect our government partners to use the tool, and we will have created these ethical guidelines and set standards across the world.

 

 

Do you aim to scale things globally so that we’re all using the same standards?

All governments realise they have to do something in this space. Instead of all reinventing the wheel, we’re hoping we can invent the wheel and then they can use it.

We don’t necessarily advocate regulation because in most countries regulation takes a long time, and in a fast-moving space like AI, what you want to regulate has changed by the time you get to regulation.

We can see that in the GDPR. They started working on it years before it came into effect. That’s not to say it hasn’t been really seminal legislation, important across the world, but when it came in it was no longer addressing the same problem as when they started.

 

 

Are there some things where you need regulation so that people can’t use technology in unethical ways?

Another project we’re working on, with the government of France, is around facial recognition. This is an area where there has been a lot of talk about regulation.

I did a panel last January with Brad Smith, who is the president of Microsoft, and we were both talking about the need for some form of regulation around facial recognition technology. We come at it from slightly different backgrounds, but he has written widely about how important regulation in this area is.

He says that we need to think about how you maintain democracy and civil liberties when the state has the opportunity and the tools to surveil anyone and everyone. That’s certainly my concern as well.

But he has an additional concern from a company perspective: if Microsoft is trying to do the best it can with the way it uses AI and facial recognition technology, and to whom it sells it, then there is a bit of a race to the bottom, because those with lower ethical standards are going to be the ones who end up making the sales.

It is really important for governments to grasp that this technology gives them this potential extra control over their citizens. Governments really need to get to grips with how they are going to use that.

 

 

How much do cultural differences play into how much this tech is accepted?

Culturally, Europe, because it has the GDPR and the European systems, has tended to be more attuned to the right to privacy. In other countries, the focus might be more on getting a good deal or protecting your property or your person.

In the States, large numbers of people are putting cameras on their doorbells so that they can protect their property – that’s facial recognition technology. If you’ve got cameras all over your street it may be more secure, because if anyone is burgled the police can usually track that person.

But what rights and privacy do we give up as part of that? That takes us back to our civil liberties. If you were to go and demonstrate, for example, you could once have left your house and not been surveilled between your house and the meeting point, but if you’ve got all these doorbells watching you walk down the street, that doesn’t happen anymore.

 

 

 

What are some of the biggest issues that brands should be conscious of around AI?

If you go into a store and you’re being tracked and your profile is being kept, what privacy do you have? Stores have to be very careful not to be seen to be manipulating people into buying things. What free will do you have to say no to a computer that’s really good at knowing your preferences and nudging you in a particular direction?

Do you have signs up saying you are being tracked by facial recognition and AI? And if not, how does the public react if they find out? If you’re going to start having that sort of surveillance in your stores, maybe you should be thinking more about edge computing, so data stays in the location rather than going out into the cloud. It might be a better way of convincing customers that their data is safe.

I think there are some big ethical issues, and ethical issues tend to turn into brand issues for businesses. AI is an interesting technology because it moves so quickly; it can probably destroy your brand faster than almost anything else.

It’s an area that’s really important for companies to think through. We’ve produced a toolkit for boards of directors because we found that boards didn’t understand what AI is, what it’s capable of doing and how it might be used in their companies, and therefore couldn’t really fulfil their oversight role.

We will be piloting that with boards of directors, but we’ve also heard from members of the C-suite that what we really should have done is create a toolkit for them, so that’s what we’re going to do next.

One of the things we point out is that your brand strategy is the core of who you are, and it is very much susceptible to how the customer thinks about the AI you’re using.

There are protection cameras being sold to check up on your kids when you’re at work and they’ve come home from school. The question there should be: where is the data about what my kids are doing going? Is it being monetised, ie sold, because if so my kids can then be sold things? Or could people who want their votes in the future find out all sorts of information about their childhood – and does that affect the way our democracy might go?

There is a law that says you can’t advertise to children, but suppose you have a toy that says it’s cold, and the child goes to its parents and says ‘dolly’s cold, can we buy it a coat?’ It’s difficult to know whether that’s actually part of the conversation or whether that’s advertising to kids.

In the States there’s a law that protects kids called COPPA, and educational devices or toys are exempt from it, which means that the data can be stored. A lot of people are saying ‘my toy is educational’, because once you get your toy into the educational realm you can keep the data, which is problematic.

We know that young children are incredibly easy to influence. They are forming their views about the world in a short period, so should there be some thinking about who is forming their views and how those views are being formed?

These sound like far-fetched things, but if you’ve got the data, we know from Cambridge Analytica that it’s very easy to influence people. Another project that we’re doing is with UNICEF, who are in the process of writing an addendum to the Convention on the Rights of the Child around what children’s rights are in the age of AI.

That work has now been joined by our partner LEGO. LEGO have always wanted to do right by kids in their own work, but as they incorporate AI into their toys, they want to make sure that they’re doing the right thing.

One of the things that we’ve been thinking about is how we make sure that kids still have creative play, and the time to grow their imagination, if all of their toys come loaded with AI-enabled stories.

How we play and how we learn are the fundamentals of who we are as humans. It’s really important, and it may be that our children need to be playing with AI-enabled toys from an early age, because they are all going to be working with AI-enabled devices and may be living with them when they’re old. So maybe the most important thing is for them to develop that understanding from a very early age. But equally we don’t know, and do we really want to run that test on the most vulnerable population?

 

 

Kay Firth-Butterfield, Head of Artificial Intelligence and Machine Learning, World Economic Forum, speaking in the ‘Unlocking Artificial Intelligence in the Public Sector’ session at the World Economic Forum Annual Meeting 2020 in Davos-Klosters, Switzerland, 22 January 2020. Copyright by World Economic Forum / Sikarin Fon Thanachaiary

 

 

How should businesses be preparing themselves for AI? What would you like to see them doing more of?

They need to be really informing themselves about what AI is, and how it could and should be used in their companies. If you are going to use it, then you may want to have an AI ethicist in your AI department, working with the team to help with design and development.

I started my career in AI leading an ethics advisory panel for an AI company. We had a number of high-level advisors, whom I spent a lot of time with, looking at who the company was selling to and how it was going to develop its product. These things are really important for companies, especially companies that don’t really understand the technology.

Salesforce, for example, recently created the position of chief technology ethics officer. What she is doing is not just about AI; she’s looking across all of the work they do, who they sell to and who they buy from, to ensure that materials are ethically sourced and that the people they sell to are people they actually want to be selling to.

You need a person who can do the AI, and you need a person who can make sure you don’t go badly wrong with the AI. You need somebody in there to help you think about the consequences. You can distinguish your brand by saying: we have responsible AI, we’re doing this the right way. That’s actually a brand value.

That sort of corporate responsibility with AI is, I think, going to be hugely important, especially to avoid the backlash that is likely to come. As customers become more aware of what’s happening to their data, and of the pervasive surveillance of their behaviour by companies, we’re likely to see a backlash.

I don’t want to see that backlash because I do want to see AI used to benefit us. What we need to do is work together as companies, society, academics and countries to put in the really firm foundations so technology can flourish.

Consumers don’t really have sufficient training to be making choices about what they allow in their homes and what they don’t. There is a big area of consumer education that one hopes universities or governments will begin to pick up.

Governments are in a difficult situation: they want to see the use of AI advance, because it’s very important and will benefit citizens, but equally we need to put in the protections as well. I think in 2020 we will see some law cases starting to come through, where people are going to be testing some of these devices that don’t necessarily fit into one piece of law or another. That will be interesting in itself.

 

 

 

How do companies benefit from engaging in the work you do?

Governments are thinking about AI and it’s better to be part of the conversation than have something imposed upon you.

The idea of what we do is that we want everybody to come and help us invent the particular wheel that’s necessary, and then everybody to use it, because we think policy that many people have bought into is going to be much more powerful in the long term.

The forum has over 900 company members, but there are obviously hundreds of thousands of companies that are affiliated to the forum. We’re more than happy to work with everybody – that’s what makes this rich and important.

All of these things affect everybody; every company is going to be affected. Having companies involved, helping governments to understand the problems that they have and vice versa, is extremely valuable.

 

Images courtesy of World Economic Forum

 

Want some AI inspiration? Here are 34 top retail executions.

Want to go straight to the hottest retail technologies, latest disruptive thinking and simplest new ways to lower costs and boost sales? Transform your team’s thinking using Insider Trends’ little black book. Find out how here.