
Who is responsible for the ethics of AI?

By Paula Gilbert, ITWeb telecoms editor.
Johannesburg, 04 Dec 2018

The rise of machine learning (ML) and artificial intelligence (AI) brings with it questions of who is responsible for making sure these powerful technologies are used for good and not evil.

Amazon Web Services (AWS) CEO Andy Jassy is cognisant that the company's ML and AI services could be used by some to do bad things, but he thinks the potential for good outweighs the bad.

"Even though we have not had a reported abuse case, at least with our machine learning services, we are very aware people will be able to do things with these services, much like they can do with any technology, that would do harm in the world," he told a press briefing at the company's annual re:Invent conference in Las Vegas last week.

"If you look at the advent of artificial intelligence and machine learning, and you look at the potential for the problems it can solve in humankind in the world, it's very large. We already see that with the ability to help fight human trafficking, to reunite missing kids with their parents, for education services and security services, and there is a huge amount of good happening in the world right now based on using these types of machine learning services."

AI is big business for a lot of tech companies: earlier this year, Gartner said the global business value derived from AI was projected to total $1.2 trillion in 2018, an increase of 70% from 2017. Gartner forecasts that AI-derived business value will reach as high as $3.9 trillion in 2022.

AWS has seen a lot of success from its managed machine learning service, Amazon SageMaker, which launched in November 2017 and has attracted over 10 000 customers in its first year.

This year, the company continued to build on SageMaker's success, launching 13 new ML services and capabilities last week in Las Vegas, after releasing 200 significant machine learning and AI services and features over the past year.

Jassy said that with any technology there is the potential for some to use it irresponsibly or unethically.

"If you think about, over the past few years, all of the evil, surreptitious things that people have done with computers or servers, and yet we would live in a very different world if people didn't have computers or servers."

He said that to combat this, it is important to set the right standards so that people use technology responsibly.

"We have our own acceptable use policies and terms and conditions where we won't allow people to use any of our services if we think they are violating people's civil liberties or constitutional rights. So we set a standard that if you are going to violate people's civil liberties or constitutional rights then you can't use our servers, and the same is true of our machine learning services as well."

Amazon Web Services CEO Andy Jassy.

Balancing act

Jassy believes that using AI ethically comes down to a combination of things.

"Firstly, I think the algorithms that different companies produce have to constantly be benchmarking and refining so that they are as accurate as possible, and then I also think it has to be clear how you recommend them using those services.

"For instance, with facial recognition, which is a topic that people are very interested in, if you are using facial recognition for something like matching celebrity photos, then it's maybe acceptable to have a confidence level or threshold that is around 80%, but if you are using facial recognition for law enforcement or something that can impact people's civil liberties, then you need a very high threshold. We recommend that people use at least a 99% threshold for things like law enforcement, and even then what we say is it shouldn't be the sole determinant in making a decision.

"There should be a human [involved] and they should make a number of inputs. The machine learning algorithm could be one of the inputs that you use, but only when you are over 99% confidence levels, but overall it has to be a responsible decision by a human being using multiple inputs."

Although AWS gives recommendations on how its tech should be used, Jassy doesn't think it's entirely up to companies like AWS to police AI or ML ethics in society.

"At the end of the day, we build services which we assure are secure, which work right, are accurate and will give you confidence levels and then our customers are building applications on top of these services. They get to make the decisions about how they want to use our platform. We give a lot of guidance and if we think people are using them in a way that is violating our terms of service we will suspend and disable people from using it."

He believes individual nations have to decide what standards, regulations or guidance they are going to give the companies that use these types of technologies.

"I think that if society as a whole or countries as a whole want to make rules around how things must be used then they should make those rules and we will participate and we will abide by them. We will continue to give guidance and best practices, and provide benchmarking and then also try and provide features which allow people to manage it in a responsible way.

"We are having very active conversations with a lot of countries and governments, and we are trying to participate in the education side as well. We will work with governments, but we don't control the laws in the different lands. I think some governments are more interested in having that collaboration with us than others. We are interested in participating, but at the end of the day, the countries are going to set the rules around how this technology is going to be used," he concluded.
