Key takeaways

  • As the implementation of artificial intelligence accelerates, now is the time to ask questions about the path our future will follow.
  • Current AI can be biased, breach privacy expectations and learn from data that lacks diversity.
  • As businesses benefit from the complex solutions AI brings, it is crucial that they question how their AI solutions are built and used.

2017 was the year that artificial intelligence went mainstream, dominating headlines and board meetings. There’s no doubt that AI promises substantial benefits to business and society, but we must acknowledge that it comes with the potential for misuse and unintended consequences.

In the last year or so, many of today’s technology giants have begun to engage with the ethical implications of AI*.

Mark Zuckerberg and Elon Musk have been furiously debating whether AI will bring utopia or ruin1. Microsoft believes future AI programmers will take ethics classes or subscribe to an oath to do no harm, and that their programs will adhere to a code of being “fair, reliable and safe, private and secure, inclusive, transparent, and accountable.”2

Amazon, DeepMind, Google, Apple, Facebook, IBM and Microsoft formed the research group ‘Partnership on AI’ to advance understanding and discuss concerns about AI’s impact3. DeepMind, Google’s research offshoot, created its own Ethics & Society group to conduct and fund interdisciplinary research into the role of ethical standards and AI4.

Society has reached a tipping point in the development of this technology. As it goes from promise to reality, questions around AI’s use and creation are coming to the fore, and answering them is crucial. The power of AI can be safely harnessed, as long as we are paying attention.

Whether creating new AI, implementing ready-made solutions or simply entering the public dialogue, now is the time to question our assumptions and where they might lead us.

The bias of the algorithms

Neutrality was supposed to be one of the benefits of AI, but it turns out that programmers have unwittingly created biased bots. Due in part to the fact that the tech sector is predominantly white and male, unconscious biases relating to that subset of the population have crept into AI programs.

Image recognition software is a great example of how bias can lead to big problems. It’s been well documented that certain facial recognition software has trouble recognising people who aren’t white5. This ranges from blink-detecting cameras that don’t understand that some people’s eyes are not constructed the same as others6, to photo captioning that mislabels humans as animals7.

Often this is because those creating AI applications are not questioning their assumptions. For example, they may design recognition around facial features – brows, nose shape, eye shape – that can be race-specific, or fail to examine how a computer might interpret data when it doesn’t know a bias is at play.

In another example, this time of gender bias, a robot repeatedly asks a woman to smile, and guilts her when it perceives she hasn’t. It inadvertently replicates a type of verbal shaming and emotional manipulation women are regularly subjected to – one that its (presumed) male programmers had likely neither experienced nor anticipated8.

Siri, on launch, did not recognise women’s healthcare options in its search function9.

Dodgy data

Compounding the coding problem is that AI programs learn and test from data lacking in diversity. Machine learning AI programs learn from datasets,** both in training and in subsequent testing. If the datasets are made by data scientists from homogenous backgrounds, computers end up taking on their unconscious biases, be they racial, cultural, gendered or other – they literally have no data that tells them to think otherwise.
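The point can be made concrete with a toy sketch (entirely hypothetical data and labels, not any real system): a model trained on a skewed dataset can look accurate overall while failing the under-represented group entirely.

```python
# Toy illustration of dataset bias: a naive model trained on data where
# one group dominates scores well on paper, yet misclassifies every
# member of the under-represented group.
from collections import Counter

# Hypothetical training set: (group label, feature vector) pairs.
# Group "A" makes up 95% of the data; group "B" barely appears.
training_data = [("A", (1.0, 0.9))] * 95 + [("B", (0.1, 0.2))] * 5

def majority_baseline(data):
    """A naive model that always predicts the most common training label."""
    counts = Counter(label for label, _ in data)
    most_common_label, _ = counts.most_common(1)[0]
    return lambda features: most_common_label

model = majority_baseline(training_data)

# 95% "accuracy" on data shaped like the training set...
accuracy = sum(model(f) == label for label, f in training_data) / len(training_data)
print(accuracy)            # prints 0.95

# ...but the model is wrong for every group-B example it will ever see.
print(model((0.1, 0.2)))   # prints "A"
```

The model never saw enough of group B to learn anything about it – the same dynamic, at a trivial scale, as a voice assistant trained only on similarly-accented speakers.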

This applies to all sorts of data – images, audio, text and anything in between. One example can be seen in the spate of voice assistants rivalling each other for market dominance, and the diversity of the voice samples they are built on.

For those with accents, or non-native speakers, being able to access these services can prove troublesome when their assistant has learnt only from native-speaking, similarly-accented voice data. Mozilla, in recognition of this bias, has created Common Voice, an open source dataset crowdsourcing people’s voices for AI to learn from10.

When it comes to an individual business’s responsibility, PwC’s 2018 AI predictions: 8 insights to shape business strategy believes that data cleansing falls in line with “the ancient rule – garbage in, garbage out – incomplete or biased data sets will lead to flawed results.” While an AI project may not start with questions of data cleanliness, they will need to be addressed.

The good news for businesses is that data governance for one AI solution will mean a head start on the next, synthetic data is becoming a real possibility, and third party vendors are emerging to create and organize large public sources of data for AI.

Of course, when it comes to using data, whether diverse or not, questions of privacy and consent aren’t far away.

The privacy-transparency conundrum

The UK’s Royal Free NHS Foundation Trust in London found out the hard way that consent is not as simple as asking and getting a ‘yes’. It handed over 1.6 million patient records to DeepMind to test an app which would diagnose and monitor kidney injury.

While participants gave their consent for the NHS to share data for this purpose, they were not made aware of the scope of data in question – which included HIV statuses, drug, abortion and mental health histories11. The Information Commissioner’s Office (ICO) concluded that the deal had breached UK data protection law. The implied consent, that patients would be happy handing over everything if they said yes to participating, was not good enough.

David Bray, Chief Ventures Officer at the National Geospatial-Intelligence Agency, argues that with the new wave of AI programs all demanding data, context will become key when it comes to consent12. “We’ve gone from individual machines, networks, to […] looking for patterns at an unprecedented capability.”

“At the end of the day, it still goes back to what is coming from [where, and] what the individual has given consent to.”

Gaining consent for contexts that might not even have arisen yet is an undeniably nightmarish quagmire. And as most of us know when using online services, hundreds of pages of an agreement are simply not something a human will reasonably read before clicking ‘accept’.

PwC predicts there will be increasing pressure from customers – and regulators – to understand the AI they are being exposed to, as well as how it works and for what reasons. The challenge will be opening the ‘black box’ of AI when its contents are both difficult to understand, and likely valuable IP for a company.

Values and morality

Ethical questions to do with responsible AI will become more and more common. How they are handled, particularly across regions with different data laws, is an issue that must be tackled to protect people’s rights.

Of course, there is the added complication of whose rights, values and morality should be protected. As the infamous ‘trolley problem’ shows,*** making decisions on what is ethical depends largely on whose morality is doing the deciding.

Laws are different from city to state to nation and they change as society’s moral compass shifts. Even when AI does adhere to something solid in the short-term, such as federal law, it has been found to be biased or faulty when it comes to criminal conviction, sentencing and prediction of crime13. And that’s not even considering when this technology is misused by its human ‘controllers’.

As big tech grapples with these issues it would be easy to just leave them to it. But if nothing else, what these issues underline is that everyone must be involved if we are to have a hope of approaching unbiased, fair solutions.

If not this, then that

For business this means two things.

  1. If designing your own programs, think about the implications and potential biases. Question where your data comes from and what it includes or excludes. This doesn’t only apply to companies big enough to have programming teams. Google’s new AutoML service, for instance, allows businesses to train AI algorithms from scratch simply by dragging and dropping images14. “It’s important to put in place mechanisms to source, cleanse and control key data inputs and ensure data and AI management are integrated,” says Sizing the Prize. “Transparency is not only important in guarding against biases within the AI, but also helping to increase human understanding of what the AI can do and how to use it most effectively.”
  2. For those implementing ready-made AI solutions, you are not immune. Even something as seemingly innocuous as a chatbot can learn, or be taught, to be biased. Above all, businesses “need to keep the customer experience in focus,” says PwC’s Cognitive solutions lead, Brandon Stafford. “Just because a virtual agent, for example, can learn how to deal with varying service scenarios does not mean it will always treat customers with empathy when they share sensitive information with it.” Questions to ask when setting up an AI solution include: who designed the software, and how? Will it use customer data for something your customers wouldn’t feel falls within the context in which it was given? Stay vigilant and question the software you’re investing in, as well as its use by your customers or staff.

According to the 2018 AI predictions, “In all cases, stakeholders will want to know that organisations are using AI responsibly, so that it strengthens the business and society as a whole.”

“Organisations that don’t wait for policymakers to issue orders, but instead use new technology responsibly, will reduce risks, improve ROI, and strengthen their brands.”

Far from being the end of humanity, artificial intelligence does have the power to transform the world and business in amazingly positive ways.

But it’s up to us all to make sure it does.


*Artificial intelligence is a broad concept; what Zuckerberg and Musk are talking about, and what most people think of when worrying about killer robots, is more specifically referred to as artificial general intelligence – in essence, when a machine can completely think for itself and replicate what a human can do.
**Machine learning and deep learning, where computer programs teach themselves an outcome without being programmed specifically to do so, are already heavily used in AI applications. This is what’s going on under the hood when people talk about machines that get smarter, or learn to adapt to their users.
***You are driving a trolley car when the brakes go out. If you do nothing and continue on the track, you will hit five people tied to the track. If you pull a lever to take a side track, you will hit one person. What do you do? These are the kinds of questions now being applied to the programming of autonomous vehicles.


Contributor

Amy Gibbs

Dr Amy Gibbs is a manager at PwC Australia, and the global content editor for Digital Pulse.
