Understanding the ‘black box’ of artificial intelligence

Artificial intelligence (AI) is playing an increasingly influential role in the modern world, powering more of the technology that impacts people’s daily lives.

For digital marketers, it allows for more sophisticated online advertising, content creation, translations, email campaigns, web design and conversion optimization.

Outside the marketing industry, AI underpins some of the tools and sites that people use every day. It is behind the personal virtual assistants in the latest iPhone, Google Home, and Amazon Echo. It is used to recommend what films you watch on Netflix or what songs you listen to on Spotify, steers conversations you have with your favorite retailers, and powers self-driving cars and trucks that are set to become commonplace on roads around the world.

What is perhaps less widely known is that AI may also decide whether you are approved for a loan, determine the outcome of a bail application, identify threats to national security, or recommend a course of medical treatment.

And as the technology progresses and becomes ever-more complex and autonomous, it also becomes harder to understand, not just for the end users, but even for the people who built the platforms in the first place. This has raised concerns about a lack of accountability, hidden biases, and the ability to have clear visibility of what is driving life-changing decisions and courses of action.

These concerns are particularly prevalent when looking at the uses of deep learning, a form of artificial intelligence that requires minimal guidance but ‘learns’ as it goes by identifying patterns in the data and information it can access. It uses neural networks and evolutionary algorithms, in which AI is essentially built by AI, and the result can quickly resemble a tangled mess of connections that are nearly impossible for analysts to disassemble and fully understand.

What are neural networks?

The neural networks behind this new breed of deep machine learning are inspired by the connected neurons that make up the human brain. They use a series of interconnected units, or processors, and are adaptive systems that adjust their outputs based on the results they produce, essentially ‘learning’ by example as they go.

This mimics evolution in the natural world, but at a much faster pace, with the algorithms quickly adapting to the patterns and results discovered to become increasingly accurate and valid.

Neural networks can identify patterns and trends among data that would be too difficult or time-consuming to deduce through human research, consequently creating outputs that would otherwise be too complex to manually code using traditional programming techniques.

This form of machine learning is very transparent on some levels, as it reflects the human behavior of trial and error, but at a speed and scale that wouldn’t otherwise be possible. But it is this speed and scale that makes it hard for the human brain to drill down into the expanding processes and keep track of the millions of micro-decisions that are powering the outputs.
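
To make the idea of ‘learning by example’ concrete, here is a minimal sketch of a tiny neural network in Python. It is a toy illustration written for this article, not a production system: the network nudges its internal weights after every pass based on how far its outputs are from known answers, the same trial-and-error loop described above, just at a microscopic scale.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training examples: inputs and the known outputs we want the network to learn (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 units: already dozens of individual weighted connections.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

learning_rate = 0.5
for step in range(20_000):
    # Forward pass: push the inputs through the connections.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Compare against the known examples, then nudge every weight a little
    # in the direction that reduces the error (backpropagation).
    error = y - output
    d_output = error * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)
    W2 += learning_rate * hidden.T @ d_output
    W1 += learning_rate * X.T @ d_hidden

print(np.round(output, 2))  # typically close to [0, 1, 1, 0] after training
```

Even this toy network has dozens of individually tuned connections; scale that up to millions of units and the difficulty of auditing every micro-decision becomes obvious.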

Why transparency is important in artificial intelligence

What is black box AI? Put simply, it is the idea that we can see what goes into a system and what comes out, but not understand what goes on inside.

As AI is used to power more and more high-profile and public-facing services, such as self-driving cars, medical treatment, or defense weaponry, concerns have understandably been raised about what is going on under the hood. If people are to put their lives in the hands of AI-powered applications, they will want to be sure that someone understands how the technology works and how it makes decisions.

The same is true of business functions. If you’re a marketer entrusting AI to design and build your website or make important conversion optimization decisions on your behalf, then wouldn’t you want to understand how it works? After all, design changes or multivariate tests can cost or make a business millions of dollars a year.

There have been calls to end the use of ‘black box’ algorithms in government because, without true clarity on how they work, there can be no accountability for decisions that affect the public. Fears have also been raised about bias in decision-making algorithms, with a perceived lack of due process in place to prevent or protect against it.

There is also a strong case for making the accountability of AI systems, and the ability to interrogate them, a legal as well as an ethical right. If machines are making life-changing decisions, then it stands to reason that those decisions should be able to be held up to the highest scrutiny.

A report from AI Now, an AI institute at NYU, has warned that public agencies and government departments should rethink the AI tools they are using to ensure they are accountable and transparent when used for making far-reaching decisions that affect the lives of citizens.

So are all these fears over black box AI well founded, and what can be done to reassure users about what is going on behind the machines?

Work on a need-to-know basis

Many digital marketers and designers have an overall understanding of digital processes and systems, but not necessarily a deep understanding of how all of those things work. Many functions are powered by complex algorithms, code, programming or servers, yet they are still deemed trustworthy enough to have large chunks of the marketing budget invested in them.

Take SEO, for example. The formula Google uses to rank search results is notoriously secret. But agencies and professionals make careers out of their own interpretation of the rules of the game, trying to deliver what they think Google wants in order to boost rankings.

Similarly, Google AdWords and Facebook Ads have complex AI behind them, yet the inner workings of the auctions and ad positions are kept relatively quiet behind the closed doors of the internet giants. While there is an argument that such companies should be more transparent when they wield such power, this doesn’t stop marketers from investing in the platforms. Not understanding the complexities does not stop people from optimizing their campaigns; instead, they focus on what goes in and monitor the results to understand what works best.

There is also an element of trust in these platforms: if you play by the rules they do publicize and work to improve your campaigns, then their algorithms will do the right thing with your data and your advertising spend.

By choosing reputable machine learning platforms and constantly monitoring what works, you can feel confident in the technology, even if you don’t have a clear understanding of the complex workings behind it.

A lot of people will also put their trust in mass-market AI hardware without expecting to understand what’s inside the black box. A layman who drives a regular car with no real understanding of how it changes gear is no more in the dark than somebody who does not know how their self-driving car changes direction.

But of course, there is a key distinction between end users understanding something and those who hold it accountable having clarity over how and why an autonomous vehicle chose its path. Accident investigators, insurance assessors, road safety authorities, and car maintenance companies would all have a vested interest in understanding how and why driving decisions are made.

Deep learning networks can be made up of millions, billions or even trillions of connections, so auditing each connection in order to understand every decision would often be unmanageable, convoluted and potentially impossible to interpret. When attempting to address concerns over the accountability and opacity of AI networks, then, it’s important to prioritize what you need to know, what you want to understand, and why.

Deep learning can be influenced by its teachers

As we’ve seen, deep learning is in some ways a high volume system of trial and error, testing out what works and what doesn’t, identifying measures of success, and building on the wins. But humans don’t evolve through trial and error alone; there’s also teaching passed down to help shape our actions.

Eat a bunch of wild mushrooms foraged in the forest, and you’ll find out the hard way which ones are poisonous. But luckily we’re able to learn from the errors of those who’ve gone before us, and we make decisions based on imparted as well as acquired knowledge. If you read a book on mushrooms or go out with an experienced forager, you can be told which ones to avoid, so you don’t have to go through the gut-wrenching trial and error of eating the dangerous varieties.

Likewise, many neural networks allow information to be fed into them to help shape the decision-making process. This human influence should give a level of reassurance that the machines are not making all their decisions based only on black box experiences of which we don’t have a clear view.
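
The sketch below is a rough analogy in code for that ‘imparted knowledge’. It uses a simple decision tree rather than a neural network purely to keep it short, and the mushroom features and labels are invented for illustration; the point is that the model’s decisions are shaped by labelled examples handed to it up front rather than discovered by trial and error.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features for each mushroom: [red_cap, white_gills, grows_on_wood]
known_mushrooms = [
    [1, 1, 0],
    [0, 0, 1],
    [1, 0, 0],
    [0, 1, 1],
]
labels = ["poisonous", "edible", "edible", "poisonous"]

# The 'experienced forager': a model trained on knowledge passed down to it.
forager = DecisionTreeClassifier(random_state=0).fit(known_mushrooms, labels)

# Classify an unseen mushroom using the imparted examples, not trial and error.
print(forager.predict([[1, 1, 1]]))
```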

Take the AI-powered optimization platform Sentient Ascend as an example: it needs input from your CRO team, in the shape of hypotheses and testing ideas, in order to run successful tests.

In other words, Ascend uses your own building blocks and then applies evolutionary algorithms to identify the most powerful combinations and variations of those building blocks. You’re not giving free rein to an opaque AI tool to decide how to optimize your site; instead, you’re harnessing the power and scale of AI in order to test more of your ideas, faster and more efficiently.
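
To illustrate the general idea only (the sketch below is not Sentient Ascend’s actual implementation, and the page elements, options and simulated_conversion_rate() function are all invented), a basic evolutionary algorithm recombines a team’s building blocks, keeps the combinations that perform best, and breeds new candidates from them:

```python
import random

random.seed(42)

# Each 'building block' is a set of ideas supplied by the CRO team (hypothetical).
BUILDING_BLOCKS = {
    "headline": ["Save time", "Save money", "Try it free"],
    "cta_colour": ["green", "orange", "blue"],
    "hero_image": ["product", "people", "none"],
}

def random_candidate():
    return {element: random.choice(options) for element, options in BUILDING_BLOCKS.items()}

def simulated_conversion_rate(candidate):
    # Stand-in for a real measurement from live traffic: here we pretend that
    # 'Try it free' and an orange call to action happen to convert best.
    rate = 0.02
    if candidate["headline"] == "Try it free":
        rate += 0.010
    if candidate["cta_colour"] == "orange":
        rate += 0.005
    return rate + random.gauss(0, 0.002)  # noisy, like real visitor data

def crossover(parent_a, parent_b):
    # A child inherits each element from one parent or the other.
    return {element: random.choice([parent_a[element], parent_b[element]])
            for element in BUILDING_BLOCKS}

# Start with random combinations, then repeatedly keep the best half and breed from it.
population = [random_candidate() for _ in range(20)]
for generation in range(10):
    survivors = sorted(population, key=simulated_conversion_rate, reverse=True)[:10]
    children = [crossover(random.choice(survivors), random.choice(survivors)) for _ in range(10)]
    population = survivors + children

print(population[0])  # the fittest combination of the original building blocks
```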

Focus on your key results

As we’ve seen, cracking open the black box of AI tools in marketing raises the question of how many of your other marketing tools you truly understand. For performance-based professionals, AI is another tool in the belt, and the most important thing is whether it is delivering the results you need.

You should measure and test AI tools and strategies just as you would any other technology. This gives you visibility of what is working for your business.

Adopting the CRO principles of testing, measuring and learning should give you confidence that any business decisions you make based on AI are solid and reliable, even if you couldn’t stand in front of your CEO and explain the nitty-gritty of how each connected node under the hood works.
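
In practice, that discipline can be as simple as checking whether an AI-recommended change genuinely outperforms the control. The sketch below shows one common way to do that, a standard two-proportion z-test; the visitor and conversion numbers are invented for illustration.

```python
from math import sqrt, erf

def conversion_significance(conv_a, visitors_a, conv_b, visitors_b):
    # Two-proportion z-test: is the variant's conversion rate genuinely different?
    rate_a, rate_b = conv_a / visitors_a, conv_b / visitors_b
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return rate_a, rate_b, p_value

# Control page vs. the variant an AI tool recommended (hypothetical numbers).
rate_a, rate_b, p = conversion_significance(conv_a=480, visitors_a=10_000,
                                            conv_b=540, visitors_b=10_000)
print(f"control {rate_a:.2%}, variant {rate_b:.2%}, p-value {p:.3f}")
```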

But despite the opaque reputation, many AI-powered platforms do allow users to peek inside the black box. Evolutionary algorithms, which make their decisions based on trial and error, can also be a little easier to understand for those without expert knowledge of machine learning processes.

Sentient Ascend users, for example, get access to comprehensive reporting, including graphs that allow you to home in on the performance of each design candidate. This gives full visibility of the ‘thought process’ behind the algorithm’s decisions to progress or remove certain variations.

Of course, scale can be a sticking point for those who want to deep dive into the inner workings of the software. The advantage of using AI to power your optimization testing is that it can run tests at a greater volume and scale than traditional, manually built A/B testing tools, so going back through and investigating every single variation could be prohibitively time-consuming. For example, what appears to be a relatively simple layout above the fold could easily have a million different variations to be tested.
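
A quick back-of-the-envelope calculation shows why. The element names and option counts below are hypothetical, but a handful of page elements with a few options each multiplies out into well over a million combinations:

```python
from math import prod

# Hypothetical options per page element for a single above-the-fold layout.
options_per_element = {
    "headline": 12,
    "hero_image": 10,
    "cta_text": 8,
    "cta_colour": 6,
    "subheading": 6,
    "layout": 5,
    "testimonial": 4,
    "trust_badge": 2,
}

# Total layouts = product of the option counts for every element.
total_variations = prod(options_per_element.values())
print(f"{total_variations:,} possible combinations")  # 1,382,400
```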

The same applies to many other use cases for AI. If you’re using machine learning to analyze different datasets to be able to predict stock price changes, then going back in to check every data point assessed is not going to be a very efficient use of time. But it’s reassuring to know that the option is there to delve into the data should you need to audit performance or get a deeper understanding.

But this volume of data is why it’s important to prioritize the KPIs that matter most to you. And if you are measuring against your key business metrics and getting positive results, then the idea of taking a slight leap of faith as to how the black box tools deliver their results becomes much easier to swallow. Carry out due diligence on the tools you use, and you should be willing to accept accountability yourself for the results they deliver.

Making the machines more accountable

It’s the convoluted and complex nature of neural networks that can make them difficult to interrogate and understand. There are so many layers, and such a tangled web of connections leading to each output, that detangling them can seem a near-impossible task.

But many systems are now having additional degrees of accountability built into them. MIT’s Regina Barzilay has worked on an AI system for mining pathology reports, adding an extra step whereby the system pulls out and highlights the snippets of text that represent a pattern discovered by the network.

Nvidia, which develops chips to power autonomous vehicles, has been working on a way of visually highlighting what the system focuses on to make its driving decisions.
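
Neither of those methods is shown here, but a simple, generic version of the same idea is to measure which inputs a trained model leans on most. The sketch below uses permutation importance on a synthetic dataset (the model and the data are stand-ins chosen purely for illustration, not MIT’s or Nvidia’s approach): shuffle one input at a time and see how much the model’s accuracy suffers.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data: 6 input features, only a couple of which actually matter.
X, y = make_classification(n_samples=500, n_features=6, n_informative=2,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much the model's score drops.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for feature, importance in enumerate(result.importances_mean):
    print(f"feature {feature}: importance {importance:.3f}")
```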

While such steps will help offer reassurances and some clarity as to how deep learning networks arrive at decisions, many AI platforms are still some way off being able to offer a completely transparent view. It seems natural that in a world becoming increasingly reliant on AI, there will need to be an element of trust involved as to how it works, in the same way that there is inherent trust in the humans who are responsible for decision making. Jury members are not quizzed on exactly what swayed their decision, nor are their brain activities scanned and recorded to check everything is functioning as planned. Yet jury decisions are still upheld by the law in good faith.

With the evolving complexity of AI, it is almost inevitable that some of its inner workings will appear to be a black box to all but the very few who can comprehend how they work. But that doesn’t mean accountability is out of the question. Use the data you have available, identify the key information you need to know, and make the most of the reporting tools within your AI platforms, and the black box machines will not appear as mysterious as first feared.