
Artificial Intelligence is here but can we make it trustworthy?

12:51, 19th April 2019
Anita Riotta
Industry Snapshot

On Monday 8th April 2019, the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG) revealed ethics guidelines aimed at forming best practices for creating “trustworthy AI.” 

In fact, many argue that this lack of trust in AI systems is one of the main hurdles the technology must overcome before it can be implemented more widely.

A Forbes survey found that nearly 42% of respondents “could not cite a single example of AI that they trust”; in another survey, when respondents were asked what emotion best described their feeling towards AI, “Interested” was the most common response (45%), but it was closely followed by “concerned” (40.5%), “skeptical” (40.1%), “unsure” (39.1%), and “suspicious” (29.8%).

The Commission’s guidelines offer businesses a new roadmap for aligning their AI systems. While the guidelines are not policy, it is easy to imagine that they will serve as the building blocks for future regulation.

So, if you are interested in investing in the AI space, which today means a whole range of sectors, these are some of the characteristics a company’s AI system should have, according to the European Union:

  1. Privacy and data governance practices

A key guideline set out by the Commission centres on how companies should handle the data that drives their AI systems. The performance of a machine-learning system is wholly contingent on the quality of the data used to train it.

The guidelines argue citizens should have “full control” over their own data. On the companies’ side, they must work to keep data anonymous and, importantly, unbiased. Bias in data sets is pervasive, so the Commission recommends that training also be implemented to help identify such cases.

  2. Transparency and robustness

AI systems, according to the guidelines, must be able to substantiate the accuracy of their results by ensuring they are explainable and reproducible.

In practice, this means that if a patient is denied a medical claim based on the decision of an AI system, the corresponding medical practitioner must be able to clearly outline why and how that decision was made.

Robustness, in this case, refers to the notion that a ‘trustworthy AI’ system is one that is resilient to attack and, moreover, has fall-back procedures in place to protect users’ data should an attack occur.

  3. Accountability

The European Commission’s guidelines also call for mechanisms that ensure the outcomes of an AI system fall within a clear structure of accountability, with responsibility assigned for those results.

Such a mechanism should, however, take into consideration the “nature and weight of the activity.” For example, misreading a medical claim might be addressed with reimbursement, while discrimination resulting from unmonitored biased data should prompt a far stronger response.

So, why should this matter to you? 

There are three central consequences to glean from these guidelines.

First, this push to regulate the ethics of AI before it devolves into a more serious issue is an attempt to break the cycle, in which many companies and governments have been trapped, of being reactive rather than proactive when it comes to regulating technologies.

Moreover, pre-emptively addressing the ethical use of AI, an issue that has already begun to dominate news coverage of the technology, is also an effort to avoid the crisis of trust currently plaguing social media platforms, which failed, massively and publicly, to set such guidelines.

Second, these guidelines are emblematic of the role the European Union is carving out for itself in the AI sector. Simply put, Europe cannot compete with the United States or China on levels of investment or cutting-edge research.

The idea, then, is that Europe is betting it can become a leader in AI by being the first to set norms for its ethical use. The hope is that these higher, and increasingly entrenched, ethical standards will become a competitive edge for European companies.

Should other G7 countries align themselves with these standards (Canada is already working with France on its own panel), European companies will have had a head start on the rest of the world in ensuring their practices comply with such ethical standards.

And, in light of the reckoning that faced social media and the dystopian chatter that already exists around AI, it seems likely other large countries will follow suit. But they’ll have done so significantly later.

So, again, if you are looking to invest in companies that use AI, which today means any number of sectors, try to find out whether they have anything in place, or plans to put something in place, to assess and address the way they use AI.

Arguably, as Europe and other G7 countries begin to prioritize AI ethics regulation, the companies that were proactive, or compelled to fall in line, will be able to transition more smoothly.

Plus, wouldn’t you just rather the way your data is used and decisions are made be… ethical?


Disclaimer & Declaration of Interest

The information, investment views and recommendations in this article are provided for general information purposes only. Nothing in this article should be construed as a solicitation to buy or sell any financial product relating to any companies under discussion or to engage in or refrain from doing so or engaging in any other transaction. Any opinions or comments are made to the best of the knowledge and belief of the writer but no responsibility is accepted for actions based on such opinions or comments. Vox Markets may receive payment from companies mentioned for enhanced profiling or publication presence. The writer may or may not hold investments in the companies under discussion.
