Ethical Artificial Intelligence (AI) has become a central conversation in the workplace.
AI and deep learning technologies are revolutionizing entire industries, from health care to finance. These technologies have the power to drive large-scale, positive change. However, if the ethical dimensions of AI are not carefully considered, they open the door to bias and flawed data.
According to a 2021 survey by FICO and Corinium of 100 AI-focused global leaders, almost 70% could not explain how specific AI model decisions or predictions are made. Only 35% said their organization made an effort to use AI in a way that was transparent and accountable.
Given the widespread lack of understanding of data models, what information they are trained on, and the risk of bias, there is much room for improvement when it comes to ensuring that AI drives more ethical and equitable products.
How Will Transparent AI Enable More Equitable Products?
During the 2022 WomenTech Global Conference, Sorcero CEO and Co-founder Dipanwita Das led a session to answer this question and zero in on:
- Why we should care about transparency and equitability in AI
- Risk levels of applying AI to decision-making in the world of business
- How to design systems that account for each risk level
- Benefits of a “human in the loop” approach
- Importance of the quality of data
- How to reduce the risk of bias and increase transparency in AI
In this article, we share the key takeaways from the session and why transparency is essential to ethical AI.
Let's take it from the top.
What is transparency?
In this context, transparency speaks to any efforts that enable people to peek in and inspect how models make inferences. This enables continuous improvement in the performance of the model. If there are any failures, knowing how and why they happened is essential to avoiding repeat occurrences.
Where is transparency essential, and why?
Transparency is beneficial across the board, but there are varying levels of risk to assess when making a decision.
Oftentimes, the decisions that matter most happen in highly regulated industries.
In the life sciences and health care, examples of decision points could be:
- Treatment recommendations
- Clinical trial design & populations
We can also find examples in the financial sphere, such as:
- Automatic profiling
- Credit scoring & underwriting
- Automated trading & decisions
Each of these impacts human beings in major ways. When we do not have transparency, or in cases where the biases are so embedded that the companies using the models aren’t in a position to handle appeals from end-users, outcomes could be unfair. They could put our health and livelihoods at stake.
When we as humans make decisions, and those decisions are wrong, there is room for others to give us feedback. From there, we can improve. In turn, hopefully, we can avoid those mistakes in the future.
Similarly, transparency in AI makes it possible to trace mistakes and improve.
Black box applications make it nearly impossible to determine why and where any mistakes were made, and thus, we cannot fix them.
What is explainability?
Explainability is the model’s ability to explain why it produces output B when receiving input A.
This needs to be done entirely in human-understandable terms. It also helps ensure that a model’s decision (e.g., a prediction or a suggestion) is fair to its users.
What is bias?
Here, bias is defined as a set of assumptions embedded in the training data. These assumptions push the model in a direction that differs from the real world in which it is applied, separating what is theoretical from what is real. This can have many real-world ramifications.
Let’s look at an example.
Suppose we apply a data set on social determinants of health, but that data set hasn’t taken a particular demographic or geography into account, even though the model is being applied in a scenario where that population is essential and included. Bias could play a dangerous role in the outcome.
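As an illustration of how such a gap can be caught early, here is a minimal sketch (with made-up group names and an assumed tolerance threshold, not a Sorcero implementation) that flags groups whose share of the training data falls well below their share of the target population:

```python
from collections import Counter

def representation_gaps(train_groups, target_shares, tolerance=0.5):
    """Flag demographic groups whose share of the training data falls
    well below their share of the target population."""
    counts = Counter(train_groups)
    total = sum(counts.values())
    gaps = {}
    for group, target in target_shares.items():
        observed = counts.get(group, 0) / total
        # Flag groups represented at less than `tolerance` of their target share
        if observed < tolerance * target:
            gaps[group] = {"observed": round(observed, 3), "expected": target}
    return gaps

# Illustrative data: training records skew heavily toward one region
train = ["region_a"] * 90 + ["region_b"] * 8 + ["region_c"] * 2
target = {"region_a": 0.5, "region_b": 0.3, "region_c": 0.2}
print(representation_gaps(train, target))
# region_b and region_c are underrepresented relative to the target population
```

A check like this won’t remove bias by itself, but it makes the gap visible before the model is deployed against the population it underserves.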
What is artificial intelligence (AI)?
When we talk about AI, we are talking about an ensemble of agents that use a variety of linguistic or statistical models to look at a data set or a problem set and derive inferences.
In this session, Das encouraged the audience to imagine an orchestra.
An orchestra can have violas, violins, pianos - an assortment of instruments that gather to create a great symphony. In this metaphor, each of these instruments can be looked at as a model that drives a particular type of inference.
The “conductor” is the human expert: the subject matter expert who can verify the performance of each of the models so as to create a “symphony” in the end. At Sorcero, our customers - life sciences experts - play this part.
>> Step behind the AI: What's behind the AI at Sorcero?
Building a “Human in the Loop” Approach: Not all decisions carry the same risk
What is “human in the loop”?
A “human in the loop” approach to using AI means subject matter experts (SMEs) verify and adjudicate the decisions that AI pushes for - before they hit the end stakeholder. In the life sciences, this is mission-critical.
It’s important to note that this is not the same as a human doing all of the work. This structure and solution architecture do not slow anything down. Instead, they take a high-accuracy model and minimize the error bar.
Our approach at Sorcero is hybrid and human in the loop. This means that when we build a product, we use both statistical and linguistic approaches to AI, as well as always retain a human in the loop.
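One common way to implement a human-in-the-loop step, sketched here with hypothetical names and an assumed confidence threshold (not Sorcero’s actual pipeline), is to let high-confidence model output pass through while routing everything else to an expert review queue:

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    doc_id: str
    label: str
    confidence: float

def route(predictions, threshold=0.9):
    """Split model output: high-confidence results pass through,
    the rest are queued for subject-matter-expert review."""
    auto, review = [], []
    for p in predictions:
        (auto if p.confidence >= threshold else review).append(p)
    return auto, review

preds = [
    Prediction("doc-1", "adverse_event", 0.97),
    Prediction("doc-2", "no_event", 0.62),
    Prediction("doc-3", "adverse_event", 0.91),
]
auto, review = route(preds)
# doc-1 and doc-3 flow through automatically; doc-2 waits for an expert
```

The threshold is the dial that balances speed against oversight: lowering it sends more decisions to the experts, raising it trusts the model more.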
➡️ See how a “human in the loop” strategy works in practice: Read the Moderna, Coherus, Medistrava, Sorcero Case Study
“It shifts the focus from just purely scoring to enhancing our customers’ workflow,” explains Das. “Casually, we like to say that we give our customers superpowers. We’re enabling them to work at greater scales and speeds with increased accuracy.”
Every decision carries its own level of risk. It comes down to striking the right balance between the involvement of human experts and AI - depending on the circumstance.
Consider the following three degrees of involvement, as demonstrated by Gartner:
- The machine is responsible for making the decisions, and the human acts as a safeguard when needed (ex: next best action for digital ordering)
- The machine makes recommendations, and the human evaluates them - they work in unison (ex: financial investments)
- The machine provides visualizations and tools to support, and the human makes all of the decisions based on the machine’s output and a variety of circumstances and criteria (ex: medical diagnoses)
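These degrees of involvement can be sketched as a simple mapping from risk level to the role the human plays; the labels below are illustrative, not part of Gartner’s framework:

```python
from enum import Enum

class Involvement(Enum):
    HUMAN_SAFEGUARD = 1  # machine decides; human steps in on exceptions
    HUMAN_EVALUATES = 2  # machine recommends; human approves each decision
    HUMAN_DECIDES = 3    # machine only visualizes; human decides everything

def involvement_for(risk: str) -> Involvement:
    """Map a coarse risk level to a degree of human involvement.
    The risk labels are illustrative, not a formal taxonomy."""
    return {
        "low": Involvement.HUMAN_SAFEGUARD,     # e.g. next best action for ordering
        "medium": Involvement.HUMAN_EVALUATES,  # e.g. financial investments
        "high": Involvement.HUMAN_DECIDES,      # e.g. medical diagnoses
    }[risk]

# The higher the stakes, the more authority stays with the human
print(involvement_for("high"))
```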
A Real-World Sorcero Example: Pharmacovigilance
Sorcero Language Intelligence models achieve very high accuracy. But if a single new side effect is missed, it can significantly impact health outcomes.
At Sorcero, we shifted the focus from scoring to enhancing customers’ workflow. This approach allows experts to identify the relevant documents faster and with less effort. Rather than replacing humans, it helps reduce human error.
💡What were the real-world results of this shift? See how Sorcero Language Intelligence is helping major global diagnostic companies obtain full approval for IVDR compliance
The Value of Reliable Data Sets
AI is only as good as the data it learns from.
In a highly technical industry like Life Sciences, it’s important to train models with domain-specific language. They should be able to understand biomedical language and identify when there is insufficient data for key populations and unmet needs.
In the diagram below, we can see that representation of diversity in genomic data is imbalanced. When this core data was collected, different populations were not adequately or proportionately represented. Worse still, this raw data set now informs what happens in the world of AI, meaning the flaws propagate downstream.
“At Sorcero, we’ve begun to place focus on unmet needs. We can help our customers understand where there is a paucity of data. Continuing my theme of music, we take an ensemble approach to avoiding this bias and increasing transparency,” shares Das.
Building a more equitable framework: 3 Tips to Reduce Bias and Increase Transparency
1. Break down complex decisions into simpler indicators
It’s much easier to understand the final output when we see what it took to get there.
Similar to when an orchestra builds its ensemble of musicians, we can better understand how the final output is coming to be when we understand how each piece works and its role within the system.
2. Increase redundancy in your AI workflow with failsafe models
A failsafe approach uses multiple models and AI techniques to serve the same decision. This provides a multifaceted view so we can make adjustments as needed. Building this redundancy into workflows helps us keep a handle on transparency.
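A minimal sketch of such a failsafe, using two toy stand-in “models” (a keyword rule and a length heuristic, purely illustrative): the decision is accepted only when the models agree, and any disagreement is surfaced for human review rather than hidden:

```python
def ensemble_decision(doc, models, agreement=1.0):
    """Run several independent models over the same input; accept the
    decision only if enough of them agree, otherwise defer to a human."""
    votes = [model(doc) for model in models]
    top = max(set(votes), key=votes.count)
    share = votes.count(top) / len(votes)
    if share >= agreement:
        return top, votes               # sufficiently agreed
    return "needs_human_review", votes  # disagreement is surfaced, not hidden

# Two illustrative stand-in "models": a keyword rule and a length heuristic
rule_model = lambda doc: "relevant" if "adverse" in doc else "irrelevant"
length_model = lambda doc: "relevant" if len(doc) > 20 else "irrelevant"

decision, votes = ensemble_decision("patient reported adverse reaction",
                                    [rule_model, length_model])
# Both models agree, so the decision is accepted; a split vote would
# instead return "needs_human_review"
```

Because every vote is returned alongside the decision, the disagreement itself becomes an auditable signal - exactly the kind of visibility that black-box applications lack.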
3. Present unequivocal and clear information to the end-user
Let’s take a look at the following output. On its own, a dry score of 0.871 is hard to understand. But a heatmap of a document, coloring the elements that led to a conclusion, is human-interpretable.
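As a toy sketch of the idea (illustrative tokens and attribution weights, not Sorcero’s method), per-token contribution scores can be rendered as a crude text “heatmap” so the reader sees which phrases drove the score, not just the number:

```python
def highlight(tokens, weights, threshold=0.5):
    """Render per-token contribution weights as a plain-text 'heatmap':
    tokens above the threshold are bracketed so a reader can see which
    phrases drove the overall score."""
    marked = [f"[{t}]" if w >= threshold else t for t, w in zip(tokens, weights)]
    return " ".join(marked)

tokens = ["patient", "reported", "severe", "headache", "after", "dose"]
weights = [0.1, 0.2, 0.9, 0.8, 0.1, 0.6]  # illustrative attribution scores
print(highlight(tokens, weights))
# patient reported [severe] [headache] after [dose]
```

The same per-token weights that produce an opaque aggregate score can, when surfaced directly, answer the end-user’s real question: *why* this conclusion?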
Why are transparency and equitability in AI so important to our mission at Sorcero?
After about 12 years working at the intersection of analytics and health data, building digital scientific platforms for the world’s largest public health effort, Das founded Sorcero in response to the challenges she faced in communicating health science.
Today, Sorcero serves leading life sciences companies in transforming how they collect and communicate their most critical scientific and clinical product data.
“We’re in the business of ensuring that the data that backs the use of a drug or product to treat patients is explainable, transparent, auditable, and focused on efficacy and use,” says Das. “This is the shift from working in public health to using AI and data science to push for more equitable use of data and making effective use of the world’s knowledge.”
💡Get all the details on Dipanwita Das at the 2022 WomenTech Global Conference here and follow us on LinkedIn and Twitter for news on upcoming events and speaking engagements.
Want to learn more about Sorcero’s commitment to doing business for good?
Read about our recent B Corp Certification and what we’re doing to hold ourselves to the highest social and environmental responsibility, legal accountability, and public transparency standards.