Submitted URL: https://venturebeat.com/2022/01/29/bias-in-ai-is-spreading-and-its-time-to-fix-the-problem/
Effective URL: https://venturebeat.com/datadecisionmakers/bias-in-ai-is-spreading-and-its-time-to-fix-the-problem/




BIAS IN AI IS SPREADING AND IT’S TIME TO FIX THE PROBLEM

Loren Goodman
January 29, 2022 8:17 AM

Image Credit: kentoh/Getty




This article was contributed by Loren Goodman, cofounder and CTO at InRule
Technology.

Traditional machine learning (ML) does only one thing: it makes a prediction
based on historical data.


Machine learning starts with analyzing a table of historical data and producing
what is called a model; this is known as training. After the model is created, a
new row of data can be fed into the model and a prediction is returned. For
example, you could train a model from a list of housing transactions and then
use the model to predict the sale price of a house that has not sold yet.
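The train-then-predict workflow described above can be sketched in a few lines. The rows, field names, and one-feature least-squares fit below are illustrative stand-ins, not a real ML library:

```python
# A minimal sketch of training a model on historical housing
# transactions and then predicting the price of an unsold house.

def train(rows):
    """Fit price ~ square_feet by ordinary least squares."""
    xs = [r["square_feet"] for r in rows]
    ys = [r["sale_price"] for r in rows]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return {"slope": slope, "intercept": my - slope * mx}

def predict(model, row):
    return model["intercept"] + model["slope"] * row["square_feet"]

# Historical transactions: the training data.
history = [
    {"square_feet": 1000, "sale_price": 200_000},
    {"square_feet": 1500, "sale_price": 290_000},
    {"square_feet": 2000, "sale_price": 410_000},
]
model = train(history)

# A new row of data: a house that has not sold yet.
print(round(predict(model, {"square_feet": 1800})))  # 363000
```

Real systems use far richer features and libraries, but the contract is the same: historical rows in, a model out, predictions on new rows.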

There are two primary problems with machine learning today. First is the “black
box” problem. Machine learning models make highly accurate predictions, but they
lack the ability to explain the reasoning behind a prediction in terms that are
comprehensible to humans. Machine learning models just give you a prediction and
a score indicating confidence in that prediction.
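As a rough illustration of that "black box" contract, here is a hypothetical scorer: the caller gets back only a decision and a confidence score, with no reasoning attached. The rule and field names are invented; in a real model, thousands of opaque learned weights would play this role:

```python
# Hypothetical model interface illustrating the "black box" problem:
# a prediction and a confidence score come back, but no explanation.

def score_loan_application(applicant):
    risk = 0.9 if applicant["income"] < 30_000 else 0.2
    decision = "deny" if risk >= 0.5 else "approve"
    confidence = max(risk, 1 - risk)
    return decision, confidence  # no reasoning comes back with these

print(score_loan_application({"income": 25_000}))  # ('deny', 0.9)
```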


EVENT

Intelligent Security Summit On-Demand

Learn the critical role of AI & ML in cybersecurity and industry specific case
studies. Watch on-demand sessions today.

Watch Here

Second, machine learning cannot think beyond the data that was used to train it.
If historical bias exists in the training data, then, if left unchecked, that
bias will be present in the predictions. While machine learning offers exciting
opportunities for both consumers and businesses, the historical data on which
these algorithms are built can be laden with inherent biases.

The cause for alarm is that business decision-makers have no effective way to see the biased practices encoded into their models. For this reason, there is an urgent need to understand what biases lurk within source data. In concert with that, human-managed governors need to be installed as a safeguard against actions resulting from machine learning predictions.
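One possible shape for such a governor, sketched under illustrative assumptions: act automatically only on high-confidence, non-adverse predictions, and route everything else to a human reviewer. The threshold and labels here are invented for illustration:

```python
# A "governor" sitting between a model's prediction and a real-world
# action: low-confidence or adverse predictions go to a human instead
# of being executed automatically.

def govern(prediction, confidence, auto_threshold=0.95):
    if prediction == "deny" or confidence < auto_threshold:
        return "human_review"
    return "auto_approve"

print(govern("approve", 0.99))  # auto_approve
print(govern("approve", 0.80))  # human_review
print(govern("deny", 0.99))     # human_review
```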

Biased predictions lead to biased behaviors, and as a result, we “breathe our own exhaust.” We continually build on biased actions resulting from biased decisions, creating a cycle that compounds over time with every prediction. The earlier you detect and eliminate bias, the faster you mitigate risk and expand your market to previously rejected opportunities. Those who are not addressing bias now are exposing themselves to myriad future unknowns related to risk, penalties, and lost revenue.
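A toy simulation can make the compounding visible. Assume each round’s decisions become the next round’s “historical data,” widening an initial approval-rate gap between two groups; the dynamics below are invented for illustration, not a real training loop:

```python
# Toy feedback loop: an initial approval-rate gap between groups A and
# B widens each round as biased decisions feed back into the data,
# until group B is shut out entirely.

approval_rate = {"A": 0.50, "B": 0.40}
for _ in range(5):
    gap = approval_rate["A"] - approval_rate["B"]
    approval_rate["B"] = max(0.0, approval_rate["B"] - 0.5 * gap)

print(round(approval_rate["B"], 3))  # 0.0 -- the gap compounded to exclusion
```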


DEMOGRAPHIC PATTERNS IN FINANCIAL SERVICES

Demographic patterns and trends can also feed further biases in the financial services industry. There’s a famous example from 2019, when web programmer and author David Heinemeier Hansson took to Twitter to share his outrage that Apple’s credit card offered him 20 times the credit limit of his wife, even though they file joint tax returns.

Two things to keep in mind about this example:

 * The underwriting process was found to be compliant with the law. Why? Because there are currently no U.S. laws around bias in AI; the topic is seen as highly subjective.
 * To train these models correctly, historical biases need to be included in the algorithms; otherwise, the AI won’t know why it is biased and can’t correct its mistakes. Doing so fixes the “breathing our own exhaust” problem and provides better predictions for tomorrow.




REAL-WORLD COST OF AI BIAS

Machine learning is used across a variety of applications impacting the public.
Specifically, there is growing scrutiny with social service programs, such as
Medicaid, housing assistance, or supplemental social security income. Historical
data that these programs rely on may be plagued with biased data, and reliance
on biased data in machine learning models perpetuates bias. However, awareness
of potential bias is the first step in correcting it.



A popular algorithm used by many large U.S.-based health care systems to screen
patients for high-risk care management intervention programs was revealed to
discriminate against Black patients as it was based on data related to the cost
of treating patients. However, the model did not take into consideration racial
disparities in access to healthcare, which contribute to lower spending on Black
patients than similarly diagnosed white patients. According to Ziad Obermeyer,
an acting associate professor at the University of California, Berkeley, who
worked on the study, “Cost is a reasonable proxy for health, but it’s a biased
one, and that choice is actually what introduces bias into the algorithm.”

Additionally, a widely cited case showed that judges in Florida and several
other states were relying on a machine learning-powered tool called COMPAS
(Correctional Offender Management Profiling for Alternative Sanctions) to
estimate recidivism rates for inmates. However, numerous studies challenged the
accuracy of the algorithm and uncovered racial bias – even though race was not
included as an input into the model.


OVERCOMING BIAS

The solution to AI bias in models? Put people at the helm of deciding when to take, or not take, real-world actions based on a machine learning prediction. Explainability and transparency are critical for allowing people to understand AI and why the technology makes certain decisions and predictions. By surfacing the reasoning and factors behind ML predictions, algorithmic biases can be brought to light, and decision-making can be adjusted to avoid costly penalties or harsh feedback on social media.
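For a simple model class such as a linear model, that kind of explanation is straightforward to produce: each feature’s contribution is its weight times its value, which can be ranked and shown to a reviewer. The weights and features below are hypothetical:

```python
# Sketch of surfacing the factors behind a single prediction for a
# linear model: per-feature contributions, ranked by magnitude.

weights = {"income": 0.004, "debt": -0.01, "years_employed": 0.5}
intercept = 10.0

def explain(applicant):
    contributions = {f: w * applicant[f] for f, w in weights.items()}
    score = intercept + sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, ranked

score, factors = explain({"income": 50_000, "debt": 12_000, "years_employed": 4})
print(round(score, 1))  # 92.0
print(factors[0][0])    # income -- the largest factor in this decision
```

More complex models need dedicated attribution techniques, but the goal is the same: expose which factors drove a prediction so that biased ones can be spotted.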



Businesses and technologists need to focus on explainability and transparency
within AI.

There is limited but growing regulation and guidance from lawmakers for
mitigating biased AI practices. Recently, the UK government issued an Ethics,
Transparency, and Accountability Framework for Automated Decision-Making to
produce more precise guidance on using artificial intelligence ethically in the
public sector. This seven-point framework will help government departments
create safe, sustainable, and ethical algorithmic decision-making systems.

To unlock the full power of automation and create equitable change, humans need
to understand how and why AI bias leads to certain outcomes and what that means
for us all.

Loren Goodman is cofounder and CTO at InRule Technology.



© 2023 VentureBeat. All rights reserved.