AI AND THE INDIAN ELECTION

As India concluded the world’s largest election on June 5, 2024, with over 640
million votes counted, observers could assess how the various parties and
factions used artificial intelligence technologies—and what lessons that holds
for the rest of the world.

The campaigns made extensive use of AI, including deepfake impersonations of
candidates, celebrities and dead politicians. By some estimates, millions of
Indian voters viewed deepfakes.

But, despite fears of widespread disinformation, for the most part the
campaigns, candidates and activists used AI constructively in the election. They
used AI for typical political activities, including mudslinging, but primarily
to better connect with voters.


DEEPFAKES WITHOUT THE DECEPTION

Political parties in India spent an estimated US$50 million on authorized
AI-generated content for targeted communication with their constituencies this
election cycle. And it was largely successful.

Indian political strategists have long recognized the influence of personality
and emotion on their constituents, and they started using AI to bolster their
messaging. Young and upcoming AI companies like The Indian Deepfaker, which
started out serving the entertainment industry, quickly responded to this
growing demand for AI-generated campaign material.

In January, Muthuvel Karunanidhi, former chief minister of the southern state of
Tamil Nadu for two decades, appeared via video at his party’s youth wing
conference. He wore his signature yellow scarf, white shirt, dark glasses and
had his familiar stance—head slightly bent sideways. But Karunanidhi died in
2018. His party authorized the deepfake.

In February, the All-India Anna Dravidian Progressive Federation party’s
official X account posted an audio clip of Jayaram Jayalalithaa, the iconic
superstar of Tamil politics colloquially called “Amma” or “Mother.” Jayalalithaa
died in 2016.

Meanwhile, voters received calls from their local representatives to discuss
local issues—except the leader on the other end of the phone was an AI
impersonation. Bharatiya Janata Party (BJP) workers like Shakti Singh Rathore
have been frequenting AI startups to send personalized videos to specific voters
about the government benefits they received, asking for their vote over
WhatsApp.


MULTILINGUAL BOOST

Deepfakes were not the only manifestation of AI in the Indian elections. Long
before the election began, Indian Prime Minister Narendra Modi addressed a
tightly packed crowd celebrating links between the state of Tamil Nadu in the
south of India and the city of Varanasi in the northern state of Uttar Pradesh.
Instructing his audience to put on earphones, Modi proudly announced the launch
of his “new AI technology” as his Hindi speech was translated to Tamil in real
time.

In a country with 22 official languages and almost 780 unofficial recorded
languages, the BJP adopted AI tools to make Modi’s personality accessible to
voters in regions where Hindi is not easily understood. Since 2022, Modi and his
BJP have been using the AI-powered tool Bhashini, embedded in the NaMo mobile
app, to translate Modi’s speeches with voiceovers in Telugu, Tamil, Malayalam,
Kannada, Odia, Bengali, Marathi and Punjabi.

As part of their demos, some AI companies circulated their own viral versions of
Modi’s famous monthly radio show “Mann Ki Baat,” which loosely translates to
“From the Heart,” voice cloned into regional languages.
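
For readers curious about the plumbing, here is a minimal sketch (in Python) of
how such a speech-to-speech translation pipeline is commonly assembled:
transcribe the source speech, machine-translate the text, then synthesize a
voiceover in the target language. The file name and model choices below are
illustrative assumptions using openly available tools, not Bhashini’s actual
implementation.

# A minimal sketch of a Hindi-to-Tamil speech translation pipeline, assuming
# the open-source openai-whisper and Hugging Face transformers packages.
# Not Bhashini's implementation; file and model names are illustrative.
import whisper                      # pip install openai-whisper
from transformers import pipeline   # pip install transformers sentencepiece

# 1. Speech recognition: transcribe the Hindi audio to text.
asr = whisper.load_model("small")
hindi_text = asr.transcribe("speech_hi.mp3", language="hi")["text"]

# 2. Machine translation: Hindi text to Tamil text.
#    NLLB-200 uses FLORES-200 language codes ("hin_Deva", "tam_Taml").
translate = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="hin_Deva",
    tgt_lang="tam_Taml",
)
tamil_text = translate(hindi_text, max_length=1024)[0]["translation_text"]

# 3. Speech synthesis: a real system would feed tamil_text to a text-to-speech
#    (or voice-cloning) model to produce the Tamil voiceover; omitted here.
print(tamil_text)

A live translation system would additionally chunk the incoming audio and
stream the three stages, and the voiceover step would typically use a
text-to-speech or voice-cloning model matched to the speaker.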


ADVERSARIAL USES

Indian political parties doubled down on online trolling, using AI to augment
their ongoing meme wars. Early in the election season, the Indian National
Congress released a short clip to its 6 million followers on Instagram, taking
the title track from a new Hindi music album named “Chor” (thief). The video
grafted Modi’s digital likeness onto the lead singer and cloned his voice with
reworked lyrics critiquing his close ties to Indian business tycoons.

The BJP retaliated with its own video, on its 7-million-follower Instagram
account, featuring a supercut of Modi campaigning on the streets, mixed with
clips of his supporters but set to unique music. It was an old patriotic Hindi
song sung by famous singer Mahendra Kapoor, who passed away in 2008 but was
resurrected with AI voice cloning.

Modi himself quote-tweeted an AI-created video of him dancing—a common meme that
alters footage of rapper Lil Yachty on stage—commenting “such creativity in peak
poll season is truly a delight.”

In some cases, the violent rhetoric in Modi’s campaign that put Muslims at risk
and incited violence was conveyed using generative AI tools, but the harm can be
traced back to the hateful rhetoric itself and not necessarily the AI tools used
to spread it.


THE INDIAN EXPERIENCE

India is an early adopter, and the country’s experiments with AI serve as an
illustration of what the rest of the world can expect in future elections. The
technology’s ability to produce nonconsensual deepfakes of anyone can make it
harder to tell truth from fiction, but its consensual uses are likely to make
democracy more accessible.

The Indian election’s embrace of AI that began with entertainment, political
meme wars, emotional appeals to people, resurrected politicians and persuasion
through personalized phone calls to voters has opened a pathway for the role of
AI in participatory democracy.

The surprise outcome of the election, in which the BJP failed to win its
predicted parliamentary majority and India returned to a deeply competitive
political system, especially highlights the possibility for AI to play a
positive role in deliberative democracy and representative governance.


LESSONS FOR THE WORLD’S DEMOCRACIES

It’s a goal of any political party or candidate in a democracy to have more
targeted touch points with their constituents. The Indian elections have shown a
unique attempt at using AI for more individualized communication across
linguistically and ethnically diverse constituencies, and making their messages
more accessible, especially to rural, low-income populations.

AI and the future of participatory democracy could make constituent
communication not just personalized but also a dialogue, so voters can share
their demands and experiences directly with their representatives—at speed and
scale.

India can be an example of taking its recent fluency in AI-assisted
party-to-people communications and moving it beyond politics. The government is
already using these platforms to provide government services to citizens in
their native languages.

If used safely and ethically, this technology could be an opportunity for a new
era in representative governance, especially for the needs and experiences of
people in rural areas to reach Parliament.

This essay was written with Vandinika Shukla and previously appeared in The
Conversation.

Tags: artificial intelligence, deepfake, democracy, India

Posted on June 13, 2024 at 7:02 AM • 13 Comments

COMMENTS

What Price common sense? • June 13, 2024 9:25 AM

@Bruce Schneier
@ALL

“But Karunanidhi died in 2018. His party authorized the deepfake.”

I do not know about Indian law but that is clearly “an act for gain” which is
usually considered the first step of an “intent to commit fraud”.

The next step is some form of “unlawful act,” be it a breach of a civil or
private duty.

I suspect that “his party” did not have a signed contract with either
Karunanidhi or his appointed Estate.

Thus the word “authorized” is misleading without qualification and in effect
hides what many would consider and probably is “Fraud”.

What Price common sense? • June 13, 2024 9:42 AM

@Bruce Schneier
@ALL

Was the use of AI a failure?

It rather depends on how you look at it.

One aspect to consider is that the use of AI increasingly causes switch-off or
disconnect by electors from the politicians, parties, and process.

In the West / First World, especially WASP nations, voter turnout was seen for
decades as dropping, so it was assumed “the young were disconnecting from the
process”.

The use of AI can be argued, from this election, to be causing a disconnect not
from the process but from “the parties and their politicians”.

If we take a long-term view, it could be that the use of AI will cause the
breakdown not of the process but of the major parties and large-area
politicians, in favour of real face-to-face contact with the more local or very
local politicians.

There are quite a few who would see this as a net benefit.

Eddie Bernays • June 13, 2024 10:18 AM

Has anyone noticed that the one option Mr. Schneier doesn’t consider is that
perhaps society would be better off without AI?

Every essay, every talk, every carefully crafted talking point seems to pivot on
the unspoken assumption that AI will somehow not make us lazy, stupid,
traceable, and dangerously reliant on tech?

Has someone gotten to Mr. Schneier? He seemed so concerned about mass
surveillance and yet oddly his skepticism is notably absent in this case?

Winter • June 13, 2024 10:40 AM

@Eddie Bernays

> Has anyone noticed that the one option Mr. Schneier doesn’t consider is that
> perhaps society would be better off without AI?

It has been considered that humanity would have been better off without
agriculture, gunpowder, or nuclear weapons. But none of these can be uninvented.

It is simply not possible to uninvent AI. Someone somewhere will recreate it.

echo • June 13, 2024 10:59 AM

If I was forced to take a position this article makes me AI hostile. I’ve seen
too much political damage done with unchallenged puffery in a race to sunny
uplands to trust this article one little bit. When my intuition starts twitching
and I feel uncomfortable with something I’ve learned the hard way not to ignore
it. I’m more inclined to lean towards wanting to see AI banned after reading
this article. It’s a big hard “NO” until I know what I’m dealing with. There’s
just too much instability and BS floating about to add AI to the mix. NO. Just
NO.

There’s been a run of articles I’m not happy with. This latest one shilling AI
without any political understanding in my mind is reputation damaging. I’ve
noticed other tech types pushing their beaks into the soft sciences, so it’s not
just a one-off. In some quarters there is trampling over political scientists,
and gender studies experts, and sociologists. It’s like someone is trying to
stage a tech coup. Given STEM and security employment levels for women still
only hover around 30% and workplace bad habits still exist after the past 20
years I’m feeling even more twitchy about it. It’s more like a “bro” coup.

Eddie Berneys • June 13, 2024 11:09 AM

@winter

It’s not about inventing, it’s about using…

Of course the road to extinction is paved by those who insist that “There Is No
Alternative, you cannot un-invent something, resistance is futile, blah blah
blah… ”

AI in the best case scenario will render the average person incapable of
independent thought and turn billionaires into monarchs. In the worst case it
will destroy us as retired experts from Google have warned.

Yet Mr. Schneier has carefully framed his narrative to eschew opting out.

Winter • June 13, 2024 11:19 AM

@Eddie Berneys

> It’s not about inventing, it’s about using…

Historically, a ban on using a useful tool has never worked.

Let’s face it, even chemical and biological weapons that are so horrible that
humanity has decided to ban them and never use them are still stockpiled. We
even know people that have used them in the recent past.

Winter • June 13, 2024 2:10 PM

@Eddie Bernays

> According to your argument, it would appear that people are hairless monkeys
> who simply cannot help themselves when it comes to brandishing about the means
> of their own demise.

History largely shows this to be the case.

Borodin • June 13, 2024 2:44 PM

Firstly, I agree with the top comment that I cannot imagine these dead people
consented. At best, their families did. We can only hope they were familiar
enough to know the will of the deceased. I would not dare take responsibility
for authorizing such a thing, not even if the deceased were my twin brother.
I will certainly look into including some legalese in my will to prevent this
kind of abuse of my likeness and voice after my passing.

Secondly, I am amazed that Mr. Schneier chose to omit the concerns one might
have when considering the usage of AI in elections. Perhaps Your intention was
to leave such matters to the scholars of the respective disciplines? Yet the
last paragraph seems to rule out that possibility.

One commenter mentioned voter disconnect. This thought occurred to me as well:
If they use AI to reach voters, how will voters’ concerns reach them? Will the
AI create a list of bullet points so that some underpaid aide can discard them?
Will our politicians still feel connected to the people they represent?
Of course, no human being can personally interact with a billion people
while also crafting legislation and running a nation. Is that not why we formed
parties? If the people I trust all trust someone, I will likely trust them as
well.
Using AI to send targeted political ads to every voter will finally create the
use case that made me invested in a right to privacy: voter manipulation. Using
the copious amounts of data collected by Facebook and co., people will receive
just the content and promises they want to hear.
Politics may become a game of reach: Who can post ads that reach the most
people? Who can aggregate the best data?
Manipulating people using their emotions is easy. And now it will be convenient
as well.

If You wish for a democratic use of AI, perhaps give its power to the people.
Right now, most of them have neither the resources nor the knowledge to make use
of it.
Well, those are my two cents.

echo • June 13, 2024 3:22 PM

“If” is doing a lot of heavy lifting.

What Price common sense? • June 13, 2024 3:25 PM

@Eddie Bernays

There are a couple of sayings / truisms:

“You can’t unring the bell.”

“On balance there is always a minimum of two sides.”

So in the normal course of events anything that gets invented will remain until
something else replaces it for some reason.

But any invention, no matter how narrow the scope, will have more than one use.
A knife cuts food, and you can stab/cut someone or something.

Whether a use is good or bad is a choice made by a usually uninvolved observer,
either at the time of the use or later.

Two or more observers may see the same use as good or bad and, importantly, for
different reasons.

Personally I see the current AI as being put to use that is both good and bad
under different circumstances. However my overriding view is that it is a con
game like snake oil being sold to gullible people.

Whilst people talk about jobs being lost to AI, this is no different to any
other evolutionary process.

Most jobs are actually not worth doing at the best of times, and in the middle
between necessary and desirable there is a massive chunk of “make-work” that,
unbelievably, people have become specialists in…

So getting rid of those jobs mostly will have only transitory effects no matter
how painful they are at the time.

The reason makework actually exists is “mental health” and “social cohesion”.

As the truism has it:

“The devil makes work for idle hands”

And in reality that is what makework is for: keeping otherwise idle hands out of
trouble and giving people a place in a hierarchy so they can work their way up.
That is, they get a sense of place in what they view as society.

This is what AI will unfortunately disrupt the most and where its real danger
exists.

Could it destroy society as we currently know it? Yes, and it most certainly
will. Will society survive? Yes, but it will be a different society. So no, I
don’t think AI will be existential in the way, way too many are portraying it.

Look at it this way:

Is a car better than a horse and cart?

The answer rather depends on whether you are a horse or a human. Humans had the
same issue with other forms of mechanisation: powered looms gave us the word
“sabotage,” from the French word “sabot,” a wooden shoe or base of a boot that
traditional loom users threw into the powered looms. Hence “sabotage” really
does mean “putting the boot in”.

A study of the past three hundred years of industrial/mechanical history will
give you a feeling for what AI could do, and what it is most likely to do. The
fastest and safest way to get the transition past is to deal with the 1% of the
1%: basically, do to them what they have done to society, which is “asset
strip” them, and put it to more worthwhile use.

cls • June 14, 2024 12:38 AM

@Eddie Bernays

Re: Has someone gotten to Mr. Schneier? He seemed so concerned about mass
surveillance and yet oddly his skepticism is notably absent in this case?

He’s been replaced by an AI Bot!

Scaler • June 14, 2024 3:13 PM

@Borodin “Manipulating people using their emotions is easy.” I’m not saying I
agree, but if I did, that strikes me as a challenge that should be solved.
Therein may be the actual problem and solution.
