
MLEX® EXCLUSIVE:
CLEARVIEW AI FOUNDER HOAN TON-THAT ON WEATHERING A GLOBAL REGULATORY STORM,
AND THE ROAD AHEAD

Even as the database it uses to train its algorithms surpasses 40 billion
faces, Clearview AI isn't going back to doing business in Europe, the UK,
Canada or Australia, where the controversial facial-recognition startup has
faced regulatory opposition, its founder and chief executive told MLex® in an
exclusive interview. In the US, however, Clearview has put many of its
thorniest legal problems behind it, according to Hoan Ton-That.

Tune in to the interview below—or keep scrolling for key insights from MLex
correspondents Mike Swift, James Panichi and Ryan Cropp.





CLEARVIEW AI HAS REACHED 'TURNING POINT' WITH US LEGAL CHALLENGES, CEO SAYS, BUT
WON'T RETURN TO EU, UK

24 January 2024
By Mike Swift

The controversial US facial-recognition startup, which has spent much of the
last three years in court battles in the US and in a face-off with regulators in
the EU, UK, Canada, and Australia, is in a good place in terms of regulation,
having resolved most of its legal troubles in its home country and built a
roster of US law enforcement customers, its founder and chief executive told
MLex.

“We've been in the eye of the storm,” Hoan Ton-That said in an extended
interview. “I think today, we've gotten through a lot of misconceptions on a
media-perception level about how facial recognition is really used. And I also
think we've, you know, achieved a turning point now with our litigation.”

Ton-That can say that because Clearview recently reached a settlement in
class-action litigation alleging the company violated the Illinois Biometric
Information Privacy Act, perhaps the strongest US biometric privacy law.
Clearview also won a ruling in December in litigation brought by Vermont’s
attorney general, in which a judge, citing benefits such as identifying
rioters who attacked the US Capitol in 2021, decided she couldn't rule that
the privacy cost of Clearview's scraping of Vermonters’ faces from the
Internet outweighed those benefits.

The proposed Illinois settlement in particular “is a really big deal for us,
because there's potentially a lot of liability there.  And it's a very strong
statute, as everyone knows in the privacy world,” Ton-That said.

With European lawmakers close to finalizing the AI Act, a comprehensive
892-page regulation that restricts facial recognition and other uses of AI,
Ton-That said he doesn’t see Clearview returning in the foreseeable future to
Europe, Australia or other countries where it has faced regulatory challenges,
although he said he hears frequently from law enforcement agencies in the UK
and Europe that want to use its services.

“I don't see anything changing. And we have the EU AI Act and other things like
[the General Data Protection Regulation], where I think Europe is taking a
different view on not just data privacy, but AI and technology in general,” he
said.

“We don't do business in the EU, Australia, the UK or Canada,” Ton-That said. He
said the ruling Clearview won this fall from a UK appeal court against the
country’s Information Commissioner’s Office “basically affirmed what we believed
all along — that we're not subject to jurisdiction by these foreign countries,
as we don't do business there.”

Ton-That said it was US federal agencies, particularly the US Department of
Homeland Security and the FBI, that helped create the demand for Clearview’s
facial recognition algorithms beyond the US.

Homeland Security and the FBI “showed it to law enforcement from all around the
world. And so we had requests for demo accounts from Australia, Canada, the UK
and parts of the EU, and they were saving children with it,” Ton-That said. “So,
some of these UK agencies, it's heartbreaking for them to e-mail us and say,
hey, you know, we really would love to use you for a case.”

The focus on the US is serving Clearview well, Ton-That says, and even though
more than a dozen US states have passed privacy laws, those new laws generally
include an exemption for law-enforcement uses that allows the company to
provide its services.

“We've been growing quite quickly, especially in our state and local business
now, in the US,” he said. “The federal business is really growing as well. It's
also at an inflection point where a lot of these US federal government customers
have gone through their privacy policies, training policies around facial
recognition.”

Ton-That declined to share the number of US law enforcement agencies using its
services. But he said Clearview’s meetings with regulators, state attorneys
general, members of Congress and the media about benefits such as identifying
Jan. 6 seditionists, Russian war criminals in Ukraine and other anti-crime
uses are winning it support.

“That, I think, has put a lot of people who were skeptical about the
technologies and the intentions of the company really at ease,” he said. “So,
yeah, I think it's still just really the beginning for us as a company.”

MORE FACES

With Clearview the subject of a bestselling book by New York Times reporter
Kashmir Hill, however, there are still major concerns from privacy advocates. In
November, Consumer Watchdog asked California’s attorney general and the
California Privacy Protection Agency to investigate whether Clearview violated
the right to privacy guarantee in the state constitution.

“Clearview AI’s facial recognition software represents a clear and present
danger to our societal norms and our privacy,” Consumer Watchdog said, adding
that Clearview is illegally collecting the facial data of children and is not
complying with the provisions of California’s state privacy law that allow
consumers to opt out of the collection and sale of personal data.

Both the CPPA and the California Department of Justice declined to confirm or deny
whether they are investigating the consumer advocate’s allegations.

Ton-That said in response to Consumer Watchdog that Clearview is “fully
compliant with California law, including law relating to the data of minors, and
has completely and effectively opted-out many Californians.”

One key criticism of facial-recognition technology is that it is biased
because it's more likely to misidentify racial minorities and women. The
relentless growth of the database Clearview AI uses to train its algorithms,
from 3 billion facial images in 2020 to more than 40 billion faces, feels
ominous to those who fear a dystopian future of Orwellian screens that watch
people instead of the other way around.

Ton-That argues, however, that Clearview’s vast and growing catalogue of faces
makes people safer, and will ultimately result in police making fewer mistakes
and having fewer interactions with people on the basis of race.

“The reason why we've got more and more accuracy is because we use more and more
training data,” he said. Like other AI technologies such as OpenAI’s ChatGPT, “a
lot of the advances come from, 'let's try 10 times the amount of training data
to see if that accuracy goes up.' And that typically works really well. And
that’s accuracy across all demographics, right?”

Clearview’s app is now accurate in identifying a person 99.85 percent of the
time, which “is much better than the human eye,” Ton-That said.

UNIQUE PATH

Ton-That says he’s not in the business for money but for impact. An accomplished
guitarist whose mother wanted him to become a professional musician, he grew up
in Australia but left for Silicon Valley at age 19 after one year of university.
“I didn’t really enjoy it,” he said of school.

He spent eight years in Silicon Valley and had some successes developing games.
But after a while he became disillusioned about the Valley mantra that software
startups are “changing the world.”

“I was working on consumer apps and games and things like that. And you could
look at those and say, ‘Oh, wow, people have played, you know, five lifetimes of
your game.’ And it's cool, but it's not satisfying,” he said. “A lot of people
talk about changing the world in the Bay Area, but not many of them really,
actually mean it. So over time, you do get a little jaded there: ‘Oh, that guy
is changing the world.’ No, he really wants a pay day.”

Ton-That got into facial-recognition technology with the idea of selling it
for automated building security, not crime-fighting. He said a former police
officer suggested he provide the Clearview app to police departments.

“And you know, they started solving crimes,” he said. “And that's when I
realized, yeah, we're unironically changing the world. And I think that it's a
rare thing to have in a startup company."

SCRAPING

Asked if he thinks people should be able to have an expectation of privacy or
anonymity in public, Ton-That said he believes the legal answer to that question
is no, but he acknowledged “on human-feel side, it's a little different.” He
said it is a question about how facial-recognition technology is used.

“I think law enforcement and government, especially with the way the Fourth
Amendment [of the US Constitution] is done, and other protections are in place,
they use the technology when needed and judiciously,” he said.

One of the key differences between Clearview AI and other facial-recognition
technologies, Ton-That said, is Clearview’s model of scraping billions of
facial images from payment apps such as Venmo and social media platforms such
as Facebook and Instagram. Automated scraping of data remains a legal gray
area, with a federal judge in California handing down a ruling just this week
that one major scraping operation targeting Meta Platforms was legal.

“I mean, the simple answer is, it's public, right?” Ton-That said of Clearview’s
scraping. “I do think Venmo made a mistake, honestly, by having profiles be
public by default.”

Tuesday’s decision by US District Judge Edward Chen found that Bright Data, an
Israeli data-scraping company, acted legally in scraping Meta platforms such
as Facebook and Instagram because it wasn’t doing the scraping through
logged-in accounts in violation of Meta’s terms of service.

Because Clearview didn’t do its scraping of Venmo, Facebook or other platforms
“while creating an account to break the terms of service,” but only through what
the industry and Chen referred to as logged-out crawling, “we think we're fine there
as well,” Ton-That said.





CLEARVIEW AI’S HANDS HAVE BEEN LEFT TIED BY AUSTRALIAN PRIVACY RULING,
CO-FOUNDER SAYS

24 January 2024
By James Panichi, Ryan Cropp and Mike Swift

The founder of controversial facial-recognition company Clearview AI has
pushed back against an order from the privacy watchdog in his native Australia
to purge his company’s global database of all locally gathered images,
suggesting that the ruling is inconsistent and impossible to comply with.

In an exclusive interview with MLex, Clearview Chief Executive Officer Hoan
Ton-That said it would be impossible to expunge photos of Australian citizens
from Clearview’s database because there’s no way of determining the nationality
of people appearing in a photo.

Ton-That also said that if the 2021 ruling by the Office of the Australian
Information Commissioner, or OAIC, only applied to images scraped from servers
located in Australia, then it wouldn’t cover scraped images of Australians
that had been uploaded to US-based platforms.

“Australian people could be on Instagram or Facebook, where it is public, and
those servers are hosted in the United States,” Ton-That told MLex.

“So, there are a lot of interesting questions about how these things could apply
cross-jurisdictionally,” he said. “But also, for us, in practice — how do we
even know? We are just collecting public data that comes to us and it’s only
used in the context of solving crime.”

Ton-That’s comments suggest that Clearview is grappling with the OAIC’s November
2021 determination, which ordered the image-scraping company to end the trial of
its technology with Australian police forces and to delete all images collected
“in Australia”.  

In his interview with MLex, Ton-That said the OAIC’s determination, along with
the subsequent review of the decision by Australia’s Administrative Appeals
Tribunal, compelled the company to purge its database of all images of
Australian citizens.

This, Ton-That said, is simply impossible for Clearview to achieve.

“I would say that there's no way for us, from a certain photo — imagine you're
at a party and there's a group photo with you and your friends on Instagram — to
know if that particular person or that face is an Australian citizen or
resident,” he said.  

“So, unless we have — and we don't have — citizenship data tied to photos,
there's no way for us to really know if someone's Australian or not,” he said.

However, it’s unclear whether the OAIC’s 2021 order, supported by the Tribunal’s
review, requires Clearview to delete images of Australian citizens or residents.
The determination merely demands that Clearview “destroy all scraped images … it
has collected from individuals in Australia.”

This, according to Ton-That, suggests either that all images of Australians
should be destroyed, or that all images stored on Australian servers should be
destroyed. The latter interpretation would exclude anything uploaded from
Australia to the US servers of most tech giants.

“I think the only thing that the judge ruled … was to just prevent the
collection of data from Australian IP addresses — so, if a server is located in
Australia,” Ton-That said.

In response to questions from MLex, the OAIC agreed that the Tribunal’s review
did identify the collection of images from servers located in Australia as
falling under the purview of the Privacy Act.

However, the OAIC said that if Clearview continued to operate in Australia after
the Act was amended, on Dec. 13, 2022, then the collection of Australian images
uploaded to foreign servers could also be captured.

And the prospect of Clearview not adhering to the provisions of the amended
Section 5B of the Privacy Act, which outlines the law’s extra-territorial
arrangements, hasn’t been lost on Australian Privacy Commissioner Angelene Falk.

A spokesperson for the commissioner told MLex that Falk was “alert to media
reports that Clearview continues to carry on business in Australia, and is
considering her options in respect of that issue.”

‘CARRYING ON BUSINESS’

Ton-That’s comments to MLex suggest that Clearview is still unclear about how to
respond to the OAIC declarations — despite a legally binding undertaking it had
been required to sign within 90 days of the privacy enforcer’s order.

“I’m from Australia, so it is heartbreaking that the country that I’m from
decided that this is the technology that they don't want in the country, even
though … there's been success really early on from different police agencies in
Australia,” he said.

Ton-That also reiterated earlier claims that Clearview can’t be seen as
“carrying on business in Australia” — the wording of the 1988 Privacy Act that
determines whether foreign companies come under the purview of the law.

“We’re not in the jurisdiction of Australia. We don’t do business there,”
Ton-That said, echoing arguments put to the Administrative Appeals Tribunal in
December 2022.

Clearview claims that, since December 2022, the only activity it has carried
out in Australia has been the collection of images of Australians and the
collection of images and some metadata from servers in Australia. As such,
Ton-That told the tribunal that the company couldn’t be seen as “carrying on
business.”

However, in its ruling, the Administrative Appeals Tribunal said that while
lawmakers’ decision to use the wording “carrying on business in Australia” in
the Privacy Act may be questionable, it nonetheless suggested that what
happens “in Australia” falls under the scope of the law.

In particular, the Tribunal said acquiring images from websites with a .au
domain name amounted to carrying on business in Australia, as did the acquiring
of images from servers located in Australia.

This interpretation of the law was boosted further in March 2023, when the
High Court of Australia, the country’s highest court, ruled in favor of the
OAIC in the unrelated Cambridge Analytica lawsuit targeting Meta Platforms.

A total of four Australian police forces either registered Clearview accounts or
are known to have trialed the facial-recognition software to identify suspects
in criminal investigations: Australian Federal Police, Queensland Police
Service, South Australia Police and Victoria Police.

In December 2021, the OAIC reprimanded the federal police force, saying it had
breached its obligations under the Privacy Act when it agreed to trial the
controversial image-scraping technology in 2019.
