Despite a surge of activism among tech workers over the ethics of their work, Clarifai, the New York AI startup known for its work on the Defense Department’s controversial Project Maven, announced Thursday that it is pushing further into government work.
The company, which offers computer vision and analytics technology, is launching a Washington, D.C., subsidiary called Neural Net One to focus on its public-sector work. The move comes as the tech world wrestles with big questions over when it’s appropriate to harness AI for military and other government work.
“It’s a really exciting expansion for us,” says CEO and founder Matt Zeiler. “We think it’s a huge opportunity to work more with the government.”
Zeiler says he can’t share all of the potential client list, but explains that the company is in talks with multiple federal agencies, from intelligence organizations like the National Geospatial-Intelligence Agency to agencies interested in automatically analyzing aerial and satellite footage of disasters like wildfires and hurricanes. Those images could be used to scan for signs of people in need of rescue or to analyze damage over wide swaths of terrain.
“It doesn’t scale to throw humans at that problem,” he says.
The company’s computer vision could also be used for monitoring agriculture and climate issues, including analyzing how plants are affected by climate change and how land development affects local climate patterns, he says.
“All these problems are just huge data problems,” says Zeiler.
In addition to its work on Project Maven, Zeiler says the company has worked with government agencies when its automated image moderation tools have spotted child pornography, and it has offered to develop those tools for agencies to use for that purpose.
“As a side effect of one of our models that we use called the moderation model which detects nudity, we actually detect a lot of child porn already and we report it to the government,” he says.
At a crossroads when it comes to government work
Government work has been increasingly controversial in the tech sector recently: Google recently announced it would pull out of Project Maven, a Pentagon initiative focused on using machine learning to examine battlefield images such as those taken with drones, at the end of its current contract after widespread protests. Microsoft employees and hundreds of thousands of members of the public have called on the company to end its work with US Immigration and Customs Enforcement.
Groups like the American Civil Liberties Union and some members of Congress have also raised concerns about Amazon’s Rekognition software, which looks for faces and other objects in images, and its use by law enforcement. Amazon CEO Jeff Bezos has defended the company’s government contracts, including defense work.
“If big tech companies are going to turn their back on US Department of Defense, this country is going to be in trouble,” he said at a Wired magazine event in October.
Zeiler has spoken similarly of his company’s work for the Pentagon.
“Clarifai’s mission is to accelerate the progress of humanity with continually improving AI,” he wrote in a June blog post. “After careful consideration, we determined that the goal for our contribution to Project Maven — to save the lives of soldiers and civilians alike — is unequivocally aligned with our mission.”
Zeiler says Clarifai has been open internally about its government work, letting people voice concerns through surveys and an “open mic” question-and-answer session. A couple of people opted out of the Maven work, including one person who moved to the company’s retail business, and the company generally lets people move from assignment to assignment if they wish, he says.
“That happens all the time,” the CEO says. “People get bored of what they’re working on.”
In June, a former Clarifai marketing executive named Amy Lai filed a complaint with the Defense Department’s Office of Inspector General saying she was forced out for urging the company to report to the Pentagon that a server involved with Maven work was compromised, allegedly by someone from Russia. At the time, the company said it was simply an automated attack on an “isolated research server” not used for customer workloads, and Zeiler declined to comment further.
“We quickly contained the situation and determined the bot did not access any data, algorithms or code,” a Clarifai spokesperson told Fast Company in June. “We voluntarily notified customers following a full assessment, including an external audit and report by a security firm.”
The company is also shifting one manager into a new full-time role focused on minimizing bias, a common concern with AI algorithms and their training data, and on making sure projects and their implementations are ethical, Zeiler says, emphasizing that Clarifai has turned down projects in the past over such concerns.
“There’s opportunities we’ve turned down in the past where we just didn’t see that it could accelerate the progress of humanity,” he says.