Trust Lab was founded by a team of well-credentialed Big Tech alumni who came together in 2021 with one mission: to make online content moderation more transparent, accountable, and trusted. A year later, the company announced a “strategic partnership” with the CIA’s venture capital firm.
Trust Lab’s basic pitch is simple: internet platforms around the world, like Facebook and YouTube, fail so completely and consistently at content moderation that decisions about which speech to delete should be outsourced to fully independent outside firms, companies like Trust Lab. In a June 2021 blog post, Trust Lab co-founder Tom Siegel described content moderation as “the big problem that big tech can’t solve.” The argument that Trust Lab can solve the unsolvable seems to have caught the attention of In-Q-Tel, a venture capital firm charged with securing technology for the thornier challenges of the CIA, not those of the global internet.
“I am suspicious of startups that tout the status quo as innovation.”
The quiet Oct. 29 partnership announcement is light on details, saying that Trust Lab and In-Q-Tel — which invests in and partners with companies it believes can advance the CIA’s mission — will work on “a long-term project that will help to identify malicious content and actors in order to safeguard the Internet.” Key terms like “malicious” and “safeguard” go unexplained, but the press release goes on to say that the company will work to “detect many types of harmful content online, including toxicity and disinformation.”
While Trust Lab’s stated mission is both comprehensive and grounded in reality — moderating online content has indeed failed — it’s hard to imagine how the startup’s alignment with the CIA is compatible with Siegel’s goal of bringing greater transparency and integrity to internet governance. What, for example, would incubating anti-disinformation technology mean for an agency with an extensive history of perpetuating disinformation? Placing the company within the CIA’s tech pipeline also raises questions about Trust Lab’s view of who or what counts as a “malicious” actor online, a nebulous concept that will no doubt mean something very different to the U.S. intelligence community than it does to the rest of the internet-using world.
No matter how provocative an In-Q-Tel deal might be, much of what Trust Lab is peddling sounds similar to what Facebook and YouTube already attempt internally: a mix of human reviewers and unspecified “machine learning” capabilities to detect and combat content deemed “harmful.”
“I am suspicious of startups that tout the status quo as innovation,” Ángel Díaz, a law professor at the University of Southern California and a scholar of content moderation, wrote in a message to The Intercept. “There is little that separates Trust Lab’s vision of content moderation from that of the tech giants. Both want to expand the use of automation, improve transparency reporting and expand partnerships with government.”
It is unclear how Trust Lab will meet the CIA’s needs. Neither In-Q-Tel nor the company responded to multiple requests for comment. They did not explain what kind of “malicious actors” Trust Lab could help the intelligence community “prevent” from spreading content online, as the October press release puts it.
While details about what exactly Trust Lab sells or how its software works are scarce, the company appears to be in the social media analytics business: algorithmically monitoring social media platforms on behalf of customers and alerting them to the proliferation of certain keywords. In a Bloomberg profile of Trust Lab, Siegel, who previously ran content moderation policy at Google, suggested that a federal internet safety agency would be preferable to Big Tech’s current approach to moderation, which consists mostly of opaque algorithms and thousands of external contractors studying posts and timelines. In his blog post, Siegel calls for greater democratic oversight of online content: “Governments of the free world have shirked their responsibility to keep their citizens safe online.”
Even if Siegel’s vision of something like an environmental protection agency for the web remains a pipe dream, Trust Lab’s murky collaboration with In-Q-Tel suggests a step toward greater government oversight of online discourse, albeit not in the democratic vein outlined in his blog post. “Our technology platform will enable IQT partners to see, on a single dashboard, malicious content that could go viral and gain prominence around the world,” Siegel said in the October press release, which omitted any information about the financial terms of the partnership.
Unlike typical venture capital firms, In-Q-Tel’s “partners” are the CIA and the broader U.S. intelligence community, entities not historically known for exemplifying Trust Lab’s corporate principles of transparency, democratization, and truthfulness. While In-Q-Tel is structured as an independent 501(c)(3) nonprofit, its sole and explicit mission is to advance the interests and enhance the capabilities of the CIA and other intelligence agencies.
Former CIA Director George Tenet, who spearheaded the creation of In-Q-Tel in 1999, described the CIA’s direct relationship with In-Q-Tel in straightforward terms: “The CIA identifies pressing problems, and In-Q-Tel provides the technology to address them.” An official history of In-Q-Tel published on the CIA’s website states: “In-Q-Tel’s mission is to foster the development of new and emerging information technologies and pursue research and development (R&D) that produce solutions to some of the most difficult IT problems facing the CIA.”
Siegel has previously written that the policing of internet discourse must be a “global priority,” but an In-Q-Tel partnership suggests a certain allegiance to Western priorities, Díaz said — an allegiance that may not account for how these moderation policies affect billions of people in the non-Western world.
“Partnerships with Western governments perpetuate a racialized view of which communities pose a threat and which are simply exercising their freedom of speech,” Díaz said. “Trust Lab’s mission statement, which purports to distinguish between ‘free world governments’ and ‘oppressive governments,’ is a troubling preview of what we can expect. What happens when a ‘free’ government treats discussion of anti-Black racism as foreign disinformation, or when social justice activists are labeled ‘racially motivated violent extremists’?”