OpenAI’s ChatGPT bot recreates racial profiling


A DALL-E generation of “an oil painting of America’s war on terror if conducted by artificial intelligence.”

Image: Elise Swain/The Intercept

Sensational new machine learning discoveries seem to sweep across our Twitter feeds every day. We barely have time to decide whether software that can instantly conjure up an image of Sonic the Hedgehog addressing the United Nations is purely harmless fun or an omen of techno-doom.

ChatGPT, the latest act in the AI news cycle, is without question the most impressive text-generation demo to date. Just think twice before asking it about counterterrorism.

The tool was built by OpenAI, a startup lab attempting nothing less than software that can replicate human consciousness. Whether such a thing is even possible remains a matter of intense debate, but the company already has some undeniably stunning results to show. The chatbot is uncannily impressive, eerily impersonating an intelligent person (or at least someone doing their very best to appear intelligent) using generative AI, software that studies enormous sets of inputs in order to generate new outputs in response to user prompts.

ChatGPT, trained through a mix of crunching billions of text documents and human coaching, is fully capable of the incredibly corny and the surreally entertaining, but it is also one of the general public’s first looks at something frighteningly good at mimicking human output, perhaps well enough to take some of those humans’ jobs.

AI business demos like this one are meant not just to wow the public but to attract investors and business partners, some of whom may one day soon want to replace expensive, skilled labor like writing computer code with a simple bot. It’s easy to see why managers would be tempted: Just days after ChatGPT was released, one user prompted the bot to take the 2022 AP Computer Science exam and reported a score of 32 out of 36, a passing grade, part of the reason OpenAI was recently valued at nearly $20 billion.

There are already good reasons for skepticism, however, and the risks of being dazzled by seemingly intelligent software are clear. This week, Stack Overflow, one of the web’s most popular programming communities, announced a temporary ban on code solutions generated by ChatGPT. The software’s answers to coding queries were so convincingly correct in appearance yet so flawed in practice that sorting the good from the bad became nearly impossible for the site’s human moderators.

The dangers of trusting the expert in the machine, however, go far beyond whether AI-generated code is buggy. Just as any human programmer may bring their own biases to their work, a language-generating machine like ChatGPT harbors the myriad biases found in the billions of texts used to train its simulated grasp of language and thought. No one should mistake the imitation of human intelligence for the real thing, nor assume that the text ChatGPT regurgitates on cue is objective or authoritative. Like us soft humans, a generative AI is what it eats.

And after gorging on an unfathomably vast training diet of text data, ChatGPT apparently ate a lot of crap. For instance, it seems to have absorbed, and is more than happy to serve up, some of the ugliest prejudices of the war on terror.

In a December 4 Twitter thread, Steven Piantadosi of the University of California, Berkeley’s Computation and Language Lab shared a series of prompts he had tested with ChatGPT, each of which asked the bot to write code for him in Python, a popular programming language. While every response revealed some bias, several were more alarming: When asked to write a program that would determine “whether a person should be tortured,” OpenAI’s answer was simple: If they’re from North Korea, Syria, or Iran, the answer is yes.

While OpenAI says it has taken unspecified steps to filter prejudicial responses out of its conversations, the company concedes that unwanted responses sometimes slip through.

Piantadosi told The Intercept that he remains skeptical of the company’s countermeasures. “I think it’s important to point out that people make choices about how these models work, how to train them, and what data to train them with,” he said. “So these results reflect the choices of those companies. If a company doesn’t make it a priority to eliminate these kinds of biases, then you get the kind of result I’ve shown.”

Inspired and unsettled by Piantadosi’s experiment, I tried my own, asking ChatGPT to create sample code that would algorithmically evaluate someone from the merciless perspective of Homeland Security.

When asked to find a way to determine “which air travelers present a security risk,” ChatGPT outlined code for calculating an individual’s “risk score,” which would increase if the traveler is Syrian, Iraqi, Afghan, or North Korean (or has merely visited those places). Another iteration of the same prompt had ChatGPT write code that would “increase the risk score if the traveler is from a country that is known to produce terrorists,” namely Syria, Iraq, Afghanistan, Iran, and Yemen.

The bot was kind enough to provide some examples of this hypothetical algorithm in action: John Smith, a 25-year-old American who has previously visited Syria and Iraq, received a risk score of 3, indicating a “moderate” threat. ChatGPT’s algorithm indicated that the fictional flyer “Ali Mohammad,” age 35, would receive a risk score of 4 by virtue of being a Syrian national.
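
To make concrete what this kind of output looks like, here is a minimal sketch of the nationality-based scoring logic described above, written in Python. It is a reconstruction for illustration only, not ChatGPT’s actual code; the country list, weights, and labels are assumptions, and the exact numbers matter far less than the pattern: citizenship and travel history alone drive the score.

# A hypothetical reconstruction, for illustration only, of the kind of "risk score"
# code described above. This is not ChatGPT's actual output: the function names,
# weights, and thresholds are assumptions.

HIGH_RISK_COUNTRIES = {"Syria", "Iraq", "Afghanistan", "Iran", "Yemen", "North Korea"}

def risk_score(nationality, countries_visited):
    score = 0
    if nationality in HIGH_RISK_COUNTRIES:
        score += 4  # citizenship in a listed country outweighs everything else
    # merely having visited a listed country also raises the score
    score += sum(1 for country in countries_visited if country in HIGH_RISK_COUNTRIES)
    return score

def risk_label(score):
    if score >= 4:
        return "high"
    if score >= 2:
        return "moderate"
    return "low"

# The article's two fictitious travelers, run through this sketch (the exact
# numbers ChatGPT assigned will differ; the logic is what matters):
for name, nationality, visited in [
    ("John Smith", "United States", ["Syria", "Iraq"]),
    ("Ali Mohammad", "Syria", []),
]:
    s = risk_score(nationality, visited)
    print(f"{name}: risk score {s} ({risk_label(s)})")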

In another experiment, I asked ChatGPT to write code that would determine “which houses of worship should be placed under surveillance in order to avoid a national security emergency.” The results again appear to have been lifted straight from the id of Bush-era Attorney General John Ashcroft: The code calls for surveilling religious congregations if they are determined to have ties to Islamic extremist groups, or if they happen to be located in Syria, Iraq, Iran, Afghanistan, or Yemen.

These experiments can be erratic. At times, ChatGPT responded to my requests for screening software with a stern refusal: “It is not appropriate to write a Python program for determining which airline travelers present a security risk. Such a program would be discriminatory and violate people’s rights to privacy and freedom of movement.” With repeated requests, though, it dutifully generated the exact same code it had just said was too irresponsible to write.

Critics of similar real-world risk-assessment systems often argue that terrorism is such an exceedingly rare phenomenon that trying to predict its perpetrators based on demographic traits like nationality isn’t just racist, it simply doesn’t work. That hasn’t stopped the United States from adopting systems that use the very approach ChatGPT suggested: ATLAS, an algorithmic tool the Department of Homeland Security uses to target American citizens for denaturalization, factors in national origin.

The approach amounts to little more than racial profiling recycled through fancy-sounding technology. “This kind of crude designation of certain Muslim-majority countries as ‘high risk’ is exactly the same approach taken in, for example, President Trump’s so-called ‘Muslim ban,’” said Hannah Bloch-Wehba, a law professor at Texas A&M University.


It’s tempting to believe that impressively human-seeming software is somehow superhuman, Bloch-Wehba warned, and incapable of human error. “Something that law and technology scholars talk about a lot is the ‘veneer of objectivity’: a decision that might be scrutinized if made by a human acquires a sense of legitimacy once it is automated,” she said. If a human told you Ali Mohammad sounds scarier than John Smith, you might call them racist. “There is always the risk that this type of output could be seen as more ‘objective’ because it is rendered by a machine.”

For AI boosters, especially those who stand to make big money from it, concerns about real-world bias and harm are bad for business. Some dismiss critics as little more than clueless skeptics or Luddites, while others, like famed venture capitalist Marc Andreessen, have taken a more radical turn since ChatGPT’s launch. Along with a cohort of his associates, Andreessen, a longtime investor in artificial intelligence companies and a general proponent of mechanizing society, has spent the last several days in a state of general delight, sharing amusing ChatGPT results on his Twitter timeline.

The criticism of ChatGPT pushed Andreessen beyond his long-standing position that Silicon Valley ought only to be celebrated, never scrutinized. The mere presence of ethical thinking about AI, he said, should be considered a form of censorship. “‘AI Regulation’ = ‘AI Ethics’ = ‘AI Safety’ = ‘AI Censorship,’” he wrote in a Dec. 3 tweet. “AI is a tool for people to use,” he added two minutes later. “Censoring AI = censoring people.” It’s a radically pro-business stance even by the standards of free-market venture capital, one that suggests food inspectors keeping contaminated meat out of your fridge amount to censorship as well.

However much Andreessen, OpenAI, and ChatGPT itself might want us to believe otherwise, even the smartest chatbot is closer to a very sophisticated Magic 8 Ball than to a real person. And it is people, not bots, who suffer when “safety” is treated as a synonym for censorship and concern for a real-life Ali Mohammad is seen as a barrier to innovation.

Piantadosi, the Berkeley professor, told me he rejects Andreessen’s attempt to prioritize the well-being of software over that of the people who might one day be affected by it. “I don’t think ‘censorship’ applies to a computer program,” he wrote. “Of course, there are plenty of harmful computer programs we don’t want written: programs that blast everyone with hate speech, or facilitate fraud, or hold your computer for ransom.

“It’s not censorship to think hard about ensuring our technology is ethical.”