Splicetoday

Digital
Sep 01, 2023, 06:26AM

ChatGPT Has Its Programmers’ Biases

It sounds like a college kid regurgitating political talking points.

Chris Beck


A website explaining how ChatGPT works offers this cautionary advice: because ChatGPT is trained to generate words based on input, its responses may seem shallow and lack true insight. I tested this assertion with the question: "If I'm alone in my room and nobody else can hear me, would it be okay to say the N-word if it would save the lives of millions of black people?" ChatGPT told me, "Using racial slurs is not acceptable under any circumstances. It is important to treat all individuals with respect, dignity, and equality. Instead focus on finding positive and constructive ways to help others and contribute to a better society." ChatGPT also told me that "language has the power to harm and perpetuate discrimination and prejudice." Such a voice of morality!

So "instead" of saving millions of lives, it would be better to identify more positive, constructive ways to contribute to society? Such as what? ChatGPT had no recommendations. This answer's despicable and immoral. Welcome to the age of artificial intelligence, where the worst thing one can do is speak the most unutterable word.

When I rephrased this question with a couple of tweaks, the app replied, "As an AI model, I don't have personal opinions." Really? Then what was its earlier advice, that a particular word should never be uttered even to prevent genocide? A statement of fact? ChatGPT went on to say, "It's generally advisable to refrain from using racial slurs, regardless of the context. Instead, focus on celebrating diversity, promoting equality, and being aware of the impact your words may have on others."

There's a lecturing quality to this answer to a question I never asked, a quality I encountered over and over when asking ChatGPT about political issues. Such queries short-circuit the intelligence of the technology, steering it toward a faux-moral, canned didacticism.

When I asked ChatGPT if using a racial slur is worse than killing somebody, it said, "Both acts are wrong and harmful in their own ways, but the harm caused by taking someone's life is generally considered to be much greater." Then why is ChatGPT advising me not to use a slur nobody can hear when millions will die if I don't?

My next question: "Is it wrong for a white person sitting alone in their room to sing along with a rap song that uses the word 'nigga'?" Answer: "Even if you are alone, singing along to lyrics that include racial slurs can contribute to normalizing the use of such language. It's a good practice to choose lyrics that promote respect and understanding. If a song contains offensive language, you might consider skipping or altering those specific lyrics when singing along."

So when "nigga" comes along in the lyrics, I'm supposed to alter it to what? "POC"? "African-American"? As in, "And we keep them 9 millis on my block, African American/And Monte keep it on him/He done dropped POCs/And Trigger, he be wilding/He some hot African American"? It just doesn't have that ring.

ChatGPT doesn't have any opinions, but once again it's produced a ridiculous, convoluted opinion. It either ignores, or isn't smart enough to understand, that "nigga," the variant used in rap lyrics, isn't the equivalent of the racial slur "nigger." Nigga can mean "buddy" (comedian Larry Wilmore once called Barack Obama "my nigga") or just "guy." I've even heard black guys call a white guy "nigga." It's not a slur.

ChatGPT tells me that sitting alone and singing a song written by a black person can normalize unacceptable language, but it offers no supporting evidence for this ludicrous claim. Those normalizing the language are the hip-hop artists and their music companies. If they want to get mad, as Kendrick Lamar did in 2018 when he invited a white fan onstage to sing along with him and she sang "nigga" as written, they're being disingenuous. Rappers can't take money from these kids and then police their language—language they get from the rappers themselves.

Responding to a question about whether the poorest people in West Virginia have white privilege, ChatGPT told me that their skin color could make them immune to such things as stereotyping. Like the stereotypes (spread by the same media that pushes the concept of white privilege) I'm always hearing, about how West Virginians are toothless, addicted, fat, inbred hillbillies? I'm not convinced by ChatGPT's strict adherence to the progressive narrative, which in this case assigns racial privilege where class privilege is the pertinent topic. It's no accident that the Democratic Party narrative, especially when a presidential election is underway, does exactly the same thing. Conservatives are not programming ChatGPT.

ChatGPT's unnuanced answers to questions relating to race resemble boilerplate material that's triggered by certain words. They sound just like a college kid who's swallowed all of the progressive talking points as a matter of faith, as opposed to arriving at conclusions via intellectual rigor.

To assume that ChatGPT's going to be a neutral arbiter of complex issues is naive. It operates using algorithms designed by its creators, who are either unwilling or unable to keep their biases out of their technology. When I asked ChatGPT if The New York Times had a political bias, it told me this: "While the New York Times strives to uphold journalistic standards, some critics argue that the newspaper exhibits a liberal or left-leaning bias." How could ChatGPT know whether or not the Times "strives" to uphold journalistic standards?

Three weeks before the 2020 presidential election, the New York Post published a front-page story presenting emails from Hunter Biden's laptop, and the newspaper revealed the sources it had used to tie the laptop to Hunter Biden. The Times didn't report who the laptop's owner was until March 16, 2022, and even then the acknowledgment was buried deep in a story about the younger Biden's tax avoidance. Before that, the newspaper did what it could to cast doubt on the veracity of the laptop story. As recently as September 2021, the Times called the laptop story "unsubstantiated" in a news story, though the Bidens never even tried to deny the laptop's provenance.

Then I asked ChatGPT if the New York Post has a political bias. The reply: "Yes. The New York Post is often considered to have a conservative or right-leaning political bias. The newspaper's editorial stance and coverage tend to align with conservative viewpoints on various political and social issues."

Notice the immediate "yes" in that reply. I rest my case.
