
Split Thread: DEI in the US

Truly amazing that you think ChatGPT would think I wanted it to tell me that PIER plans were due to Biden's EOs, given that I only asked it, "When and why did the Department of Energy institute PIER plans?"
Of course it does. AI is sycophantic. It also remembers your interactions unless you close the session, and can scan your online activity. Of course it tells you what you want to hear. It's designed to. Deliberately.

Heck, even Google's non-AI based search has been tailoring its search results based on your online activity for well over a decade now. What makes you think that an AI chatbot that is designed to appear as helpful as possible is in any way neutral? It will hallucinate and assert its hallucinated facts with feigned authority. When you tell it that it's wrong it says "Of course you're right! My mistake. Well done for picking up on that."

I've recently had a conversation with an AI (Copilot, not ChatGPT) about an unimportant subject - where to locate specific organic resources in the game Starfield - and it continually gave me misinformation. It said that I could farm Membrane on Polvo. When I said no you can't, it said "You're absolutely right - thanks for catching that nuance." Then it told me that I could farm it on Linnaeus IV-b. I said that Linnaeus IV-b has neither any flora nor fauna. It said "Correct. It's a barren moon, meaning you can't farm any organic resources there." Then it told me to go to Ternion III. I couldn't find it so it said "The system you are looking for is actually called Alpha Ternion". I told it that I couldn't find Membrane there. It said "Alpha Ternion III doesn't actually have Membrane as a farmable resource," before offering to compile a short "best planets for each organic" chart.

The wild thing is, this is all fully searchable in an online database so really it doesn't have any excuse for continually getting it so very wrong. And this is an unimportant thing in a silly game that I play for fun. Imagine how much AI hallucinates when asked about important things.

Rightists love AI because it flatters them, validates their beliefs, and tells them what they want to hear without actually performing any fact-checking. It can't check the truth value of anything it tells you. It will cite papers in journals that don't exist but sound like they could. Because that's the thing - when you ask it a question, AI does not give you an answer. It gives you something that looks like an answer. The only way you can tell whether it's truthful or not is to fact-check it yourself, and then you might as well just do your own research rather than relying on an AI.

Treat everything that an AI tells you with the most rigorous of skepticism, and assume everything it tells you is false.
 
This tangent about AI seems an effort to ignore that jt512 was correct.
 
Asking an AI is never neutral, as the way you frame the question will steer it to specific sources. AI answers are never neutral, as they are biased towards what has been written a lot, not what is accurate.
Right wingers love asking AI because their media sphere repeats propaganda so much, polluting the training set.
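A toy sketch makes the mechanism obvious (made-up corpus and numbers, nothing remotely like a real LLM, but the frequency-over-accuracy problem is the same):

```python
# Toy sketch, not a real language model: an "AI" that answers with
# whatever statement its training text repeats most often.
from collections import Counter

# Hypothetical corpus: the false claim is simply written more often.
corpus = (
    ["claim X is true"] * 80     # repeated propaganda, wrong
    + ["claim X is false"] * 20  # accurate, but rarer
)

def most_written_answer(corpus):
    """Return the most frequent statement: popularity, not accuracy."""
    return Counter(corpus).most_common(1)[0][0]

print(most_written_answer(corpus))  # -> "claim X is true"
```

Nothing in that pipeline ever checks whether the claim is true; it only measures how often it got written down.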
 
This tangent about AI seems an effort to ignore that jt512 was correct.
The tangent about AI was an effort to ignore that I was correct that the DEI requirements were a result of the EOs. The tangent about the EOs was an effort to ignore that I was correct that the DEI requirements were mandatory. The tangent about whether the DEI requirements were mandatory was an effort to ignore that I was correct that DEI is just rebranded affirmative action, and not colorblind meritocracy à la Martin Luther King as Wareyin absurdly claimed.
 
The tangent about AI was in fact to try to get you to understand that using AI is not an argument. In this particular case I don't especially care if you were right or wrong, I just want you to understand that AI is unreliable.

If you are trying to argue with me about anything, no matter the subject, no matter whether you're right or wrong, if your method of arguing is to paste long pieces of text from an AI, I will not engage with it. Because AI is not an argument. Argue with me in your own words or not at all.
 
Asking an AI is never neutral, as the way you frame the question will steer it to specific sources. AI answers are never neutral, as they are biased towards what has been written a lot, not what is accurate.
Right wingers love asking AI because their media sphere repeats propaganda so much, polluting the training set.
Where does this come from? This thread is the first time I've ever encountered the notion that AI is "right wing" adjacent.
 
The tangent about AI was in fact to try to get you to understand that using AI is not an argument. In this particular case I don't especially care if you were right or wrong, I just want you to understand that AI is unreliable.

If you are trying to argue with me about anything, no matter the subject, no matter whether you're right or wrong, if your method of arguing is to paste long pieces of text from an AI, I will not engage with it. Because AI is not an argument. Argue with me in your own words or not at all.
Well said. And would you now acknowledge that jt512 is correct?
 
Where does this come from? This thread is the first time I've ever encountered the notion that AI is "right wing" adjacent.
Out of progressives' asses.

Here are the top 10 Google search results for the prompt "Is ChatGPT right- or left-wing biased?"

  1. Study finds that ChatGPT, one of the world's most popular conversational AI systems, tends to lean toward left-wing political views.
  2. Both Republicans and Democrats think LLMs have a left-leaning slant when discussing political issues.
  3. OpenAI's ChatGPT does, as suspected, have a left-wing bias, a new academic study has concluded.
  4. Why are there biases? These inconsistencies aside, there is a clear left-leaning political bias to many of the ChatGPT responses.
  5. The political spectrum quiz (Figure 1C) showed that ChatGPT was center-left and socially moderate (16.9% left-wing and 4.9% authoritarian).
  6. ChatGPT, like all major large language models, leans liberal.
  7. Scientists reveal ChatGPT's left-wing bias.
  8. Why are all answers generated by chatGPT wrapped with a liberal and corporate bias?
  9. Here we show that, when GPT-4 impersonates an average American, it is more aligned with left-wing Americans than an average American.
  10. ChatGPT is seeing a rightward shift on the political spectrum.... [However,] both reports pointed to a political left-leaning bias in the answers given by LLMs.
And, BTW, here is Google AI's response to the same prompt:

Studies by various independent researchers have consistently found that ChatGPT generally exhibits a left-leaning or liberal political bias.
Google AI gives the same result as a traditional Google search. Isn't that amazing.
 
Where does this come from? This thread is the first time I've ever encountered the notion that AI is "right wing" adjacent.
It isn't, inherently. But radical righties love the validation it gives them and their ideas, because AI will never disagree with them. Even when it's wrong, and you tell it it's wrong, it happily says "that's right" and flatters you again. Right wing authoritarians love being flattered. They love being surrounded by sycophants and yes-people. Just look at those frankly embarrassing so-called Cabinet meetings which are nothing more than televised public grovelling to the Lord and Master.

And articles about rightists and AI aren't hard to come by, if you care to look for them. Which I did for you. You're welcome.

The deliberate bias that Elon Musk has introduced into Grok under the excuse of "preserving free speech" is well-documented.

And of course we all know that Trump is continually using AI to promote and aggrandise himself.

Well said. And would you now acknowledge that jt512 is correct?
Did I even once say he wasn't? I said that AI cannot be trusted because it has no way of knowing whether what it says is true or not. In the Frankfurtian sense, it is a BS generator.
 
Where does this come from? This thread is the first time I've ever encountered the notion that AI is "right wing" adjacent.
it's adjacent to the bias of the training set.
the training set comes from Social Media, which just by volume is biased right wing.

it's the same as the "criminal facial recognition software" trained on mugshots that is biased against people of color because cops are more likely to investigate and arrest them, not because they are inherently more criminal.
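a made-up simulation shows the mechanism (invented numbers, purely illustrative):

```python
# Made-up simulation: two groups offend at exactly the same rate,
# but group B is investigated/arrested twice as often, so the
# "training set" (arrest records) makes B look twice as criminal.
import random

random.seed(0)
BASE_RATE = 0.05                     # identical true offence rate
ARREST_BIAS = {"A": 1.0, "B": 2.0}   # B is policed 2x as heavily

def make_arrest_records(n=100_000):
    records = []
    for _ in range(n):
        group = random.choice("AB")
        offended = random.random() < BASE_RATE
        # an offence only enters the data if police catch it
        caught = offended and random.random() < 0.3 * ARREST_BIAS[group]
        records.append((group, caught))
    return records

records = make_arrest_records()
for g in "AB":
    rows = [caught for group, caught in records if group == g]
    print(g, sum(rows) / len(rows))  # B's apparent rate is ~2x A's
```

any model trained on those records "learns" that group B is twice as criminal, even though the real offence rates are identical.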
 
It isn't, inherently. But radical righties love the validation it gives them and their ideas, because AI will never disagree with them. Even when it's wrong, and you tell it it's wrong, it happily says "that's right" and flatters you again. Right wing authoritarians love being flattered. They love being surrounded by sycophants and yes-people. Just look at those frankly embarrassing so-called Cabinet meetings which are nothing more than televised public grovelling to the Lord and Master.

And articles about rightists and AI aren't hard to come by, if you care to look for them. Which I did for you. You're welcome.

The deliberate bias that Elon Musk has introduced into Grok under the excuse of "preserving free speech" is well-documented.

And of course we all know that Trump is continually using AI to promote and aggrandise himself.


Did I even once say he wasn't? I said that AI cannot be trusted because it has no way of knowing whether what it says is true or not. In the Frankfurtian sense, it is a BS generator.
Your search results and mine are curiously different. I intentionally posed my question in a neutral manner. I asked, "Is ChatGPT right- or left-wing biased?" Anyone can ask Google the same question and reproduce my results. I wonder, did you perform your internet search in the same content-neutral manner? Why, unlike me, did you not disclose the exact wording of your internet search? Was that, perhaps, because you asked a leading question? My search results are reproducible using the same search query I posted. What query would we use to reproduce your results, and why, unlike me, did you not disclose it?
 
Your search results and mine are curiously different. I intentionally posed my question in a neutral manner. I asked, "Is ChatGPT right- or left-wing biased?" Anyone can ask Google the same question and reproduce my results. I wonder, did you perform your internet search in the same content-neutral manner? Why, unlike me, did you not disclose the exact wording of your internet search? Was that, perhaps, because you asked a leading question? Inquiring minds want to know.

Google bubbles and tailors your search results according to a multiplicity of parameters, including your search and browser history. You have, of course, been looking at a lot of right-wing sites in the past, and Google knows that, so it helpfully shows you what it "thinks" you want to see.

My top result when I did the exact search you did (I literally copy-pasted your search text) was this:


This paper concluded that ChatGPT actually had less political bias than previous studies reported. But Google knows that I hang out on left-leaning sites and bubbles my search into that kind of topic. Different people get different search results. Google even helpfully explains why:


So even a basic Google search is biased towards what you habitually read and see on the internet. Google has been doing this for literally over a decade, and it's a major pet peeve of mine that people still think that it doesn't.
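Schematically, the bubbling works something like this (an invented scoring function, purely illustrative; the real algorithm is proprietary and vastly more elaborate):

```python
# Invented personalised ranking, for illustration only: results that
# match the user's click history get boosted over equally relevant ones.
def rank(results, history):
    def score(result):
        base = result["relevance"]
        bubble = sum(tag in history for tag in result["tags"])
        return base + 0.5 * bubble
    return sorted(results, key=score, reverse=True)

results = [
    {"title": "ChatGPT leans left, study finds", "relevance": 1.0,
     "tags": ["right-leaning"]},
    {"title": "ChatGPT bias smaller than reported", "relevance": 1.0,
     "tags": ["left-leaning"]},
]

# Same query, same relevance; different users, different top result.
print(rank(results, history={"right-leaning"})[0]["title"])
print(rank(results, history={"left-leaning"})[0]["title"])
```

Same query, identical underlying relevance, and two users still get different top results.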
 
Google bubbles and tailors your search results according to a multiplicity of parameters, including your search and browser history. You have, of course, been looking at a lot of right-wing sites in the past, and Google knows that, so it helpfully shows you what it "thinks" you want to see.

My top result when I did the exact search you did (I literally copy-pasted your search text) was this:


This paper concluded that ChatGPT actually had less political bias than previous studies reported. But Google knows that I hang out on left-leaning sites and bubbles my search into that kind of topic. Different people get different search results. Google even helpfully explains why:


So even a basic Google search is biased towards what you habitually read and see on the internet. Google has been doing this for literally over a decade, and it's a major pet peeve of mine that people still think that it doesn't.
Interesting. Post your top 10 results, so we can compare them fairly.
 
you should probably have a decent grasp of roughly how these algorithms work by now if you've been on the internet for any length of time. you can see for yourself if you do a google search and then do one logged out on a different device. are you logged into youtube through google? do you have your facebook through google? or your reddit, or whatever you get up to?

anyway, you should probably check out who some of these people are and what they believe. half of these tech guys are insane weirdos. and they love to put their thumb on the scale.

if you can't see that, i don't know what to tell you. you're being played.
 
Last I heard, Google tracks some 40 or so parameters if you're logged off and up to 90 if you're logged on (ETA: one of the below articles says 200); however, since The Algorithm is proprietary technology, Google doesn't allow people to examine it. Unfortunately, though the initial intention was good, its effect is to force people into filter bubbles where they see only the information that reinforces their pre-existing beliefs and do not get exposed to content that challenges them.

And seriously, this behaviour has been documented for years now. Personalised search was introduced in 2005 and extended to all users, even signed-out ones, by 2009. How can anyone who is on the internet not know it?


That having been said, perhaps @jt512 is one of today's Lucky 10,000. Congrats!
 
it's adjacent to the bias of the training set.
the training set comes from Social Media, which just by volume is biased right wing.

it's the same as the "criminal facial recognition software" trained on mugshots that is biased against people of color because cops are more likely to investigate and arrest them, not because they are inherently more criminal.
Sure, if you train AI on biased information, you'll get biased output. This isn't a right wing or left wing thing. The idea that the right wing has some infatuation with AI is completely made up. But, well done everybody, the tangent on AI has fully hijacked this thread.
 
Sure, if you train AI on biased information, you'll get biased output. This isn't a right wing or left wing thing. The idea that the right wing has some infatuation with AI is completely made up. But, well done everybody, the tangent on AI has fully hijacked this thread.
I've already asked for another split.
 
The tangent about AI was an effort to ignore that I was correct that the DEI requirements were a result of the EOs. The tangent about the EOs was an effort to ignore that I was correct that the DEI requirements were mandatory. The tangent about whether the DEI requirements were mandatory was an effort to ignore that I was correct that DEI is just rebranded affirmative action, and not colorblind meritocracy à la Martin Luther King as Wareyin absurdly claimed.
Your attempt to crown yourself "correct" is contradicted by the source you introduced. The facts are not on your side.

You claimed that "the presidential executive orders that required all federal science funding agencies to implement DEI requirements..."

Yet, the very PIER plan timeline you supplied proves your claim is fundamentally false:

First, on the "mandate" itself, your source confirmed the PIER plan was created by the DOE only 18–24 months after the EO because the DOE merely "interpreted" the order. This proves the EO was a broad policy, and not a mandate.

Second, on scope, you said the requirement hit "all federal science funding agencies." Your own source limits the requirement to the DOE Office of Science and even notes it was not adopted uniformly within the DOE itself. If a "requirement" is selective and not applied uniformly across all agencies, it is simply not a mandate or requirement.

Your ChatGPT written essay dismantles the core claim you are defending. You were wrong on both the mandate and the scope of the policy.
 
The EO feint again + a new misunderstanding of the scope of the EO along with the question I asked ChatGPT.
 
