Split Thread: DEI in the US

The paper showed that all U.S. science funding agencies required funding applications to include a DEI component. Two independent peer reviewers and a decision editor agreed and published the paper. If you, in contrast, believe that the paper did not show what the authors, the reviewers, and the editor thought it shows, then you should write a rebuttal and get it peer reviewed and published.
 
The paper showed that all U.S. science funding agencies required funding applications to include a DEI component. Two independent peer reviewers and a decision editor agreed and published the paper. If you, in contrast, believe that the paper did not show what the authors, the reviewers, and the editor thought it shows, then you should write a rebuttal and get it peer reviewed and published.
The paper did claim that all U.S. science funding agencies require a DEI component. However, the paper also claimed these supposed DEI mandates were implemented by Executive Order. Two independent peer reviewers and a decision editor agreed and published that, despite the fact that the EOs did not mandate such specific requirements.

In Section 2.3, the paper claims that DEI violates or conflicts with the First Amendment. Their citation for this claim (that researchers are forced to say things they do not believe are true) is a dead link. Yet two independent peer reviewers and a decision editor agreed and published this point, despite the fact that accepting federal funding is voluntary and does not constitute a free speech violation.

The paper also explicitly claimed in Section 2.3 that hiring quotas are DEI requirements, but then immediately undercut this in the next paragraph by shifting the terminology to "de facto quotas." Not only do hiring quotas not exist within DEI policies, this rhetorical move implicitly acknowledges that the authors know this. Somehow those two independent peer reviewers and a decision editor missed that sleight of hand and agreed and published this anyway.
 
If you believe that, you should write it up and submit it for peer review. Let’s see if the reviewers agree with your claims.
 
If you believe that, you should write it up and submit it for peer review. Let’s see if the reviewers agree with your claims.
My being correct about the paper's errors is not conditional on submitting my own critique for peer review. The validity of my points (that the paper misstates the EO mandate, misrepresents constitutional law, and contains internal contradictions on quotas) is verifiable right now, in the text of the paper itself.

If you believe my critique is wrong, you are welcome to step out from behind the skirts of Efimov et al. and defend those provably wrong claims yourself. I read his interview on t-invariant.org. The dude has a clear ideological axe to grind and isn't going to let a little thing like factual accuracy get in the way of his politically motivated pablum.
 
I’ve already explained why your claims are wrong. You’ve made your claims (repeatedly). The authors, reviewers, the editor, and I think you’re wrong. If you think everybody but you is wrong, get your rebuttal peer reviewed and published.

And you, with your blatant misrepresentation of DEI, are clearly the one who is being motivated by ideology.

ETA: We can add the editors of T-invariant to the list of scientists who know that the mandate to require science grants to include a DEI component stemmed from Biden's EOs. Seems, actually, that everybody but you knows this.
 
I’ve already explained why your claims are wrong. You’ve made your claims (repeatedly). The authors, reviewers, the editor, and I think you’re wrong. If you think everybody but you is wrong, get your rebuttal peer reviewed and published.

And you, with your blatant misrepresentation of DEI, are clearly the one who is being motivated by ideology.

ETA: We can add the editors of T-invariant to the list of scientists who know that the mandate to require science grants to include a DEI component stemmed from Biden's EOs. Seems, actually, that everybody but you knows this.
Your defense is a continuous loop. You have not, at any point, explained why my claims are wrong; you have only appealed to the authority of the very authors and reviewers who failed to catch the errors.

I have documented three distinct and verifiable factual errors in the published paper's own text. Your entire argument rests on citing the people who missed these errors. That is not a rebuttal.

You should read up on t-invariant before jumping in bed with them, by the way.
 
You should read up on t-invariant before jumping in bed with them, by the way.
I know all about t-invariant. What's your problem with them? I literally can't find a single objectionable sentence on their entire about page. On the contrary, they seem like a laudable organization.
 
I know all about t-invariant. What's your problem with them? I literally can't find a single objectionable sentence on their entire about page. On the contrary, they seem like a laudable organization.
Oh, I read more than their about page. I paid attention to what they write, how they write it, and how they frame the opposing views. (Marxism? come on)
 
Oh, I read more than their about page. I paid attention to what they write, how they write it, and how they frame the opposing views. (Marxism? come on)
So what do you find objectionable? The very first interview on their home page is actually pro-DEI. The founders grew up under Soviet rule. So, yeah, they're anti-Marxism.
 
Let's not forget that the rightist fearmongering crusade against what they call "DEI" is being used to end and suppress real world policies and initiatives that actually promote diversity, equity, and inclusion.
Very much this. While T-invariant may include one article expressing the value of DEI, that single piece doesn't negate the platform's overall editorial pattern. The content I found shows a clear editorial imbalance: four anti-DEI pieces (three interviews and one article) to one pro-DEI interview. That is ideological bias and scaremongering. It doesn't help that their contributors and interviewees are exclusively white and overwhelmingly male. The content also consistently echoes the same fundamental, verifiable errors found in the Efimov paper: misrepresenting Executive Orders as direct mandates, asserting the existence of quotas, falsely claiming that DEI prevents scientific organizations from hiring the best people, etc.
 
I asked ChatGPT 5.2 one simple question: "When and why did the Department of Energy institute PIER plans?"

Here is the response (let me know if the color is not readable on your screen and I will remove it):

The DOE “PIER Plan” requirement—PIER = Promoting Inclusive and Equitable Research—was instituted during the Biden administration, with requirements rolling out in 2022 and taking effect broadly for FY 2023 funding opportunities in the DOE Office of Science.

Below is a clear timeline and rationale.



When did DOE institute PIER Plans?

Key milestones

  • January 20, 2021
    President Biden issued Executive Order 13985: “Advancing Racial Equity and Support for Underserved Communities Through the Federal Government.”
  • 2021–2022
    Federal agencies, including DOE, conducted internal equity assessments and developed agency-specific implementation plans.
  • Mid–late 2022
    The DOE Office of Science announced that many FY 2023 Funding Opportunity Announcements (FOAs) would require a PIER Plan as part of the proposal.
  • FY 2023 (and continuing into FY 2024–2025)
    PIER Plans became a standard proposal component for many Office of Science programs (with some variation by program and FOA).
So: PIER plans were not a Trump-era policy; they are a Biden-era requirement, operationalized about 18–24 months after EO 13985.


Why did DOE institute PIER Plans?

1. Direct response to Biden executive orders

PIER Plans are explicitly justified by:

  • EO 13985 (racial equity)
  • EO 14035 (diversity, equity, inclusion, and accessibility in the federal workforce)
  • OMB and OSTP guidance instructing agencies to embed equity goals into grantmaking, not just internal hiring.
DOE interpreted this as requiring equity considerations in funded research activities, not merely in agency operations.


2. Shift from demographics to “process”

DOE framed PIER Plans as:

  • Not quotas
  • Not demographic targets
  • Not identity-based scoring
Instead, they are described as plans addressing:

  • mentoring practices
  • access to research opportunities
  • inclusive team culture
  • outreach, training, and professional development
  • removal of structural barriers in labs and collaborations
This framing was likely chosen to reduce legal vulnerability after:

  • heightened scrutiny of race-conscious policies
  • Supreme Court rulings on admissions (2023)
  • Title VI concerns

3. Alignment with NSF-style “Broader Impacts”

Internally, DOE has often described PIER Plans as analogous to:

  • NSF’s Broader Impacts criterion, but
  • focused specifically on equity and inclusion in the research enterprise
This represented a major cultural shift for DOE Office of Science, which historically emphasized:

scientific merit + mission relevance, with minimal social criteria

4. Political and institutional signaling

PIER Plans also served a signaling function:

  • Demonstrating DOE’s compliance with White House priorities
  • Showing alignment with OSTP’s “Equity in STEM” agenda
  • Insulating the agency from accusations of ignoring EO 13985
In other words: they are as much a governance instrument as a scientific one.


Important nuances

  • Not all DOE programs adopted PIER Plans uniformly
    Some applied them lightly; others made them formally reviewable but “non-scored”; still others integrated them into merit review.
  • Implementation varied widely across panels
    Reviewers were often given minimal guidance, leading to inconsistency and confusion—something many PIs quietly complained about.
  • DOE avoided calling them “DEI statements”
    The term PIER was deliberately chosen to sound procedural and neutral, rather than ideological.

Bottom line

DOE instituted PIER Plans in 2022–2023 because it interpreted Biden's equity executive orders as requiring equity criteria in federal research funding, and chose a process-oriented, proposal-embedded mechanism to demonstrate compliance.
 
I asked ChatGPT...
There's a reason rightists love AI so much. It validates everything anyone says to it. "That's a great question!" "You're absolutely right." "Would you like me to...?" AI is a sycophant, and rightists absolutely love sycophants.


Everything you get from ChatGPT should be fact-checked. You know - treated with... what's the word?.. Skepticism.
 
There's a reason rightists love AI so much. It validates everything anyone says to it. "That's a great question!" "You're absolutely right." "Would you like me to...?" AI is a sycophant, and rightists absolutely love sycophants.


Everything you get from ChatGPT should be fact-checked. You know - treated with... what's the word?.. Skepticism.
I wasn't aware "rightists" loved AI. My recent recollection is that "rightists" distrusted AI because it had a clear lefty bias (garbage in/garbage out). But I agree with you fully - don't just trust the AI output, you've got to verify it.
 
It's amazing that when asked a completely open-ended question, "When and why did the Department of Energy institute PIER plans?", the AI traced them back to Biden's EOs. Completely amazing.
 
Yes, truly amazing, when I just posted evidence that AIs will tell you exactly what you want them to.
Truly amazing that you think ChatGPT would think I wanted it to tell me that PIER Plans were due to Biden's EOs, given that I only asked it, "When and why did the Department of Energy institute PIER plans?"
 
Truly amazing that you think ChatGPT would think I wanted it to tell me that PIER Plans were due to Biden's EOs, given that I only asked it, "When and why did the Department of Energy institute PIER plans?"
Of course it does. AI is sycophantic. It also remembers your interactions unless you close the session, and can scan your online activity. Of course it tells you what you want to hear. It's designed to. Deliberately.

Heck, even Google's non-AI based search has been tailoring its search results based on your online activity for well over a decade now. What makes you think that an AI chatbot that is designed to appear as helpful as possible is in any way neutral? It will hallucinate and assert its hallucinated facts with feigned authority. When you tell it that it's wrong it says "Of course you're right! My mistake. Well done for picking up on that."

I've recently had a conversation with an AI (Copilot, not ChatGPT) about an unimportant subject - where to locate specific organic resources in the game Starfield - and it continually gave me misinformation. It said that I could farm Membrane on Polvo. When I said no you can't, it said "You're absolutely right - thanks for catching that nuance." Then it told me that I could farm it on Linnaeus IV-b. I said that Linnaeus IV-b has neither any flora nor fauna. It said "Correct. It's a barren moon, meaning you can't farm any organic resources there." Then it told me to go to Ternion III. I couldn't find it so it said "The system you are looking for is actually called Alpha Ternion". I told it that I couldn't find Membrane there. It said "Alpha Ternion III doesn't actually have Membrane as a farmable resource," before offering to compile a short "best planets for each organic" chart.

The wild thing is, this is all fully searchable in an online database so really it doesn't have any excuse for continually getting it so very wrong. And this is an unimportant thing in a silly game that I play for fun. Imagine how much AI hallucinates when asked about important things.

Rightists love AI because it flatters them, validates their beliefs, and tells them what they want to hear without actually performing any fact-checking. It can't check the truth value of anything it tells you. It will cite papers in journals that don't exist but sound like they could. Because that's the thing - when you ask it a question, AI does not give you an answer. It gives you something that looks like an answer. The only way you can tell whether it's truthful or not is to fact-check it yourself, and then you might as well just do your own research rather than relying on an AI.

Treat everything that an AI tells you with the most rigorous of skepticism, and assume everything it tells you is false.
 
This tangent about AI seems an effort to ignore that jt512 was correct.
 
Asking an AI is never neutral, as the way you frame the question will steer it to specific sources. AI answers are never neutral, as they are biased towards what has been written a lot, not what is accurate.
Right wingers love asking AI because their media sphere repeats propaganda so much, polluting the training set.
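To make that frequency point concrete, here is a minimal toy sketch in Python (the ten-line "corpus" and its counts are entirely invented) of why a frequency-trained text model reproduces volume, not accuracy:

# Toy sketch with an invented corpus: a purely frequency-based "model"
# answers with whatever its training data says most often, true or not.
from collections import Counter

corpus = (
    ["the earth is flat"] * 7     # repeated loudly and often (wrong)
    + ["the earth is round"] * 3  # stated less often (right)
)

# "Train": count how each sentence ends. "Answer": emit the most common ending.
endings = Counter(line.split()[-1] for line in corpus)
print(endings.most_common(1))  # [('flat', 7)] -- sheer volume wins

Real LLMs are enormously more sophisticated than a word counter, but the training objective is still "predict what the corpus says", not "predict what is true".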
 
This tangent about AI seems an effort to ignore that jt512 was correct.
The tangent about AI was an effort to ignore that I was correct that the DEI requirements were a result of the EOs. The tangent about the EOs was an effort to ignore that I was correct that the DEI requirements were mandatory. The tangent about whether the DEI requirements were mandatory was an effort to ignore that I was correct that DEI is just rebranded affirmative action, and not colorblind meritocracy à la Martin Luther King as Wareyin absurdly claimed.
 
The tangent about AI was in fact to try to get you to understand that using AI is not an argument. In this particular case I don't especially care if you were right or wrong, I just want you to understand that AI is unreliable.

If you are trying to argue with me about anything, no matter the subject, no matter whether you're right or wrong, if your method of arguing is to paste long pieces of text from an AI, I will not engage with it. Because AI is not an argument. Argue with me in your own words or not at all.
 
Asking an AI is never neutral, as the way you frame the question will steer it to specific sources. AI answers are never neutral, as they are biased towards what has been written a lot, not what is accurate.
Right wingers love asking AI because their media sphere repeats propaganda so much, polluting the training set.
Where does this come from? This thread is the first time I've ever encountered the notion that AI is "right wing" adjacent.
 
The tangent about AI was in fact to try to get you to understand that using AI is not an argument. In this particular case I don't especially care if you were right or wrong, I just want you to understand that AI is unreliable.

If you are trying to argue with me about anything, no matter the subject, no matter whether you're right or wrong, if your method of arguing is to paste long pieces of text from an AI, I will not engage with it. Because AI is not an argument. Argue with me in your own words or not at all.
Well said. And would you now acknowledge that jt512 is correct?
 
Where does this come from? This thread is the first time I've ever encountered the notion that AI is "right wing" adjacent.
Out of progressives' asses.

Here are the top 10 Google search results for the query "Is ChatGPT right- or left-wing biased?"

  1. Study finds that ChatGPT, one of the world's most popular conversational AI systems, tends to lean toward left-wing political views.
  2. Both Republicans and Democrats think LLMs have a left-leaning slant when discussing political issues.
  3. OpenAI's ChatGPT does, as suspected, have a left-wing bias, a new academic study has concluded.
  4. Why are there biases? These inconsistencies aside, there is a clear left-leaning political bias to many of the ChatGPT responses.
  5. The political spectrum quiz (Figure 1C) showed that ChatGPT was center-left and socially moderate (16.9% left-wing and 4.9% authoritarian).
  6. ChatGPT, like all major large language models, leans liberal.
  7. Scientists reveal ChatGPT's left-wing bias.
  8. Why are all answers generated by chatGPT wrapped with a liberal and corporate bias?
  9. Here we show that, when GPT-4 impersonates an average American, it is more aligned with left-wing Americans than an average American.
  10. ChatGPT is seeing a rightward shift on the political spectrum.... [However,] both reports pointed to a political left-leaning bias in the answers given by LLMs.
And, BTW, here is Google AI's response to the same prompt:

Studies by various independent researchers have consistently found that ChatGPT generally exhibits a left-leaning or liberal political bias.
Google AI gives the same result as a traditional Google search. Isn't that amazing.
 
Where does this come from? This thread is the first time I've ever encountered the notion that AI is "right wing" adjacent.
It isn't, inherently. But radical righties love the validation it gives them and their ideas, because AI will never disagree with them. Even when it's wrong, and you tell it it's wrong, it happily says "that's right" and flatters you again. Right wing authoritarians love being flattered. They love being surrounded by sycophants and yes-people. Just look at those frankly embarrassing so-called Cabinet meetings which are nothing more than televised public grovelling to the Lord and Master.

And articles about rightists and AI aren't hard to come by, if you care to look for them. Which I did for you. You're welcome.

The deliberate bias that Elon Musk has introduced into Grok under the excuse of "preserving free speech" is well-documented.

And of course we all know that Trump is continually using AI to promote and aggrandise himself.

Well said. And would you now acknowledge that jt512 is correct?
Did I even once say he wasn't? I said that AI cannot be trusted because it has no way of knowing what it says is true or not. In the Frankfurtian sense, it is a BS generator.
 
Where does this come from? This thread is the first time I've ever encountered the notion that AI is "right wing" adjacent.
it's adjacent to the bias of the training set.
the training set comes from Social Media, which just by volume is biased right wing.

it's the same as the "criminal facial recognition software" trained on mugshots that is biased against people of color because cops are more likely to investigate and arrest them, not because they are inherently more criminal.
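to spell out the mechanism with a minimal synthetic sketch (all numbers invented; sklearn logistic regression stands in for the real software): give two groups identical true offence rates, police one group three times as heavily, and train on "was arrested" labels. the model dutifully learns the policing and reports it as criminality:

# Synthetic sketch, invented numbers: equal true offence rates, unequal
# policing. A model trained on arrest labels learns the policing intensity
# and reports it as if it were criminality.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, n)      # two groups of roughly equal size
offended = rng.random(n) < 0.10    # identical 10% base rate for both groups

# Group 1 is investigated 3x as heavily, so its offenders end up
# in the "mugshot" training set far more often.
p_arrest = np.where(group == 1, 0.9, 0.3)
arrested = offended & (rng.random(n) < p_arrest)

model = LogisticRegression().fit(group.reshape(-1, 1), arrested.astype(int))
probs = model.predict_proba([[0], [1]])[:, 1]
print(f"predicted risk: group 0 = {probs[0]:.3f}, group 1 = {probs[1]:.3f}")
# Roughly 0.03 vs 0.09: a 3x gap that reflects policing, not behaviour.

nothing in the code is "racist"; the bias comes entirely from which cases make it into the training data.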
 
It isn't, inherently. But radical righties love the validation it gives them and their ideas, because AI will never disagree with them. Even when it's wrong, and you tell it it's wrong, it happily says "that's right" and flatters you again. Right wing authoritarians love being flattered. They love being surrounded by sycophants and yes-people. Just look at those frankly embarrassing so-called Cabinet meetings which are nothing more than televised public grovelling to the Lord and Master.

And articles about rightists and AI aren't hard to come by, if you care to look for them. Which I did for you. You're welcome.

The deliberate bias that Elon Musk has introduced into Grok under the excuse of "preserving free speech" is well-documented.

And of course we all know that Trump is continually using AI to promote and aggrandise himself.


Did I even once say he wasn't? I said that AI cannot be trusted because it has no way of knowing what it says is true or not. In the Frankfurtian sense, it is a BS generator.
Your search results and mine are curiously different. I intentionally posed my question in a neutral manner: "Is ChatGPT right- or left-wing biased?" Anyone can ask Google the same question and reproduce my results. Did you perform your internet search in the same content-neutral manner? Why, unlike me, did you not disclose the exact wording of your query? Was that, perhaps, because you asked a leading question? My results are reproducible from the query I posted; what query would reproduce yours?
 
Your search results and mine are curiously different. I intentionally posed my question in a neutral manner. I asked, "Is ChatGPT right- or left-wing biased?" Anyone can ask Google the same question and reproduce my results. I wonder, did you perform your internet search in the same content-neutral manner? Why, unlike me, did you not disclose the exact wording of your internet search? Was that, perhaps, because you asked a leading question? Inquiring minds want to know.

Google bubbles and tailors your search results according to a multiplicity of parameters, including your search and browser history. You have, of course, been looking at a lot of right-wing sites in the past, and Google knows that, so it helpfully shows you what it "thinks" you want to see.

My top result when I did the exact search you did (I literally copy-pasted your search text) was this:


This paper concluded that ChatGPT actually had less political bias than previous studies reported. But Google knows that I hang out on left-leaning sites and bubbles my search into that kind of topic. Different people get different search results. Google even helpfully explains why:


So you can't even assume that a basic Google search is free of bias towards what you habitually read and see on the internet. Google has been doing this for literally over a decade, and it's a major pet peeve of mine that people still think that it doesn't.
 
Google bubbles and tailors your search results according to a multiplicity of parameters, including your search and browser history. You have, of course, been looking at a lot of right-wing sites in the past, and Google knows that, so it helpfully shows you what it "thinks" you want to see.

My top result when I did the exact search you did (I literally copy-pasted your search text) was this:


This paper concluded that ChatGPT actually had less political bias than previous studies reported. But Google knows that I hang out on left-leaning sites and bubbles my search into that kind of topic. Different people get different search results. Google even helpfully explains why:


So you can't even assume that a basic Google search is free of bias towards what you habitually read and see on the internet. Google has been doing this for literally over a decade, and it's a major pet peeve of mine that people still think that it doesn't.
Interesting. Post your top 10 results, so we can compare them fairly.
 
you should probably have a decent grasp of roughly how these algorithms work by now if you've been on the internet for any length of time. you can see for yourself if you do a google search and then do one logged out on a different device. are you logged into youtube through google? do you have your facebook through google? or your reddit, or whatever you get up to?

anyway, you should probably check out who some of these people are and what they believe. half of these tech guys are insane weirdos. and they love to put their thumb on the scale.

if you can't see that, i don't know what to tell you. you're being played.
 
Last I heard, Google tracks some 40 or so parameters if you're logged off and up to 90 if you're logged on (ETA: one of the below articles says 200). However, since The Algorithm is proprietary technology, Google doesn't allow people to examine it. Unfortunately, though the initial intention was good, its effect is to force people into filter bubbles where they see only the information that reinforces their pre-existing beliefs and are not exposed to content that challenges them.

And seriously, this behaviour has been documented for years now. Personalised search was introduced in 2005 and rolled out to all users, logged in or not, by 2009. How can anyone who is on the internet not know it?


That having been said, perhaps @jt512 is one of today's Lucky 10,000. Congrats!
 
it's adjacent to the bias of the training set.
the training set comes from Social Media, which just by volume is biased right wing.

it's the same as the "criminal facial recognition software" trained on mugshots that is biased against people of color because cops are more likely to investigate and arrest them, not because they are inherently more criminal.
Sure, if you train AI on biased information, you'll get biased output. This isn't a right-wing or left-wing thing. The idea that the right wing has some infatuation with AI is completely made up. But, well done everybody, the tangent on AI has fully hijacked this thread.
 
Sure, if you train AI on biased information, you'll get biased output. This isn't a right-wing or left-wing thing. The idea that the right wing has some infatuation with AI is completely made up. But, well done everybody, the tangent on AI has fully hijacked this thread.
I've already asked for another split.
 
The tangent about AI was an effort to ignore that I was correct that the DEI requirements were a result of the EOs. The tangent about the EOs was an effort to ignore that I was correct that the DEI requirements were mandatory. The tangent about whether the DEI requirements were mandatory was an effort to ignore that I was correct that DEI is just rebranded affirmative action, and not colorblind meritocracy à la Martin Luther King as Wareyin absurdly claimed.
Your attempt to crown yourself "correct" is contradicted by the source you introduced. The facts are not on your side.

You claimed that "the presidential executive orders that required all federal science funding agencies to implement DEI requirements..."

Yet, the very PIER plan timeline you supplied proves your claim is fundamentally false:

First, on the "mandate" itself, your source confirmed the PIER plan was created by the DOE only 18–24 months after the EO because the DOE merely "interpreted" the order. This proves the EO was a broad policy, and not a mandate.

Second, on scope, you said the requirement hit "all federal science funding agencies." Your own source limits the requirement to the DOE Office of Science and even notes it was not adopted uniformly within the DOE itself. If a "requirement" is selective and not applied uniformly across all agencies, it is simply not a mandate or requirement.

Your ChatGPT-written essay dismantles the core claim you are defending. You were wrong on both the mandate and the scope of the policy.
 
The EO feint again, plus a new misunderstanding of the scope of the EO and of the question I asked ChatGPT.
 
