
Merged Artificial Intelligence

Duolingo, the language-learning company, is using AI to generate exercises for students, but people have been noticing that the quality of the sentences it produces isn't as good as it was before.
 

Yes, I’m noticing their explanations for German grammar aren't helpful and are sometimes contradictory.
 
Some news about OpenAI:

For years, employees who left OpenAI consistently had their vested equity explicitly threatened - with confiscation, or with being blocked from selling it - and were given short deadlines to sign exit documents, or else. Those documents contained highly aggressive NDA and non-disparagement (and non-interference) clauses, including the NDA preventing anyone from revealing that these clauses existed.

No one knew about this until recently, because until Daniel Kokotajlo everyone signed, and then they could not talk about it. Then Daniel refused to sign, Kelsey Piper started reporting, and a lot came out.
 
Why are we not calling this "Imitation Intelligence"?

Reminds me of Loma Linda Foods' Wham.
A ham-ish meat substitute product.
The misnomer is the "intelligence" bit. A more accurate term for what's being used here is "large language model" - it's a model of language. If you ask it to look up some facts or solve an equation for you, it'll give you an answer that sounds good, because language is what it's modeling, but actually being correct is a coincidence, because it's neither a calculator nor an encyclopedia. In Google's case it sounds to me like they're using the AI to match queries with responses drawn from websites that resemble factual reporting. It's just that both satire and wacky conspiracy theories camouflage themselves as factual reporting, to different ends.
 

I think it would be more accurate to call the process "Random Phrase Generation". Any overlap with Truth or Reality is mostly coincidental.
 

If that were valid then the rate at which it gives correct answers to factual queries would be the same as chance. But of course, it's not.

It's not "mostly co-incidental" that it gives correct answers to factual queries much more often than it gives incorrect answers. Can your "Random Phrase Generation" model explain the rate at which it gives correct answers?

It's not good that it makes errors. But the fact that it makes errors doesn't mean its output is random. What's the error rate? That's the meaningful question here. My understanding is that the error rate is actually quite low (otherwise it wouldn't be scoring well on standardized tests), but still high enough that you should be double-checking any facts it gives you before relying on them.
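As a purely illustrative back-of-the-envelope check (the 80-out-of-100 score below is a made-up number, not a real benchmark result), here's roughly why "mostly coincidental" can't account for that kind of accuracy: a pure guesser on 4-option multiple-choice questions essentially never reaches it.

```python
# Back-of-the-envelope sketch with made-up numbers: how likely is a purely
# random guesser to score at least as well as some observed result on
# 4-option multiple-choice questions?
from math import comb

def chance_of_at_least(correct, total, p_guess=0.25):
    """Probability that random guessing gets at least `correct` of `total` right."""
    return sum(comb(total, k) * p_guess**k * (1 - p_guess)**(total - k)
               for k in range(correct, total + 1))

# Hypothetical score: 80 correct out of 100 questions.
print(chance_of_at_least(80, 100))  # on the order of 1e-30 - not plausibly chance
```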
 

No interest in this story?

OpenAI has been forcing (with the threat of losing vested equity) all employees who leave the company to sign a non-disparagement agreement, which basically says that they agree not to say anything bad about the company after they leave. They also get them to sign an NDA about having signed this agreement at all.

According to their original contract they should have a 60-day period to review this agreement before signing, but the company gave them 7 days, and when an employee asked for 2 weeks instead so that he could have a lawyer review the agreement, they refused.

Sam Altman claims that this was all just a mistake with a poorly worded exit document, and that he had no idea this was going on. But looking at the document trail that seems extremely implausible.

This only came to light because one former employee refused to sign the non-disparagement agreement (and accompanying NDA), and so was free to talk about it. In doing so he gave up a very large sum of money.

After this came to light, OpenAI claimed that they won't enforce these agreements. Whether they'll live up to that claim remains to be seen, and since their statements aren't legally binding, former employees would be justified in remaining wary of making any disparaging statements about the company.
 

Thanks for the extra detail.

Has OpenAI said they’ll stop making people sign these agreements? Or are they just paying lip service to these complaints?
 
If that were valid then the rate at which it gives correct answers to factual queries would be the same as chance. But of course, it's not.

It's not "mostly co-incidental" that it gives correct answers to factual queries much more often than it gives incorrect answers. Can your "Random Phrase Generation" model explain the rate at which it gives correct answers?

It's not good that it makes errors. But the fact that it makes errors doesn't mean its output is random. What's the error rate? That's the meaningful question here. My understanding is that the error rate is actually quite low (otherwise it wouldn't be scoring well on standardized tests), but still high enough that you should be double-checking any facts it gives you before relying on them.

I don't disagree, but if it produces wrong answers sometimes and you don't know when, how can it ever be trusted? Even a stopped clock is right twice a day (or at least that used to be true).

PS. The "Random Phrase Generation" was a bit hyperbole. However, you could check out https://phrasegenerator.com/ if you'd like some words of wisdom.
 

Everything produces wrong answers sometimes and you don't know when. If the rate is 1/10^10 you should be pretty confident in its answers. If it's 1/1000 you should be pretty cautious.

If the errors are systematic and you know the way in which they are skewed then you might trust some class of answers (or answers to some class of questions), but not others. If you don't know, though, then you're just left with the base rate.

Whether or not to trust the answers at some rate of error is in part based on your risk tolerance. If you're asking a question about some fact that you're curious about but which has no real world impact, then a relatively high error rate is tolerable. If it's a life or death question, then clearly you need a much lower error rate, and should probably mitigate the errors of the LLM with additional research.
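To make that concrete, here's a minimal sketch with made-up costs and error rates (nothing here is a measured figure), showing how the same error rate can justify trusting the answer in a low-stakes case and double-checking it in a high-stakes one.

```python
# Minimal sketch with hypothetical numbers: whether to trust an answer outright
# or verify it first depends on the error rate and on how costly an error is.

def expected_costs(error_rate, cost_of_error, cost_of_checking):
    """Return (expected cost of trusting, cost of verifying first)."""
    trust = error_rate * cost_of_error   # you eat the error cost when it happens
    verify = cost_of_checking            # assume verification catches the error
    return trust, verify

# Idle curiosity: a wrong answer costs almost nothing, checking costs effort.
print(expected_costs(error_rate=0.05, cost_of_error=1, cost_of_checking=5))
# -> (0.05, 5): trusting is cheaper, so a fairly high error rate is tolerable.

# High stakes: same error rate, but an error is catastrophic.
print(expected_costs(error_rate=0.05, cost_of_error=1_000_000, cost_of_checking=5))
# -> (50000.0, 5): always verify (or do additional research) first.
```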
 
Big corporation seeks to protect its trade secrets. Nothing new here.

It's not just about protecting trade secrets, though. As I understand it, the employees already signed an agreement relating to trade secrets when they were initially employed. This is a non-disparagement agreement, which just means they can't say anything bad about the company. It isn't limited to disclosing trade secrets.
 
Is that even legal, a non-disparagement contract?


eta: Yeah, probably is. The Johnny Depp thing.
 
I've been subject to them several times, most recently for the Australian Public Service.

Technically it stated that I will not make public comment intended to bring the government or the relevant minister or department into disrepute. I'm out from under that now, but old habits die hard.
 
