
Merged Artificial Intelligence

If a simulation of understanding is sufficiently accurate, can we not call it "understanding"? Why does AI have to understand things in the same way we do in order to be called legitimate?

We can if it works.

But as we have seen, it's not a good idea to let an LLM write something of legal consequence.
 
Nor would it be a good idea to let a child of 12 write something of legal consequence. And LLMs aren't even that old yet.

And no one would market a 12-year-old with no special training as able to pass a bar exam - but LLMs have been, while being hyped by their creators.

There is a weird "non-artificial-intelligence" thing going on here, where instead of trying to make AI smarter, some are trying to make the comparison person dumber.

Yes, I agree that a blind paraplegic in a coma is less skilled at driving than a Tesla.
 
And no one would market a 12-year-old with no special training as able to pass a bar exam - but LLMs have been, while being hyped by their creators.
Nobody should be surprised that LLMs are good at tests of rote memorization of a substantial corpus.

Practicing law requires two things of humans: abstract, intuitive, creative reasoning; and comprehensive knowledge of the rules of their profession. Humans are much better at the former than at the latter. That's why the bar exam is so hard - because it tests the thing humans aren't good at, and have to really work on. But being better at rote memorization and regurgitation doesn't make LLMs competent lawyers.
 
This story was surfaced for me today: https://www.cnbc.com/amp/2024/03/06...ing-ai-models-for-copyright-infringement.html

It's about a few of the LLM AIs reproducing copyrighted text. It could be the article - it wouldn't be the first time a reporter got the wrong end of the stick - but in this case I would have thought it was good news that GPT-4 could return copyrighted text? They apparently asked for the first lines of books and for extracts from books, all of which falls under fair use; for example, there are shedloads of YouTube channels that will give you the first line spoken by every character in a given TV series, all entirely legal even though the series is copyrighted. If you were the type to use AI assistants, you could quite legitimately ask for the first line of a novel you want to write about, or "what's that big speech the protagonist gives about free speech?" - you'd want the "AI assistant" to be able to do your research.
 
Not sure if this is the best thread for this or if it should go in one of the threads about Elon, but this happened:

Elon Musk sues OpenAI and claims it has achieved AGI

Late Thursday night, Elon Musk filed a lawsuit against OpenAI, claiming the company and its leadership breached the firm’s “founding agreement” that stipulated OpenAI would develop artificial general intelligence for the benefit of all humanity. The lawsuit, first reported by Courthouse News, alleges that OpenAI’s for-profit model, partnership with Microsoft, and the March 2023 release of the GPT-4 model “set the founding agreement aflame” because the GPT-4 model has already reached the threshold for artificial general intelligence (AGI).
 
If you were the type to use AI assistants, you could quite legitimately ask for the first line of a novel you want to write about ... you'd want the "AI assistant" to be able to do your research.

I would not trust ChatGPT to give accurate answers. I asked:

Who are the people mentioned in the book "The 100: A Ranking of the Most Influential Persons in History"?

The first few names were correct, but the rest were wrong. Some were out of order.

Ref: https://chat.openai.com/c/a4379aa3-6993-466b-af52-c3df8376682f
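
For what it's worth, that kind of spot check can also be run against the API rather than the web chat. Here is a minimal sketch, assuming the official "openai" Python client and an API key in the environment; the model name and the exact prompt wording are illustrative assumptions, not a record of what was actually asked above.

# Minimal sketch (an assumption, not what was actually run above): send the
# same factual-recall question through the OpenAI API using the official
# "openai" Python package, with OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

question = (
    'Who are the people mentioned in the book '
    '"The 100: A Ranking of the Most Influential Persons in History"? '
    "List the first ten in order."
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[{"role": "user", "content": question}],
    temperature=0,  # reduce sampling randomness so the test is repeatable
)

print(response.choices[0].message.content)
# The answer still has to be checked by hand against the book itself - a
# fluent, confident list can be partly wrong or out of order, as noted above.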
 
Is he asking for a judge to rule that GPT-4 is sentient?

Lol.

Here is the actual complaint:

https://www.courthousenews.com/wp-content/uploads/2024/02/musk-v-altman-openai-complaint-sf.pdf

If you are curious you can read it. I skimmed it.

If you go to page 34, there is a section called "PRAYER FOR RELIEF" in which his demands are listed. He is asking for, inter alia:
B. For a judicial determination that GPT-4 constitutes Artificial General Intelligence and is thereby outside the scope of OpenAI’s license to Microsoft;
C. For a judicial determination that Q* and/or other OpenAI next generation large language models in development constitute(s) Artificial General Intelligence and is/are outside the scope of OpenAI’s license to Microsoft;

"Sentience" is probably outside the scope of Artificial General Intelligence. You can have AGI that isn't sentient in principle, I think. Then we get into philosophical discussions about what "sentience" actually is and whether computers can have it.
 
The thing is that no one is claiming - except that poor deluded programmer from Google - that any of these are sentient or general AIs. As ever, I'm sure you could find a crank who will claim it, but there won't be one serious AI expert who would testify that ChatGPT is an AGI.

Plus, of course, there is the issue of his standing etc. even if it were. I'd suggest we take this to the general Musk business thread - it's nothing but a frivolous suit that will have no impact on AI going forward. I could see the likes of the NYT case being of more general interest - perhaps start a thread in "Trials...." for legal challenges to current AI methodologies? And keep this one for our amateur musings about the science and technology?
 
Does anybody have a definition of AGI? Can't the complaint be dismissed because nobody can know what AGI is without a clear definition?
 
Judges make legally binding determinations about things they know nothing about all the time.
 
Does anybody have a definition of AGI? Can't the complaint be dismissed because nobody can know what AGI is without a clear definition?

Yes, it is defined.

https://en.wikipedia.org/wiki/Artificial_general_intelligence

An artificial general intelligence (AGI) is a type of artificial intelligence (AI) that can perform as well or better than humans on a wide range of cognitive tasks,[1] as opposed to narrow AI, which is designed for specific tasks.[2]

It's what they call a "colorable" argument.

But, I'm not making a legal claim. Just talking about whether AGI has a definition. People have offered various definitions for it.

Probably the most relevant definition would be how OpenAI itself defines the word, but I don't know. I'm guessing that the bigger problem is that Musk doesn't have standing. Authors and artists might have better legal standing??
 
