
Merged Artificial Intelligence

Puppycow

Penultimate Amazing
Joined
Jan 9, 2003
Messages
31,949
Location
Yokohama, Japan
I think we need a general thread on the topic. There are some others but they are narrower in scope.

Google claims new Gemini AI 'thinks more carefully'

Google has released an artificial intelligence (AI) model which it claims has advanced "reasoning capabilities" to "think more carefully" when answering hard questions.
Gemini was tested on its problem-solving and knowledge in 57 subject areas including maths and humanities.
Google is making some big claims for its new model, describing it as its "most capable" yet and has suggested it can outperform human experts in a range of intelligence tests.

Gemini can both recognise and generate text, images and audio - but is not a product in its own right.

Instead, it is what is known as a foundational model, meaning it will be integrated into Google's existing tools, including search and Bard.
[Google] claims the most powerful version of Gemini outperforms OpenAI's platform GPT-4 - which drives ChatGPT - on 30 of the 32 widely-used academic benchmarks.

However, a new, more powerful version of the OpenAI software is due to be released next year, with chief executive Sam Altman saying the firm's new products would make its current ones look like "a quaint relative".

So, a number of interesting claims here.
1) The new AI can "outperform human experts in a range of intelligence tests"
2) It can also outperform GPT-4 on 30 out of 32 academic benchmarks
3) It can recognise and generate text, images and audio: not merely a chatbot

Is this, finally "AGI"? (I'm not asking if it's sentient, that's a separate question)

What is AGI (artificial general intelligence)?

An artificial general intelligence (AGI) is a hypothetical type of intelligent agent.[1] If realized, an AGI could learn to accomplish any intellectual task that human beings or animals can perform.[2][3] Alternatively, AGI has been defined as an autonomous system that surpasses human capabilities in the majority of economically valuable tasks.[4] Creating AGI is a primary goal of some artificial intelligence research and of companies such as OpenAI,[4] DeepMind, and Anthropic. AGI is a common topic in science fiction and futures studies.

I'll say that it still needs to be demonstrated that it can meet either of those definitions. But even if it could "surpass human capabilities" in just a subset of economically valuable tasks, it would be quite interesting.
 
What I am interested in is how this A.I. reaches its conclusions - is it just parsing the input more carefully and checking more references before copy-pasting something someone else already wrote?

Or can it really do advanced versions of "If A=B and B=C then..." and not just by brute force?
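
Just as a toy illustration of the kind of multi-step deduction I mean (my own sketch in Python - it says nothing about how Gemini is actually implemented): chaining equalities symbolically, rather than enumerating every possibility, looks like this.

```python
# Toy sketch: deriving "A = E" from a chain of pairwise equalities
# symbolically (union-find), rather than by brute-force enumeration.
# Purely illustrative - nothing to do with Gemini's internals.

def equality_chainer(pairs):
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for a, b in pairs:
        parent[find(a)] = find(b)  # merge the two equivalence classes
    return find

same = equality_chainer([("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")])
print(same("A") == same("E"))  # True: A = E follows by transitivity
```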
 
I'm noticing some new types of challenges for ReCaptcha and the like, perhaps to better combat AI solving the puzzles.
 
So, a number of interesting claims here.
1) The new AI can "outperform human experts in a range of intelligence tests"
2) It can also outperform GPT-4 on 30 out of 32 academic benchmarks
3) It can recognise and generate text, images and audio: not merely a chatbot

Is this, finally "AGI"? (I'm not asking if it's sentient, that's a separate question)

No, this isn't AGI, and while some would disagree, I do think it's another step toward it. Will we eventually get to AGI with just larger models, bigger LLMs? I'm not really sure, but I'll give a definite maybe.

On the topic of performing better than GPT-4 on 30 of 32 metrics, it seems it only performed marginally better, and there's some reason to suspect that it was tailored to those specific tasks.

So, in all, it seems like this is another step forward, but not a huge one.
 
I'm fascinated to see that lawsuits are springing up where AI group 1 accuses AI group 2 of using AI 1's output to train AI 2 (i.e. copying their work)...

So... It's OK for you to rip off everything on the internet, but not OK when someone else does it to you?

Hypocrisy much?
 
I'm fascinated to see that lawsuits are springing up where AI group 1 accuses AI group 2 of using AI 1's output to train AI 2 (i.e. copying their work)...

Link please? I don't doubt it, but I don't know what you are referring to. What are the actual names of these "AI groups"?
 
I'm noticing some new types of challenges for ReCaptcha and the like, perhaps to better combat AI solving the puzzles.

I've read that bots are now better at passing Captchas than humans. At this point it's just a worthless annoyance for people.
 
A new paper published in Nature this month, entitled Discovery of a structural class of antibiotics with explainable deep learning, describes how a deep learning model was used to discover a new class of antibiotics.

https://www.nature.com/articles/s41586-023-06887-8
The discovery of novel structural classes of antibiotics is urgently needed to address the ongoing antibiotic resistance crisis [1-9]. Deep learning approaches have aided in exploring chemical spaces [1,10-15]; these typically use black box models and do not provide chemical insights. Here we reasoned that the chemical substructures associated with antibiotic activity learned by neural network models can be identified and used to predict structural classes of antibiotics. We tested this hypothesis by developing an explainable, substructure-based approach for the efficient, deep learning-guided exploration of chemical spaces. We determined the antibiotic activities and human cell cytotoxicity profiles of 39,312 compounds and applied ensembles of graph neural networks to predict antibiotic activity and cytotoxicity for 12,076,365 compounds. Using explainable graph algorithms, we identified substructure-based rationales for compounds with high predicted antibiotic activity and low predicted cytotoxicity. We empirically tested 283 compounds and found that compounds exhibiting antibiotic activity against Staphylococcus aureus were enriched in putative structural classes arising from rationales. Of these structural classes of compounds, one is selective against methicillin-resistant S. aureus (MRSA) and vancomycin-resistant enterococci, evades substantial resistance, and reduces bacterial titres in mouse models of MRSA skin and systemic thigh infection. Our approach enables the deep learning-guided discovery of structural classes of antibiotics and demonstrates that machine learning models in drug discovery can be explainable, providing insights into the chemical substructures that underlie selective antibiotic activity.
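
In outline, the workflow the abstract describes can be sketched like this (a toy, runnable Python sketch of my own: the "models" are trivial stand-ins for the paper's ensembles of graph neural networks, and every name and threshold below is invented for illustration).

```python
# Toy sketch of the screening loop the abstract describes. The "models"
# here are trivial stand-ins; the real work trains ensembles of graph
# neural networks on ~39k measured compounds, then predicts activity and
# cytotoxicity for ~12M more. Names and thresholds are invented.

ACTIVITY_THRESHOLD = 0.8   # assumed cut-offs, purely illustrative
TOXICITY_THRESHOLD = 0.2

def predict_activity(compound):      # stand-in for the activity GNN ensemble
    return compound["activity_score"]

def predict_cytotoxicity(compound):  # stand-in for the cytotoxicity ensemble
    return compound["toxicity_score"]

def extract_rationale(compound):     # stand-in for the explainable-graph step
    return compound["key_substructure"]

def screen(library):
    hits = {}
    for compound in library:
        if (predict_activity(compound) > ACTIVITY_THRESHOLD
                and predict_cytotoxicity(compound) < TOXICITY_THRESHOLD):
            # Group predicted hits by the substructure "rationale" driving
            # the prediction; shared rationales suggest structural classes.
            hits.setdefault(extract_rationale(compound), []).append(compound["name"])
    return hits

library = [
    {"name": "cmpd-1", "activity_score": 0.9, "toxicity_score": 0.1,
     "key_substructure": "N-aryl amide"},
    {"name": "cmpd-2", "activity_score": 0.5, "toxicity_score": 0.1,
     "key_substructure": "quinoline"},
]
print(screen(library))  # {'N-aryl amide': ['cmpd-1']}
```

The promising classes are then tested empirically in the lab, which is where the MRSA result came from.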
 
A new paper published in Nature this month, entitled Discovery of a structural class of antibiotics with explainable deep learning, describes how a deep learning model was used to discover a new class of antibiotics.

Explainable deep learning: concepts, methods, and new developments
Explainable AI (XAI) is an emerging research field bringing transparency to highly complex and opaque machine learning (ML) models. In recent years, various techniques have been proposed to explain and understand ML models, which have been previously widely considered black boxes (e.g., deep neural networks), and verify their predictions. Surprisingly, the prediction strategies of these models at times turned out to be somehow flawed and not aligned with human intuition, e.g., due to biases or spurious correlations in the training data.
Surprisingly? I would have thought it was expected.
 
I thought THE WHOLE POINT of A.I. is to have something that thinks differently - we already have billions of human-like minds.

Of course, if our brains are Turing Complete, they and a machine will always think alike on a basic level.
 
I thought THE WHOLE POINT of A.I. is to have something that thinks differently - we already have billions of human-like minds.

It could be a benefit, but it's not the whole point. If you had an artificial mind that worked like a human's, it could drive a car, run a robot that cleans your room, work in a factory, and so on. Pretty valuable, in spite of not being capable of insights humans can't arrive at or doing jobs humans can't do.

And it would have other benefits, like being copyable and compatible with hardware upgrades, so it could run faster on newer hardware, etc.
 

I'm not sure if this is meant as a criticism of the paper.

They discovered a new class of antibiotic, and that antibiotic was effective against MRSA in mice. ("Mouse models" doesn't mean simulated mice; it means actual mice whose skin was infected with MRSA and then treated with the new antibiotic, as a model of what would happen if humans were infected with MRSA and then treated with it.) In general there are plenty of things that work in mice but not humans, but the effectiveness of a new antibiotic seems unlikely to be one of them.

New classes of antibiotics that are effective against bacterial strains that have developed resistance to our current crop of antibiotics are sorely needed. As far as I can see this is extremely good news, even if it's just a one-off. If it signals the beginning of a series of discoveries using the same technique, it's potentially revolutionary.
 
That's very cool, the antibiotic thing! And if it's been done once, there's no need to assume it's a one-off. Chances are people can hone the technique further, so that it becomes a standard part of research going forward.

This thing's moving forward very fast, if it's already started to help discover new antibiotics! At this rate, we might be living in an unrecognizably different sci-fi future where AI is part of everything, not at some far-off future date but actually well within our lifetimes!
 
I'm not sure if this is meant as a criticism of the paper.
Not at all. The paper shows how effective AI can be when used properly - as a tool designed for the job, not some kind of general purpose 'intelligence' that people hope will magically appear if they throw enough data at it.

My comment was on previous researchers being 'surprised' that this lazy attempt to get more out than they put in backfired.

Meanwhile I am seeing more and more articles using AI-generated images to illustrate them - totally worthless, as they impart no information. It won't be long before there will be no point having images turned on in the web browser. Such progress! Might as well go back to using 1995 tech...
 
Meanwhile I am seeing more and more articles using AI-generated images to illustrate them - totally worthless, as they impart no information.

How is an AI-generated image different from an illustration or a stock photo?
 
Dwarkesh Patel has a good post today about the question (often raised in this thread) of whether or not scaling alone can lead to AGI. That is, will bigger models with more compute but basically the same architecture continue to show gains in ability, or will those gains level off?

He structures it as a dialogue between a believer and a skeptic, both of whose arguments are well thought out.

Anyway, here's the link:
https://www.dwarkeshpatel.com/p/will-scaling-work
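
Much of the believer/skeptic argument turns on whether the empirical scaling laws keep holding. As a concrete anchor, here's a small Python sketch of the Chinchilla-style loss curve; the constants are the headline fit published by Hoffmann et al. (2022), and the 20-tokens-per-parameter rule is that paper's compute-optimal heuristic, so treat the numbers as illustrative rather than gospel.

```python
# Chinchilla-style scaling law (Hoffmann et al., 2022):
#   loss(N, D) = E + A / N^alpha + B / D^beta
# where N = parameters and D = training tokens. Constants are the
# paper's published fit; the point is the shape: gains continue but shrink.

E, A, B = 1.69, 406.4, 410.7
alpha, beta = 0.34, 0.28

def predicted_loss(n_params, n_tokens):
    return E + A / n_params**alpha + B / n_tokens**beta

for n in [1e9, 1e10, 1e11, 1e12]:       # params, with tokens ~20x params
    print(f"N={n:.0e}: predicted loss ~ {predicted_loss(n, 20 * n):.3f}")
```

Each 10x in scale still buys a loss reduction, just a smaller one - which is exactly the curve both sides of the dialogue are arguing over.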
 
Dwarkesh Patel has a good post today about the question (often raised in this thread) of whether or not scaling alone can lead to AGI.

That was interesting. Thanks.

He links to this paper, which says that AI has been used to solve some "open problems" in mathematics:

Mathematical discoveries from program search with large language models
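
From a skim, the method there ("FunSearch") is an evolutionary loop: an LLM proposes modified programs, an automatic evaluator scores them, and improvements are kept and fed back. A toy, runnable caricature (the "LLM" below is just a random mutator stub, and the objective is made up):

```python
# Toy caricature of LLM-guided program search (FunSearch-style):
# propose variations, score them automatically, keep improvements.
import random

def llm_propose(parent):                   # stand-in for the real LLM call
    mutated = parent[:]
    mutated[random.randrange(len(mutated))] = random.uniform(-1, 1)
    return mutated

def score(candidate):                      # stand-in for the problem evaluator
    return -sum(x * x for x in candidate)  # toy objective: drive values to 0

best = [random.uniform(-1, 1) for _ in range(5)]
for _ in range(500):
    candidate = llm_propose(best)
    if score(candidate) > score(best):     # keep only scoring improvements
        best = candidate
print(round(-score(best), 4))              # shrinks toward 0 as search proceeds
```

The real system searches over actual program text, and it found constructions beating known bounds on problems like cap sets and online bin packing.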
 
I thought THE WHOLE POINT of A.I. is to have something that thinks differently - we already have billions of human-like minds.

No, the field of AI has been trying to replicate human "thought" since it first began, going right back to when clockwork was the most sophisticated technology we had. The idea was to make it faster and more reliable than human intelligence.

I think it's only comparatively recently that we started to consider making AI which is meant to be unlike human "thought".
 
How is an AI-generated image different from an illustration or a stock photo?

Or, for those without the training, talent, and time to produce artwork to a commercial level themselves, how is it different from commissioning a commercial artist to produce an illustration for an article, etc.?

I'd say, to a good level of accuracy, that outside the specialist niche of art training and art education, 99% of all books, articles, and videos are not illustrated with artwork created by their authors.

ETA: I've started to use AI to generate reference work for my own artwork, especially for composition, now that I know my difficulties are not a lack of talent but a quirk of my atypical neurology. For those not into creating their own artworks: using references is a typical approach, one that is taught in pretty much every art course. Say you are drawing a figure and you need to draw some feet: grabbing a ton of other artists' attempts at drawing feet, grabbing photos of feet, even looking at your own foot is all part of a normal artistic process.
 
I thought THE WHOLE POINT of A.I. is to have something that thinks differently - we already have billions of human-like minds.
I'm pretty sure getting something that thinks differently is a curiosity for future researchers to explore.

Computers already think differently from humans. That's why they're so much better at rote, brute-force cognitive tasks. What we really want from AI is something that is good at rote tasks, but also able to apply abstract values and intuitive leaps in a way consistent with our expectations about how humans reach conclusions.

We don't want a lawyer-bot that understands and applies the law differently from how humans would do it. We want a lawyer-bot that practices the same kind of law as the best human lawyers, plus has a truly encyclopedic knowledge of case law, statute law, common law, etc.

ChatGPT doesn't think like a human thinks. That's why it hallucinates so much. An AI that thinks differently from humans ends up being functionally the same as a schizophrenic - or a psychopath.
 
That probably happened within the first hours of captchas being used. It's nothing new.
Not only that, but the captchas themselves are used to produce training data, and that's the primary reason for their continued existence. They haven't worked particularly well for stopping bots in a long time, it's mostly about duping everybody into doing free labor these days.
 
Do Androids Know They're Only Dreaming of Electric Sheep?

A riff on the title of one of my favourite novels of all time, but this isn't about the novel; it's about a paper on hallucination in LLMs. It really is a most apt title for such a paper: https://arxiv.org/abs/2312.17249

The research into how the new AIs do what they do is as fascinating as the LLMs themselves.
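
As I understand it, the broad family of technique in papers like this is probing: train a small classifier on the model's internal states to predict whether a given output is hallucinated. A toy, runnable sketch (the "hidden states" here are synthetic vectors, and the specifics are my own illustration, not necessarily the paper's exact setup):

```python
# Toy sketch of hidden-state probing for hallucination detection:
# a simple linear classifier over internal activations. The vectors
# below are synthetic stand-ins for real transformer hidden states.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
dim, n = 64, 500

grounded = rng.normal(0.0, 1.0, (n, dim))      # states from faithful outputs
hallucinated = rng.normal(0.3, 1.0, (n, dim))  # pretend hallucination shifts them

X = np.vstack([grounded, hallucinated])
y = np.array([0] * n + [1] * n)                # 1 = hallucinated

probe = LogisticRegression(max_iter=1000).fit(X, y)
print(f"probe accuracy: {probe.score(X, y):.2f}")  # well above chance
```

If signals like that generalize, you get a cheap hallucination detector without retraining the underlying model.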
 
