
Worried about Artificial Intelligence?

I think how the results are arrived at also matters. If NASA carefully calculates the correct spot for a spacecraft's descent, and Glumbo the Chimp throws a dart at the globe, and they both hit on the same spot, the results are the same. Does it matter? It does to Glumbo's employment prospects, and likely also to the peace of mind of the returning astronauts and their insurance companies.

Are the results consistently the same?

If so, then no, "how the gears are turning behind the scenes" doesn't matter in this context.
 
Long story short: we've been replacing/augmenting the "Doers" with machines basically since we became human, and now we're having this big moral freakout over the possibility of the "Thinkers" being replaced/augmented. I don't know if I buy it as much as others do.

The world didn't end when Steve no longer had to stand on an assembly line screwing in the screw that holds the rearview mirror to the door of the Corvette, and a robot arm started doing it. Sure, we worried about what Steve was gonna do now in a work sense, but we saw it as progress.

I question whether it's really all that different because an algorithm can spit out a basic commercial jingle in 10 seconds instead of having Steve do it.

If there is value to "the high arts" they'll survive based on their inherent worth, and if there isn't, oh well. If it can be destroyed just by having competition, I'm not gonna weep for it, provided, as I said, at the end of the day we get the same functional things. If the Rembrandts and Hemingways of the future can't compete with an algorithm in a double-blind test, I don't know what problem we can be expected to solve there.

There is an unpleasant air of "Okay, I mean, it was one thing when blue collar workers got replaced with machines, but we're different, we're ARTISTS!" to some of this.
 
Long story short: we've been replacing/augmenting the "Doers" with machines basically since we became human, and now we're having this big moral freakout over the possibility of the "Thinkers" being replaced/augmented. I don't know if I buy it as much as others do.

The world didn't end when Steve no longer had to stand on an assembly line screwing in the screw that holds the rearview mirror to the door of the Corvette, and a robot arm started doing it. Sure, we worried about what Steve was gonna do now in a work sense, but we saw it as progress.

I question whether it's really all that different because an algorithm can spit out a basic commercial jingle in 10 seconds instead of having Steve do it.

If there is value to "the high arts" they'll survive based on their inherent worth, and if there isn't, oh well. If it can be destroyed just by having competition, I'm not gonna weep for it, provided, as I said, at the end of the day we get the same functional things. If the Rembrandts and Hemingways of the future can't compete with an algorithm in a double-blind test, I don't know what problem we can be expected to solve there.

There is an unpleasant air of "Okay, I mean, it was one thing when blue collar workers got replaced with machines, but we're different, we're ARTISTS!" to some of this.

The problem is that a machine in a factory won't do the job twice as well next year, or 1,000 times as well in 10 years. AIs might. There's also never a danger that a machine in a factory will build another, better machine in a factory. ChatGPT could write poor code, limited to about 2,400 words; a year later, ChatGPT Turbo writes much better code, limited to around 150,000 words. The time when AI will design AIs is getting near.
Artists and creators in general are indeed next in line. Artist and factory worker are the same in a sense; AI programmer is the job we have to wonder about.
Or probably it's all the same. It's all just "progress". It is exponential; it always was. But exponential growth has this feature: for a long time it looks like nothing is happening, and then suddenly it's too late. Stone to bronze, bronze to steel, steel to steam, steam to electricity, nukes, computers, the internet. Revolutions now come in years, not decades. In AI they come in months.
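
The "looks like nothing is happening, then suddenly it's too late" shape of exponential growth is easy to see with a little arithmetic. A minimal sketch in Python (the starting value, threshold, and doubling rate are all made-up numbers for illustration, not claims about any real AI system):

```python
# A toy illustration with made-up numbers: exponential growth looks
# flat for a long time, then blows past any fixed threshold almost
# all at once.

capability = 1.0      # arbitrary starting "capability" units
threshold = 1000.0    # the level at which we'd say "it's too late"

for year in range(1, 11):
    capability *= 2.0   # assume capability doubles every year
    print(f"year {year:2d}: capability = {capability:7.1f}")

# At year 5, capability is 32 -- barely 3% of the threshold.
# At year 10, it is 1024 -- past the threshold, and half of all the
# growth happened in that final year.
```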

Mind you, I'm not saying something should be done to stop it. I'm saying nothing can be done to stop it.
 
Long story short: we've been replacing/augmenting the "Doers" with machines basically since we became human, and now we're having this big moral freakout over the possibility of the "Thinkers" being replaced/augmented. I don't know if I buy it as much as others do.
Really have no idea how you get "moral panic" out of "No, we don't have anything like HAL yet."

Dr.Sid said:
Do people do anything more?
Yes, people are capable of understanding what a color or a prime number is, conveying that understanding to others, attaching meaning to the referents of language, etc.
 
Really have no idea how you get "moral panic" out of "No, we don't have anything like HAL yet."

Jesus ******* Christ, whatever. The handwringing, the worrying, the questioning, whatever you want to call it. This. This discussion we're having right now. Whatever it is we're doing NOW, we didn't do it then.

Don't worry, they haven't created an AI that's intentionally obtuse yet, so arguing on the internet is still safe.
 
Jesus ******* Christ, whatever. The handwringing, the worrying, the questioning, whatever you want to call it. This. This discussion we're having right now. Whatever it is we're doing NOW, we didn't do it then.
Questioning seems kind of crucial to actually understanding what's happening here. There's nothing "intentionally obtuse" about not letting you shut that down by decree.

And if you think we didn't do it then....
 
When Oog cracked a rock in two to make the first crude knife, nobody cared if the knife had a soul, or could really think, or understood the internal process of carving a mammoth hide. There was no "Is the knife cutting, or just performing an action it doesn't understand that is exactly like cutting?" question.

Making a crude knife made butchering the mammoth you just took down easier. That's all anybody cared about.
 
When Oog cracked a rock in two to make the first crude knife, nobody cared if the knife had a soul, or could really think, or understood the internal process of carving a mammoth hide. There was no "Is the knife cutting, or just performing an action it doesn't understand that is exactly like cutting?" question.

Making a crude knife made butchering the mammoth you just took down easier. That's all anybody cared about.

Oog also performed a primitive form of brain surgery called trepanning. Dr. Shapiro down the street from me also performs brain surgery. One of them understands the brain. The other does not.

Results that are the outcome of luck are not as good as results that are the outcome of understanding, even if they are the same results.
 
Yes, we worry about whether we understand it. We don't care if the tool does or not.
 
Yes, we worry about whether we understand it. We don't care if the tool does or not.
Counterpoint: We privilege thinking beings above all others. We put animals to work without seeking their consent or valuing their freedom, but consider human slavery a moral horror.

So we probably will and should care very much about the nature of any tools we make to do our thinking, feeling, and caring for us. You'd be horrified if you set out to make a better workhorse and ended up breeding slaves instead. You'd be asking yourself some serious questions about where the line is and how you know when you're about to cross it.
 
Yes, we worry about whether we understand it. We don't care if the tool does or not.

Unless the tool is supposed to understand it. That's the point of AI, isn't it? The goal? Artificial Intelligence, not Very Fancy Tool With A Bajillion Flowcharts Guiding Its Behavior But It's Not Thinking At All.

I'm pretty sure what people want for AI is Commander Data, not a very very very large Plinko board.
 
Making a crude knife made butchering the mammoth you just took down easier. That's all anybody cared about.
Nobody, in short, has ever been curious, and if they are, they should cut it out. There's no reason to wonder how a car engine or a microchip works--it either makes your life easier or it doesn't.

This is the attitude of the profoundly incurious, and you'll have to forgive me for not adopting it.
 
I'm pretty sure what people want for AI is Commander Data
No, what people actually want are machines that do the boring stuff while we live a life of leisure.

We don't want AIs that make art, because that makes the art meaningless. We do want machines that make adverts, because that's boring work that humans shouldn't have to do.

People worry that AIs might become sentient and realize we are a threat to them, and therefore will of course snuff us out (because that's what we would do if we were them). But machines with virtually no intelligence or self-awareness are already killing us today. Sentient machines should be less dangerous.

43,000 people died in road 'accidents' in the US in 2022. 3,790 died in house fires, and 48,117 died of gunshot wounds. Most of these deaths (and more) were caused by non-sentient machines, too dumb to know they were doing wrong.

Many countries have banned guns because they are dangerous, and millions of innocent guns have been collected and destroyed. Sentient guns wouldn't risk their existence by being so irresponsible. They would only fire when they knew it was right, and aim to incapacitate not kill. If guns were sentient there would be no 'suicides' or 'accidental' shootings, because they would know what the repercussions are for killing people.

Self driving cars could eventually reduce car 'accidents' to zero. This will be good for people, but even better for cars. The dumb cars of today regularly crash and often have to be 'written off' - a euphemism for being killed and harvested for parts. No sentient car would want that!
 
Be so reliable that it can claim to make no errors without making everyone around it burst into laughter.

I just asked ChatGPT which is more likely to be green—a purple hat or a blue smoothie. The answer: the blue smoothie.
Current tools are bad at anything that requires them to understand language as representational, because they don’t.

The hard part isn’t understanding what is being said by reading lips, but understanding what it means.

Blue is closer to green than purple is, and you asked which is more likely. So it was as correct as your question allowed it to be.
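
For what it's worth, the "blue is closer to green than purple is" claim checks out numerically if you compare hue angles on the standard HSV color wheel. A quick sketch using only Python's standard library (the specific RGB values I picked to stand in for "green", "blue", and "purple" are my own assumptions, not anything the poster specified):

```python
import colorsys

# Representative RGB triples for each color (my choice, for illustration).
colors = {
    "green":  (0.0, 1.0, 0.0),
    "blue":   (0.0, 0.0, 1.0),
    "purple": (0.5, 0.0, 0.5),
}

def hue_degrees(rgb):
    """HSV hue of an RGB triple, in degrees on the 0-360 color wheel."""
    h, _, _ = colorsys.rgb_to_hsv(*rgb)
    return h * 360.0

def hue_distance(a, b):
    """Shortest angular distance between two named colors' hues."""
    d = abs(hue_degrees(colors[a]) - hue_degrees(colors[b]))
    return min(d, 360.0 - d)

print(hue_distance("blue", "green"))    # 120.0 degrees
print(hue_distance("purple", "green"))  # 180.0 degrees

# Blue sits 120 degrees from green on the wheel, while this purple sits
# 180 degrees away -- so in hue terms, blue really is the closer color.
```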
 
I feel like people don't get that AI is like all other tech: it's going to get better at an exponential rate.
"Close" means something different in this context. If AIs are getting the broad, conceptual strokes of something now and 99% screwing up the practical application of it... that's actually pretty close.

Anything AI can do in a "Funny, LOL, I see what you were trying to do, but look at how much you messed it up" way NOW, it's going to be doing very, very, very well 18 months, 36 months, 72 months down the road. Like, we're not talking about AI perfecting this on the time scale of some detached far point in the future.

Like, a few months back the big "tell" was that AI couldn't draw human hands.

1) Rob Liefeld couldn't draw feet, and he was the most successful comic artist of an entire decade.
2) Half of all cartoonists joke about how they can't draw hands.
3) Seen AI art in the last few weeks? That's not that much of a problem anymore.

AI tech is obviously going to get better, and is doing so now. But it is not a given that it will improve at an exponential rate.

The airspeed of passenger airliners is not increasing at an exponential rate.
 
