
The Pleasure Machine and Utilitarian Ethics

Meed

This is a thought I came up with, but I'd be surprised if no one else has come up with it in the past. So I apologize if I'm passing off something old as something new.

Premises:
-Humans can be thought of as carbon-based machines (in that, with adequate resources and knowledge, a human could be built in the same sense that a machine is built).
-Experiences such as happiness, sadness, excitement, anxiety and relaxation are generated by configurations and patterns of activity within these machines.
-With adequate knowledge and resources a machine could be built to generate any of these emotional phenomena.

If you accept the premises, it should be technically possible (though obviously requiring much more knowledge and technology than we currently possess) to create what I call a "pleasure machine". A pleasure machine is designed to do nothing but experience intense pleasure. One might argue that if it becomes accustomed to pleasure for too long it will get bored or take the pleasure for granted. But boredom and taking things for granted are just brain states. We should be able to prevent such things with proper design. The machine doesn't need to have a long-term memory, in which case its pleasure could remain "novel" in a sense.

In utilitarian ethics, the idea is generally that good actions are those which maximize the overall well-being of conscious creatures. This becomes more interesting if we can actually create well-being, so to speak. Would the ultimate utilitarian goal be to create and maintain as many pleasure machines as possible? Obviously this depends on how one happens to scale and assign "utility" value to things. Maybe some rank eliminating negative utility much higher than creating positive utility. But that line is hard to draw.
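As a toy illustration of that weighting question (the numbers below are invented and the whole thing is only a sketch), here is how the weight you put on "suffering removed" relative to "pleasure created" can flip which action the calculus recommends:

# Toy sketch only -- invented utility numbers, not a real ethical calculus.
# "w" is how much more a given ethic weights removing existing suffering
# than creating brand-new pleasure (w = 1 is the classical "all utility
# counts equally" view; a large w leans negative-utilitarian).

actions = {
    # (units of new pleasure created, units of existing suffering removed)
    "build pleasure machines":    (30, 0),
    "relieve existing suffering": (0, 12),
}

def score(pleasure_created, suffering_removed, w):
    return pleasure_created + w * suffering_removed

for w in (1, 2, 5):
    best = max(actions, key=lambda name: score(*actions[name], w))
    print(f"w = {w}: recommended action -> {best}")

# w = 1 or 2: building pleasure machines scores higher (30 vs 12 or 24).
# w = 5: relieving suffering wins (60 vs 30). The ranking flips once
# negative utility is weighted heavily enough, which is exactly where
# the line gets hard to draw.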

I don't know that many people take a hard utilitarian stance toward ethics, but I also doubt that many people disregard utilitarian ideas completely. Kantian ethics seem to be a hard sell, while utilitarians like Peter Singer and Sam Harris (who claims not to be one, though his ideas are definitely utilitarian) are fairly popular.

My questions are:
A. Are pleasure machines possible?
B. If A is true, is creating pleasure machines valuable?
 
My questions are:
A. Are pleasure machines possible?
B. If A is true, is creating pleasure machines valuable?

A pleasure machine should be possible. We already have a pretty good idea of how to stimulate the brain so that you feel pleasure or angst. You can find a video on YouTube of Michael Shermer trying what they call the "God helmet".
The more interesting question is whether there is any value in a pleasure machine. Maybe it could be used to help people with severe anxiety or people who are paranoid, but I don't know that for sure; we'd need a psychologist to answer that. So I believe that there could be value in a pleasure machine.
 
My questions are:
A. Are pleasure machines possible?

Don't know.

B. If A is true, is creating pleasure machines valuable?

Depends. Are you turning existing beings into "pleasure machines", or are you creating new beings entirely? If the latter, probably not. If the former, then probably yes, with the proviso that the people you are transforming want to be transformed.

The last bit is important. Since utilitarianism states that one must attempt to create the most good for the most people, it's important to take into account the desires of the people that you might transform. Good, after all, is subjective, and the people you transform may not consider being turned into a "pleasure machine" good.

ETA: Oh, and I assume that by "valuable" you mean "good from the standpoint of utilitarian ethics".
 
Are you turning existing beings into "pleasure machines", or are you creating new beings entirely?

Creating new beings entirely. But modifying humans into pleasure machines is also an interesting idea.
 
Creating new beings entirely. But modifying humans into pleasure machines is also an interesting idea.

Ah. In that case, I don't think it's necessarily good. Not bad, either. Just... not good. You create someone with a perfect existence. So what? Net effect... nothing. You helped no one but a person that wouldn't have needed help in the first place if you hadn't created them.

I could be wrong, though, and I'm not set on this position, so I look forward to others' ideas on the subject.
 
We don't have the technology right now, but it sounds technically plausible. But so are a cure for cancer and flying cars. We toil on.

The value of the machine is complicated, and it's something that people do debate. Deirdre Barrett has published on the challenges of supernormal stimuli.

For example: if you value being productive and industrious, or if you value authentic experience over simulation, is it appropriate to hook you up to one of these, but program it to trick you into not knowing so you're still happy?

Aside: the hypothesis also assumes utilitarianism makes lots of sense, which just doesn't have wide popularity among ethicists in the first place. I personally think utilitarianism is a terrible ethical framework and therefore a poor starting point from which to build a practical ethic.
 
My questions are:
A. Are pleasure machines possible?

You've never shopped from Good Vibrations or the Adam & Eve catalog, have you?

B. If A is true, is creating pleasure machines valuable?

As long as there is the dry, barren, passionless institution known as marriage there will be a need for "pleasure machines."

...and prostitutes.

...and online porn.
 
We don't have the technology right now, but it sounds technically plausible.

Sure we do. Just drop a wire into the nucleus accumbens or the medial forebrain bundle. We've been able to do that (with rats) since the 1950s.
 
A number of sci-fi authors have fiddled with the idea; Larry Niven with his "wireheads", for example, and plenty of others.
In these scenarios, the machine is always addictive and debilitating.

There is also the ultimate "VR" experience, where one could live virtually in a simulation with such accuracy that it could not be discerned from actual reality. In these cases too, the VR reality becomes more attractive than real life.
 
I've thought about this scenario with the Star Trek holodeck concept. Seriously, if you could program your own world, why would you leave? That would possibly be better than any drug you could conceive of. We could all be living in the Matrix.
 
A number of sci-fi authors have fiddled with the idea; Larry Niven with his "wireheads", for example, and plenty of others.
In these scenarios, the machine is always addictive and debilitating.

There is also the ultimate "VR" experience, where one could live virtually in a simulation with such accuracy that it could not be discerned from actual reality. In these cases too, the VR reality becomes more attractive than real life.

I immediately thought of Niven. Aside from the "wireheads", there was a device called a "tasp", which would induce extreme pleasure in a subject at a distance. Nessus used it as an effective defense against the Kzin.
 
I've thought about this scenario with the Star Trek holodeck concept. Seriously, if you could program your own world, why would you leave? That would possibly be better than any drug you could conceive of. We could all be living in the Matrix.

In ST:TNG the entire crew had to share the holodeck. I assume you had to reserve it for personal use when it wasn't being used for mission-oriented purposes. If you had your own personal holodeck, you could stay in as long as you liked. Eventually you would face a mechanical breakdown or power failure. 2nd law of thermodynamics and all.
 
A number of sci-fi authors have fiddled with the idea; Larry Niven with his "wireheads", for example, and plenty of others.
In these scenarios, the machine is always addictive and debilitating.

That's not just "in these scenarios."

Niven's wireheads were based on real science. The rats starved themselves to collapse pressing the relevant buttons. That's a pretty good example of "addictive and debilitating."
 
Sure we do. Just drop a wire into the nucleus accumbens or the medial forebrain bundle. We've been able to do that (with rats) since the 1950s.

Like I said - it's plausible, but undeveloped. Maybe people will consider this adequate - maybe they won't. Research on humans is very limited, and seems restricted to people with abnormal psychology (addiction, depression), so it may not extrapolate to the general public.

Maybe the day will come, but I still call this at best a potential way to produce the effect the original poster describes.
 
In ST:TNG the entire crew had to share the holodeck. I assume you had to reserve it for personal use when it wasn't being used for mission-oriented purposes. If you had your own personal holodeck, you could stay in as long as you liked. Eventually you would face a mechanical breakdown or power failure. 2nd law of thermodynamics and all.


I got the impression there was more than one holodeck, and Holodeck #3 had a self-cleaning feature...
 
A number of sci-fi authors have fiddled with the idea; Larry Niven with his "wireheads", for example, and plenty of others.

Jack Williamson's Humanoids series.

A Utilitarian dystopia seen from the perspective of the people who resisted.
 
In ST:TNG the entire crew had to share the holodeck. I assume you had to reserve it for personal use when it wasn't being used for mission-oriented purposes. If you had your own personal holodeck, you could stay in as long as you liked. Eventually you would face a mechanical breakdown or power failure. 2nd law of thermodynamics and all.


Well, since they have a power source that can actually warp space, I'd think they could run whatever they wanted for a very long time, and if it broke, well, fix it and go back in on your merry way on the holodeck.

Personally I find the idea very seductive...... :D
 
That's not just "in these scenarios."

Niven's wireheads were based on real science. The rats starved themselves to collapse pressing the relevant buttons. That's a pretty good example of "addictive and debilitating."

If it could be better than sex, there would be no need for actual sex. Your SIM family would be perfect, so no need for reproduction. We would be gone after one generation.

That would leave the poor sods who couldn't afford the technology and would have to carry on from their 'primitive' condition.

Evolution works.

V.
 
Utilitarians really prefer "happiness" to "pleasure", but yes, I'd agree with the basic logic. Still, it may be that pleasure machines are self-defeating:

a) Some people for whatever reason (religious beliefs, a love of scientific inquiry, paranoia) will prefer real experiences
b) The happiness machine (HM), to make these people happy, will need to deceive them into thinking the HM is actually real
c) Knowing that the HM would do this, these people will be unhappy about HMs
d) Predictions of point c will encourage utilitarians to not design deceptive HMs
e) The people who will use HMs rather than reproducing will die off, leaving a population of people who don't want HMs
f) HMs stop being built because they aren't actually being used to make people happy
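
A back-of-the-envelope sketch of step (e), with invented fertility figures and the (strong) assumption that the preference for real experiences over HMs gets passed on to one's children:

# Toy model of step (e) only -- every parameter here is made up, and it
# assumes the HM preference is heritable (genetically or culturally),
# which the argument needs but doesn't guarantee.

hm_fraction = 0.6           # fraction of generation 0 that would plug into an HM
children_per_refuser = 2.2  # people who prefer real experiences keep having families
children_per_user = 0.3     # HM users mostly opt out of reproduction

for generation in range(6):
    print(f"generation {generation}: {hm_fraction:.1%} would use an HM")
    users, refusers = hm_fraction, 1.0 - hm_fraction
    next_users = users * children_per_user
    next_refusers = refusers * children_per_refuser
    hm_fraction = next_users / (next_users + next_refusers)

# The HM-preferring share collapses within a few generations (roughly
# 60% -> 17% -> 3% -> ...), which is step (e) turning into step (f).

If that heritability assumption fails - if the children of HM refusers are just as likely to want one - then step (e) does no work and the chain stalls there.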
 
Utilitarians really prefer "happiness" to "pleasure", but yes, I'd agree with the basic logic. Still, it may be that pleasure machines are self-defeating:

a) Some people for whatever reason (religious beliefs, a love of scientific inquiry, paranoia) will prefer real experiences
b) The happiness machine (HM), to make these people happy, will need to deceive them into thinking the HM is actually real
c) Knowing that the HM would do this, these people will be unhappy about HMs
d) Predictions of point c will encourage utilitarians to not design deceptive HMs
e) The people who will use HMs rather than reproducing will die off, leaving a population of people who don't want HMs
f) HMs stop being built because they aren't actually being used to make people happy

It's not clear that it is physically possible to be "unhappy about happiness machines."

The thing about neurochemistry and neuroanatomy is that it works the way it does no matter how you feel about it in the abstract. You might be a staunch teetotaller who hates the idea of getting drunk, but that doesn't mean that you won't get drunk if someone spikes the punchbowl without your knowledge. If I inject you with morphine, you will fall asleep irrespective of your wishes.

Similarly, if you're hooked up to a deceptive happiness machine, you WILL be happy about it.
 
