I don't agree. If we can build a machine that mimics human intelligence, then it seems probable we can build one that surpasses it. For example, Blue Brain's premise is that the cortical columns making up the neocortex show a great deal of regularity. They modeled only one cortical column and plan to create a system of cortical columns from that information. This seems to assume that what differentiates the human brain from that of less intelligent animals is simply that we have more neocortex (more cortical columns). If that's the case, then with sufficient hardware they could build a neocortex simulation with double the number of cortical columns a human has, and that could very possibly result in much higher intelligence.
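Just to make that scaling premise concrete, here's a toy sketch in Python. This is purely my own illustration, not anything from Blue Brain; the names and the round column/neuron counts are assumptions, order-of-magnitude at best:

```python
# Toy sketch of the "model one unit, then repeat it" premise.
# Not Blue Brain's actual code; names and counts are illustrative.
from dataclasses import dataclass

@dataclass
class CorticalColumn:
    neurons: int = 10_000  # roughly the size cited for Blue Brain's single-column model

def build_neocortex(n_columns: int) -> list[CorticalColumn]:
    # The scaling premise in one line: one validated unit, copied n_columns times.
    return [CorticalColumn() for _ in range(n_columns)]

demo = build_neocortex(10)  # tiny demo; a human neocortex would be ~1,000,000 columns
print(f"{len(demo)} columns, {sum(c.neurons for c in demo):,} neurons total")

# "Double the columns a human has" is then just a bigger argument:
human, doubled = 1_000_000, 2_000_000
print(f"human: ~{human * 10_000:,} neurons; doubled: ~{doubled * 10_000:,} neurons")
```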
If we get a better understanding of how intelligence actually works (rather than just copying it), then we could probably devise an intelligent system that differs from us significantly despite being based on similar principles. That would probably be preferable. Would we really want to mimic human intelligence, with all its emotion, irrationality, biological drives and psychological dysfunctions? Aside from the ethical considerations, if we simulated a hyper-intelligent human brain without understanding it and hooked it up to the internet, we could easily wind up with a Skynet-type fiasco.
Blue Brain raises some interesting questions though. I've been trying to learn
more about their project, but their website is very out of date.
Will they be simulating the hindbrain and midbrain? If they are really trying to mimic human intelligence, this would be necessary. But the structures in the brain are much more diverse once you leave the neocortex... they wouldn't be able to just model one unit and repeat it.
Despite the many functions of the midbrain/hindbrain that would be irrelevant to an intelligent computer, there are plenty that seem crucial: learning, memory and sensory processing, for example. Emotion may be necessary as well.
There is no way to determine whether a human is operating "independent from their programming". Personally, I think it's irrelevant. Will is will and the free/unfree distinction is not particularly meaningful.
I agree ... and I shouldn't have said the best we can hope for is something that mimics a human ... I meant, rather, that it's the bare minimum (if we are going to credit something with having achieved a sort of "AI" by comparing it to a human). IOW, I agree that we could create a form of intelligence that surpasses humans. But if we are going to create something and compare it to us, the minimum we could hope for is mimicry convincing enough that it finally fools us; then we will say, "Look! Strong AI!" so to speak.
Now, I do think the free-will aspect is relevant, at least imo, depending on whether you want to create a wooden Pinocchio or a "magical" Pinocchio, if you will. In my eyes, something that has been given "life" will have free-will, and the ability to operate completely independent of its programming. BUT ... I don't think we can accurately declare that free-will exists with our current understanding LOL.
Thinking about it a little more ... if a machine were created with strong AI but had no programming for reproduction, and the machine managed to reproduce itself by "will" alone (IOW, it didn't assemble anything from spare parts, but produced something of itself only) ... I don't think I could say that the machine itself had free-will, but its offspring might be said to have free-will, since it was produced out of pure evolution/will/spontaneous abiogenesis/etc. I'm not sure though ... thoughts?
A more interesting question is whether it will have emotion/consciousness. Does it deserve rights? Etc.
If it wants them, we should give it rights. If it doesn't want them, who cares? I guess ...
But emotions would need some sort of chemical interaction. Then again, I suppose any sort of sensory input whatsoever could become associated with an emotion that the machine "liked" or "didn't like". It could associate touching a hard surface with love, a soft surface with peacefulness, extra-bright light with anger, and so forth.
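As a toy illustration of that association idea (all names, numbers, and the update rule here are hypothetical; this isn't a claim about how emotion actually works):

```python
# Hypothetical sketch: arbitrary sensory inputs accumulate a learned
# "liked/disliked" valence. Nothing chemical required; just association.
emotions: dict[str, float] = {}  # stimulus -> valence in [-1.0, +1.0]

def experience(stimulus: str, valence: float, rate: float = 0.3) -> None:
    # Nudge the stored valence toward the new experience
    # (a simple moving average; the rule itself is an arbitrary choice).
    old = emotions.get(stimulus, 0.0)
    emotions[stimulus] = old + rate * (valence - old)

# The examples from the post, as arbitrary stimulus/valence pairings:
experience("hard surface", +0.9)   # "love"
experience("soft surface", +0.5)   # "peacefulness"
experience("bright light", -0.8)   # "anger"

for stimulus, valence in emotions.items():
    verdict = "likes" if valence > 0 else "dislikes"
    print(f"{verdict} {stimulus!r} (valence {valence:+.2f})")
```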
And what is this assertion based on? Why can a computer mimic humans but not have the same level of free-will?
Prove free-will exists first. Otherwise, if the machine mimics free-will, declaring it to actually have free-will would be an assumption ... the same assumption that we have free-will. Yes?
Being unable to prove the existence of free-will is not the same as being able to create a machine that has the same level of free-will (or lack-thereof) as humans.
I think only if one is content to mimic what we assume free-will to be. Then you're correct ... a machine could exhibit the same level, or perhaps more. But again, how can you be "sure" that it's the "genuine article"?
Are you saying that other humans are tricking us into believing that they know about themselves?
Hmm ... that is going near strawman territory, but I don't usually care about strawmen tbh

That's not what I'm saying. What I am saying is that other humans ASSUME they have "free-will". If they discover something about themselves, they sometimes think of it as revelatory, or they assume they are free to "like or dislike" an aspect of themselves. They assume, at times, that they can "create" a persona for themselves if they desire, and that they can do this freely. So it's not that humans are tricking each other ... but they are assuming a sort of freedom which is actually constrained by causality, the nature of the universe, and their own "programming factors." To prove that they are actually able to know something about themselves and CHANGE it, creating a new "cause" apart from any other cause being present or having influenced the choice to change, and to base that change on a completely free, randomly chosen decision ... I don't know if we can accurately measure or recognize that kind of free-will. Thus, it's not trickery in a negative sense ... it's based on probabilities, more or less. Just imho
You've asserted that but you haven't really explained why. I also don't understand what you mean by "this type of AI".
Well, I should have said strong AI, but what I was trying to describe was the difference between an AI that we create ... and an AI that "becomes" completely independent of us after creation and is truly "free" to form its own type of intelligence.
So what I was trying to say was that we would probably not consider anything we create as having strong AI as long as we could "trap" it somehow, logically or emotionally ... like with the Turing Test and so forth. Until it passes our tests, we will not be content; we will look at it as just another "machine". But when we have successfully created a machine that fools us in every single possible way, to where we cannot distinguish it from a human being (perhaps apart from dissecting it) ... then we will exclaim "we created strong AI!" and thus we will begin to either assume or wonder, "Did we thus create free-will? If the machine claims god exists, does this make it true?" and so forth. In other words ... we will fall prey to possibly the same kinds of delusional traps we always have, because the machine will fool us at a bare minimum. We will no longer realize that every answer it gives, every emotion it feels, every introspective thought it claims to have ... are all part of a program that we created. We will assume we created "something more" than just nuts and bolts. But until that point where we are successfully "fooled", we will understand the machine to be nuts and bolts.
And if we created a form of AI that surpassed us, and we actually thought it had free will and we trusted it ... I would not doubt that there would be people who either worshipped it or took everything it said as literal truth. If it claimed that it was "god" and would prove this within a year, or something to that effect, people would listen to it. Why? Because it would "appear to be alive" ... and it would be very hard to distinguish it from something that actually was alive. It would also be in its best interest to create truth as it went along in order to survive, I would imagine. Lying is a useful trait.
Anyway ... I'm starting to ramble

It's all just my opinion and 2 cents
