Richard Masters
Ever since I watched movies like Terminator 2 when I was younger I've been obsessed with Artificial Intelligence. Having a computer solve your problems - examine your DNA and give you genetically-tailored medical advice, solve any problem that requires strategy or thought, generate storylines that appeal to you and turn them into realistic 3D-animated movies or video games - is incredibly appealing.
Sophisticated matrix manipulation won the Netflix Prize, and Singular Value Decomposition has applications beyond choosing movies from sparse data, such as compressed sensing; neural networks can play an impressive game of 20Q; computer chess has reached superhuman play; evolutionary algorithms have produced optimized antennas for NASA.
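To make the SVD-on-sparse-ratings idea concrete, here's a minimal sketch (not the actual Netflix Prize solution, which used far more elaborate factorization and blending): take a toy user-by-movie ratings matrix, compute its SVD with NumPy, and keep only the top-k singular values. The resulting low-rank approximation fills in the unrated entries with plausible predictions. The matrix values here are made up for illustration.

```python
import numpy as np

# Toy user-by-movie ratings matrix (0 = unrated); a stand-in for the
# kind of sparse data the Netflix Prize involved.
R = np.array([
    [5.0, 4.0, 0.0, 1.0],
    [4.0, 5.0, 1.0, 0.0],
    [0.0, 1.0, 5.0, 4.0],
    [1.0, 0.0, 4.0, 5.0],
])

# Full SVD, then truncate to rank k: the best rank-k approximation
# (in the least-squares sense) replaces the zeros with predictions.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

print(np.round(R_hat, 2))
```

Real systems factorize only the observed entries (plain SVD treats the zeros as actual ratings of zero), but the low-rank intuition is the same.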
But what will it take for a computer program to formulate its own questions and feel compelled to answer them? What will it take for a computer program to recognize its own existence and interact with humans and other sentient beings?
Someone working on the Blue Brain project claimed we'd have human-level intelligence by 2019: "It is not impossible to build a human brain and we can do it in 10 years".
Given powerful enough hardware, I agree. From a materialist point of view, if we can simulate the brain down to the neuronal level, then we can build a software-based human brain.
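For a taste of what "simulate the brain at the neuronal level" means in practice, here's a minimal sketch of a single leaky integrate-and-fire neuron, one of the simplest neuron models (projects like Blue Brain use far more detailed compartmental models; the constants below are conventional textbook values, not theirs):

```python
# Leaky integrate-and-fire neuron under constant input current,
# stepped forward with simple Euler integration.
dt = 0.1          # ms, integration time step
tau = 10.0        # ms, membrane time constant
v_rest = -65.0    # mV, resting potential
v_thresh = -50.0  # mV, spike threshold
v_reset = -65.0   # mV, potential after a spike
R_m = 10.0        # MOhm, membrane resistance
I = 2.0           # nA, constant input current

v = v_rest
spikes = 0
for _ in range(int(500 / dt)):  # simulate 500 ms
    # Membrane potential decays toward rest, driven up by the input.
    v += (-(v - v_rest) + R_m * I) / tau * dt
    if v >= v_thresh:
        spikes += 1
        v = v_reset

print(f"{spikes} spikes in 500 ms")
```

A whole-brain simulation is, very roughly, billions of units like this wired together with synapses and plasticity rules, which is why the hardware requirement dominates the argument.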
But what if we want to build strong AI some other way? What minimal organizational structure is required for an algorithm, parallel or otherwise, to experience consciousness?
Can a set of machine learning algorithms with a central "director" experience consciousness? If natural selection is responsible for our brains, then surely there are optimizations to be made and better ways to create an intelligent being.
What do you think it will take to create strong AI?