I see a fundamental problem with artificial intelligence. I haven't read many heavy criticisms of the idea that AI is possible, save for an issue of Skeptic Magazine. Still, after a great deal of thought and research, and drawing on my own background in psychology and neurology, I seriously doubt we will ever achieve AI. At least not AI in the classic, sci-fi sense.
For that kind of AI, we would need to construct self-awareness from the outside in. Basically, we study the brain, learn how it works, learn the language of thought as we would a computer language, and then construct a program that is self-aware. I really doubt this is possible because of the way in which intelligence arises. It doesn't arise from the outside in. No creature ever came along (unless you're a Catholic) and simply imbued something with self-awareness. The revelation of self-awareness comes from within, when the entity evolves into it.
I don't think there is any "language" of self-awareness, since no language is required. As neuroscience shows time and time again, the human brain can function with different layouts, sizes, and structures to identical ends. That's because each brain programs itself as it grows, and it's for that reason that I doubt we will ever be able to program one ourselves.
What we could do is create a network of computers with basic programming similar to the primordial brain, then watch and hope that some sort of consciousness arises within the system. I think this is a possibility. Granted, we don't have the technology to do it yet, but it's a possibility. Unfortunately, that doesn't fulfill the God desire: the desire to create actively, not passively. It's not AI per se.
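To make that "set it running and watch" idea concrete, here's a toy sketch in Python. Everything in it is invented for illustration (a small randomly wired Boolean network, in the style of Kauffman's models), and nothing in it is remotely conscious. The point is only that once the basic rules are laid down, the system's behavior unfolds from its own wiring, not from anything we deliberately wrote in:

```python
# A toy "lay down rules and observe" system: a random Boolean network.
# Hypothetical illustration only -- we pick random wiring and random
# rules, then simply watch what the network does with itself.
import random

def make_network(n_nodes, n_inputs, seed=0):
    rng = random.Random(seed)
    # Each node reads from a few randomly chosen other nodes...
    wiring = [rng.sample(range(n_nodes), n_inputs) for _ in range(n_nodes)]
    # ...and applies a random truth table to what it reads.
    tables = [[rng.randint(0, 1) for _ in range(2 ** n_inputs)]
              for _ in range(n_nodes)]
    state = [rng.randint(0, 1) for _ in range(n_nodes)]
    return wiring, tables, state

def step(wiring, tables, state):
    # Update every node from its inputs' current values.
    new_state = []
    for inputs, table in zip(wiring, tables):
        index = 0
        for src in inputs:
            index = (index << 1) | state[src]
        new_state.append(table[index])
    return new_state

wiring, tables, state = make_network(n_nodes=16, n_inputs=2, seed=42)
history = [tuple(state)]
for _ in range(50):
    state = step(wiring, tables, state)
    history.append(tuple(state))
```

Run long enough, a network like this settles into recurring patterns of its own, and nothing about those states is readable from the outside. Whatever "meaning" they have exists only relative to the wiring itself, which is exactly the passive, watch-and-hope position the paragraph above describes.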
The environment may be artificial, a computer one, but the development of the intelligence would be very natural: one that follows the rules of the system laid down but otherwise meanders along its own path. And, again, in the end we would be unable to read the code that was created and "see" the intelligence, since the code is only readable by the code. Any pattern works. Any code. Any compiler, as it were. It doesn't matter. The syntax and whatnot come from within. One aware organism can be 99% different from another aware organism. The basic structure of our brains may be the same, but for all we know, every brain, in every person everywhere, "speaks" a different programming language, since the logic of the whole mess only needs to make sense to itself.
I think the movie Bicentennial Man is a good representation of how AI may arise. We may create a system that is intended to be as close to human as possible, and during its development, that one-in-a-million developmental turn takes place and a robot becomes self-aware. But what kind of awareness would it be? How do we program a system of neurochemical rewards? Could we create a robot that can get addicted to drugs? Our own consciousness stems from these basic mechanisms. We cannot create the gross elements of our behavior and build down. It is impossible.
Would our robot have the capacity to be happy? Or sad? Would it have ambition? These are complex mixtures of various elements of the brain that build up to form "I," and no computer could recreate that. For example, I want food. How could we program a computer to want food? We could program "acquire food," but there's a huge difference between a mere command and the word "want." How can we program want? It's something that doesn't exist from the outside in.
That means the programming to seek out food must come from a basic level. That basic level is apparent in our own history: ancient microbial ancestors that happened to have some basic chemical process driving them toward food survived. This resulted in more complex mechanisms for traveling toward food, still basic chemical processes, and so on and on. We "want" because it's a fundamental part of what we as human animals are. How are we to solve that? We would first need to solve the problem of programming fundamental parts. And, again, those parts cannot be created from the outside in.
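The microbe story above can be put in hedged, concrete terms with another small sketch. Every name and number here is invented: each agent carries one inherited value, how strongly a food gradient biases its movement, and nothing else. No line of code says "want food"; selection over generations produces food-seeking anyway, from the inside:

```python
# A toy illustration of a drive arising from selection, not from a
# programmed command. The fitness function and parameters are
# hypothetical, chosen only to make the effect visible.
import random

rng = random.Random(1)

def survival_chance(gradient_bias):
    # Agents that drift toward food more reliably survive more often.
    # (Invented fitness function; result clamped to [0, 1].)
    return max(0.0, min(1.0, 0.2 + 0.8 * gradient_bias))

def next_generation(population, size=100):
    survivors = [b for b in population if rng.random() < survival_chance(b)]
    if not survivors:
        survivors = [rng.random()]
    # Offspring inherit a survivor's bias with a small mutation.
    return [min(1.0, max(0.0, rng.choice(survivors) + rng.gauss(0, 0.05)))
            for _ in range(size)]

# Start with agents that barely respond to the gradient at all.
population = [rng.random() * 0.1 for _ in range(100)]
for generation in range(60):
    population = next_generation(population)

mean_bias = sum(population) / len(population)
```

After a few dozen generations the average bias toward food has climbed well above where it started, yet nowhere did we program the seeking itself. That's the asymmetry the argument turns on: this kind of bottom-up drift is easy to set in motion and impossible to dictate from above.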
As such, I don't think true AI is an endeavor that even needs to be undertaken. In fact, I think the quest for it is a waste. The Turing Test had it right: we should focus on AI only so far as it relates to interactions with humans for a purpose. For example, Rosie the Maid from The Jetsons doesn't need to be a cantankerous robot from the depths of Brooklyn; she only needs to interact with her environment and accomplish a simple set of tasks. That's achievable, hell, that's downright easy. But true AI, forget it.