Artificial Intelligence
2004-10-05
Category: philosophies

How many times have you seen, in movies and such, an intelligent computer? Are you familiar with the phrase "artificial intelligence"? That's a term that gets thrown around regularly, and it leads to two big questions: what is it, and is it a good idea?
Both of those questions are complex. I'll try to give the simple answers here, starting with what artificial intelligence (AI) is. (Note: There is a difference between artificial intelligence and fake intelligence. Fake intelligence is when your annoying coworker quotes some "fact" he or she read on the internet and pretends to be smart because of it.)
When computer science people talk about AI, we are talking about making a machine operate the way an animal would to accomplish a task. That's kind of vague, but it can be made more specific depending on the task. For example, a system that identifies people by looking at them requires the same type of face recognition that you or I can do. Similarly, a robot, like those exploring Mars, needs to be able to navigate rough terrain the way a dog does. Those types of intelligence show complex decision making within their area of use. The face recognition system doesn't need to figure out how to design rocket ships, so we're not talking about that kind of intelligence.
Of course, there are people who would like to go so far as to give machines the kind of general intelligence that most people have. Think about the stupidest person you know. That person is far more intelligent than the most powerful computer in existence. How can that be? Simple: our current computer technology gives us machines that are just big calculators. That's all they are. They can't make complex decisions. They can't fend for themselves. They have no desires or wants. They just manipulate numbers in accordance with a series of instructions provided by the programmer.
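To make the "big calculator" point concrete, here is a toy sketch in Python. The temperature conversion is an arbitrary example of my own choosing; the point is only that a program is a fixed list of arithmetic steps the machine follows blindly, with no idea what any of it means:

```python
# A program is just a fixed sequence of instructions over numbers.
# The machine running this has no concept of "temperature"; it only
# performs the arithmetic the programmer spelled out, in order.

def fahrenheit_to_celsius(f: float) -> float:
    # Step by step: subtract, multiply, divide. Nothing more.
    return (f - 32) * 5 / 9

for temp in (32.0, 98.6, 212.0):
    print(f"{temp}F is {fahrenheit_to_celsius(temp):.1f}C")
```

No matter how many of these instructions you stack up, the machine never wants anything and never decides anything it wasn't told to decide.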
At this point, I'd like to switch to the second question. Should we make computers that are intelligent? Again, this is a complex question. To answer it, we need to think about why we make computers in the first place.
Humans are tool builders. We thrive in this world because we make tools to help us get our work done. We make saws to cut wood. We make spears to hunt our food. We make video games to distract us from the fact that life doesn't require as much effort anymore. We build sex toys to keep losers from trying to socialize. We make tools to help us build other tools. We build computers to do the complex calculations we need to accomplish other tasks.
If we are making computers only as tools to accomplish our work, we can use that to make decisions about AI. Just ask, "Does this AI help us accomplish something?" If the AI does something for us, then it is probably acceptable to use it, provided there are no other negatives. Where does this leave us on the question of creating a sentient computer?
The idea of the sentient computer is that there would be a computer with the same level of intelligence and self-awareness that the average human has. I wonder what that accomplishes. Can we not assume that a sentient computer will have goals of its own? That means it will have its own work to do, and that work may conflict with the work we need done. As we know from our long history with slavery, forcing a sentient being into servitude will result in conflict.
We can learn a great deal from our research into AI. If we understand how intelligence works at every level, we can help people better themselves. Some machines will work better by borrowing from animal behavior. I just don't think we should try to create something that will someday become aware of the fact that we're all jerks.