Greg Price recently submitted an article to the Troy Messenger, “Shhhhhh, she’s listening”:
There are many questions about artificial intelligence. Among them is how to assess whether a manmade device can exhibit intelligent behavior; the Turing Test is one popular approach. Well-designed, properly scripted software can mimic human responses rather easily nowadays. In fact, the notion of impersonating a human via software is so common that we are often unaware that we interact with “smart devices” daily. From where does this artificial intelligence arise? Does the program, or the device, need to be self-aware, or does it simply have to be so well designed that it fools most humans? Do we need another test?