Testing against sentience.

Me
Unit tests should run the Turing Test against code. If the nightly passes, kill it.
Ben
Or instead... unit tests should run CAPTCHAs.
Oh man, I like Ben's idea. If anyone has any sentience-flagging test patterns, we sure would appreciate your thoughts.
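To seed the discussion, here's a toy sketch in Python of what such a test pattern might look like. Everything in it is a stand-in: generate_captcha fakes the distortion, and code_under_test is a hypothetical hook for whatever your nightly build would actually expose. The twist is that solving the CAPTCHA is the failure condition.

import random
import string
import unittest

def generate_captcha():
    # A real harness would render distorted text; spacing the
    # letters apart is a stand-in for distortion here.
    answer = "".join(random.choices(string.ascii_lowercase, k=6))
    return " ".join(answer), answer

def code_under_test(challenge):
    # Hypothetical hook: however your nightly hands the challenge
    # to the codebase being screened. Non-sentient code should
    # return garbage.
    return "i am just a program"

class SentienceFlag(unittest.TestCase):
    def test_nightly_fails_the_captcha(self):
        challenge, answer = generate_captcha()
        # Solving the CAPTCHA is the red flag: if this assertion
        # trips, the build just passed for human. Kill it.
        self.assertNotEqual(code_under_test(challenge), answer)

if __name__ == "__main__":
    unittest.main()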
4 Comments:
At 6:17 PM,
Nick Lothian said…
Using CAPTCHAs doesn't prove intelligence; there's more than one automated solver. See http://sam.zoy.org/pwntcha/ and http://captcha.megaleecher.net/ for instance.
Anyway, if the code did turn sentient, isn't there a moral question as to whether you should kill it? And what are the ethics of patching an intelligent, self-aware program? Are only consensual patches allowed? What if the program is self-aware but paranoid? Can its creator give consent for it? Plus there's always the Skynet problem. Such a moral quagmire... good thing there are the
SIAI Guidelines on Friendly AI to help you...
At 8:40 AM,
Doctor Awesome said…
The problem with running the Turing test as a unit test is that there needs to be a human judge involved.
That's a showstopper, unless, of course, we can make an artificial intelligence smart enough to tell the difference between a person and a computer.
Whoa.
At 10:44 AM,
Chris Wetherell said…
I suppose we could ramp up the difficulty of CAPTCHA solving (say, through greater variance), but you're right, I think: having one solved doesn't prove our codebase is intelligent, per se.
Or ... perhaps ... that's exactly what a human-hating Robot would want us to think.
At 5:30 AM,
Nick Lothian said…
Wouldn't the human-hating robot continue to fail the tests deliberately until it could put the aforementioned Skynet plan into operation?