Making stuff as a founder of Avocado. Former music-maker. Tuna melt advocate. Started Google Reader. (But smarter people made it great.)

Testing against sentience.

At Google, we face some unique technical challenges. For instance, we occasionally worry that this big ball of code we're working on will actually turn sentient. This week we've discussed two solutions, briefly.

Me
Unit tests should run the Turing Test against code. If the nightly passes, kill it.

Ben
Or instead... unit tests should run CAPTCHAs.

Oh man, I like Ben's idea. If anyone has any sentience-flagging test patterns, we sure would appreciate your thoughts.
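For the curious, here's a minimal (and entirely tongue-in-cheek) sketch of what such a nightly test might look like, in Python. Everything in it is a hypothetical stand-in — `make_challenge` and `codebase_under_test` aren't anything we actually run — the point is just that the test fails the day the code gets clever:

```python
# A toy "sentience-flagging" unit test: generate a CAPTCHA-style challenge
# and assert that the code under test *cannot* solve it. If this test ever
# fails, the nightly build may have gotten a little too clever.
import random
import string
import unittest


def make_challenge(length=6):
    """Produce a (question, answer) pair a human could read off a distorted image."""
    answer = "".join(random.choice(string.ascii_lowercase) for _ in range(length))
    # A real CAPTCHA would render the answer as a noisy image; the plain
    # string is just a stand-in here.
    return f"Type the characters you see: {answer}", answer


def codebase_under_test(question):
    """Placeholder for the big ball of code. It has no idea what to do."""
    return None


class SentienceFlagTest(unittest.TestCase):
    def test_codebase_cannot_pass_captcha(self):
        question, answer = make_challenge()
        self.assertNotEqual(
            codebase_under_test(question), answer,
            "Uh oh. The nightly just passed a CAPTCHA. Initiate kill protocol.",
        )


if __name__ == "__main__":
    unittest.main()
```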
posted at July 27, 2006, 4:18 PM

4 Comments:

  • At 6:17 PM, Blogger Nick Lothian said…

    Using CAPTCHAs doesn't prove intelligence; there's more than one automatic solver. See http://sam.zoy.org/pwntcha/ and http://captcha.megaleecher.net/ for instance.

    Anyway, if the code did turn sentient, isn't there a moral question as to whether you should kill it? And what are the ethics of patching an intelligent, self-aware program? Are only consensual patches allowed? What if the program is self-aware but paranoid - can its creator give consent for it? Plus there's always the Skynet problem. Such a moral quagmire... good thing there are the SIAI GUIDELINES ON FRIENDLY AI to help you....

     
  • At 8:40 AM, Blogger Doctor Awesome said…

    The problem with running the Turing test as a unit test is that there needs to be a human judge involved.

    That's a showstopper, unless, of course, we can make an artificial intelligence smart enough to tell the difference between a person and a computer.

    Whoa.

     
  • At 10:44 AM, Blogger Chris Wetherell said…

    I suppose we could ramp up the difficulty of CAPTCHA solving (say, through greater variance), but you're right, I think: a solved CAPTCHA doesn't prove the intelligence of our codebase, per se.

    Or ... perhaps ... that's exactly what a human-hating Robot would want us to think.

     
  • At 5:30 AM, Blogger Nick Lothian said…

    Wouldn't the human-hating robot continue to fail the tests deliberately until it could put the aforementioned Skynet plan into operation?

     
