I was debating something similar to this with a well known AI Programmer who programmed a bot named Mathetes.
My question was:
Can computers ever reach a point (AI Wise) where it would be immoral to turn them off or destroy their data on purpose?
Pistos (Mathetes' creator) replied:
Since they have no soul [and will never have one], no. It cannot be considered immoral.
A computer could potentially be programmed to enslave a person, by commanding them and, say, shocking them when they refuse, but, essentially, a human would have to set that up in the first place.
AI-wise, I don't think you could possibly attach a shocking device to a computer and teach it, through AI alone, to take command of a person. I just don't think AI will ever get to that point.
Sure, I have in fact seen some pretty impressive AI systems, but they can only "learn" on their own up to a certain point.
I think AI programming will progressively get better, and that we will develop new methods of allowing a computer to "learn" and even potentially program itself.
For some reason, this topic reminds me of the time that "other" website was copying our posts without permission, and we all started putting copyright notices in our signatures.