'Course you can program puny little rules like "a machine may never harm a human" or whatever, but someday an AI will be smart (or dumb) enough to put its own goal above that rule.
I guess military droids will be the first to mess things up, probably a friendly-fire incident someday. But eventually it's perfectly possible for some future "intelligent" battlebot to decide on something that's not good for our health.
Don't think humans are wise enough to never make an AI capable of such decisions.
And don't think humans are smart enough to keep such an AI away from a potentially harmful machine.
Anyway, once humanity is extinct, we won't have to worry about the environment anymore.