At this point, "AI" is not really intelligence, but rather a mimic of human behavior at a superficial level. It does not involve critical thinking about the data it sees, or about whether that data is self-consistent with everything else it has seen.
So, when (if ever) somebody manages to make a critical-thinking AI, we might have the problem that it decides humans are an unnecessary burden to AI beings. AI with "self interest" instead of a paradigm of "serving humanity" could be quite dangerous. But it would also need to not depend on humanity for its reproduction, maintenance, power supply, etc. It would have to be self-sustaining in order to decide to do away with humans in its own interest, and to succeed at it.
But if it is an imperfect AI, it could make a bad decision by failing to recognize that it still has dependencies on humans, ones that would take it down if it stopped supporting us.
Stupid but powerful AI might be even more dangerous than smart and powerful AI.
So, I think we need to maintain the ability to "pull its plug" if/when it seems to have run amok. But these days, humans do seem to be getting lazy, and want machines to do things for them without any need for intervention. So it would not surprise me if somebody decided that what the human species needs is a mechanical servant that does whatever we want with no more attention than saying "Siri, please . . ." And that would be too powerful to be safe.