"Artificial intelligence" is really not critical thinking intelligence, at least at this point in its development. It is just a mimic algorithm that is "trained" by exposing it to a lot of information, from which it "learns" to see patterns in great detail. It can then mimic what it has learned when exposed to new information, and maybe learn from that too, automatically instead of by being told to "learn" again.
So it is not so much "intelligent" as it is a "good learner". The problem is that it is learning from us, and from what we already basically understand, even if we are missing some of the details it can pick out and use to reach conclusions we would overlook. When it "chats", it is just mimicking how people interact, even if it is running the equivalent of Google searches to find information to feed back to us. Considering how much junk is on the Internet, I have to wonder whether activist trolls could radicalize an AI bot. We have a hard enough time teaching ethics and empathy to real humans. Imagine the results if a psychopath trained an AI!