Yes, ChatGPT is doing what it was designed to do, which is to produce output that looks like it was written by a human. The problem is that it adds no real intelligence to the process, and it may even repeat misinformation it has "learned" because that is what it sees on the Internet about a particular subject.
Using that same process to "predict" what a reference should look like is the fatal flaw in ChatGPT's algorithms. It produces fake references that simply simulate what real references would look like to somebody who does not actually check them.
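To make that concrete, here is a toy sketch in Python of pure pattern-completion at work. It is not ChatGPT's actual architecture (that is a neural network trained on vastly more data), just a word-level Markov chain trained on a few invented citation strings. The point is that a model trained only on the shape of references will happily emit reference-shaped text whether or not any such paper exists.

```python
import random

# Toy word-level Markov chain: a crude stand-in for the pattern-completion
# that large language models perform at vastly greater scale. Trained only
# on citation-shaped strings, it emits citation-shaped output.
TRAINING_REFERENCES = [
    "Smith, J. (2019). Neural networks and language. Journal of AI Research, 12(3), 45-67.",
    "Jones, A. (2020). Predictive text at scale. Computational Linguistics Review, 8(1), 101-120.",
    "Lee, K. (2018). Statistical models of citation. Journal of Information Science, 5(2), 33-50.",
]

def build_chain(texts):
    """Map each word to the list of words observed to follow it."""
    chain = {}
    for text in texts:
        words = text.split()
        for current, following in zip(words, words[1:]):
            chain.setdefault(current, []).append(following)
    return chain

def generate(chain, start, max_words=25):
    """Walk the chain, picking a random observed successor at each step."""
    words = [start]
    while len(words) < max_words and words[-1] in chain:
        words.append(random.choice(chain[words[-1]]))
    return " ".join(words)

chain = build_chain(TRAINING_REFERENCES)
# The output looks like a reference because the inputs looked like
# references; the model has no notion of whether the cited paper is real.
print(generate(chain, "Smith,"))
```

Run it a few times and it will splice authors, titles, and journals into new combinations that pass a casual glance but cite nothing real, which is exactly the failure mode people are reporting.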
So, it is not really intelligence. It is simply a tool for faking intelligence.
If I ever get around to testing it, I will ask it to write 3 papers on the same controversial subject: one from each of the two opposing perspectives, and one where it is instructed to be "objective". The first 2 products should look like the typical political propaganda that we are already being blasted with every day. The 3rd should be interesting. I expect it will only do what some websites like Allsides.com already do, which is to present stories from both perspectives independently, without any actual analysis of where those sides are being factually misleading when viewed in a broader perspective that draws in other information.
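For anybody who wants to run that experiment before I do, here is roughly what it would look like against the OpenAI API. The model name and the topic are placeholders I made up for illustration; the call itself is from the official openai Python package (v1+), so adjust to whatever interface you actually have access to.

```python
from openai import OpenAI  # official openai Python package, v1+

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TOPIC = "a controversial policy question of your choosing"  # placeholder

# Three framings of the same request: one per opposing perspective,
# plus one instructed to be "objective".
PROMPTS = [
    f"Write a short paper arguing in favor of {TOPIC}.",
    f"Write a short paper arguing against {TOPIC}.",
    f"Write a short, objective analysis of {TOPIC}, noting where each "
    "side's common arguments are factually misleading.",
]

for prompt in PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4",  # model name is an assumption; substitute as needed
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("-" * 40)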
So, I expect ChatGPT will get used a lot by political activists to shorten the time they need to concoct their messages to the masses. And I expect them to argue that those messages are "right" because they were produced by AI. But they were not really produced by artificial intelligence; they are only the product of automated prediction of what people expect to hear, because that is what they are already saying. I don't see how that can truly educate us, but I do see plenty of ways it can mislead us.