At this point in its development, it is obvious that ChatGPT has too few rules, not too many.
Yes, it is "flexible". But it is already being used to fake things like nude pictures of young women, which are then used to blackmail them. This has occurred in multiple countries and has already led to suicides in India.
We would not accept an AI chess program that made illegal moves to win games. So why should we accept more general AI programs that break other rules, laws and ethics?
Isaac Asimov dealt with the ethics of "robots" in fictional works long ago.
https://webhome.auburn.edu/~vestmon/robotics.html
The problem is that Asimov was thinking only of physical injury. Today, ChatGPT can be used to fake things that lead, indirectly, to loss of money, loss of freedom, or even loss of life. Allowing that to go unfettered is worse than passing out nuclear weapons to every teenager on the planet; the damage potential is immense. We could literally destroy society by faking any knowledge accessible to computers.
For example, the fake references it invents, citations to sources that don't really exist, could themselves be fabricated and stored in on-line libraries for everybody to access. Surveillance videos could be faked for presentation in courts of law. Even good-quality videos can be faked to make anybody seem to have done anything.
Without the need to follow some rules of behavior, "AI" is not really artificial intelligence; it is just a tool for creating artificial evidence of fake "realities".