ChatGPT on Mars: How AI can help scientists study the Red Planet

There was a story on CBS "60 Minutes" in which a reporter asked an AI program to write a technical paper on a specified subject. It did, complete with references. But when the reporter looked up those references, he found that the AI program had made them up - they did not exist. So, AI will lie. The algorithms in that AI were designed to mimic what people say and do, rather than to find information and apply objective logic to pattern recognition.

So, not all AI is really intelligence in the sense of recognizing things that humans have not already recognized. Some of it, at least, is merely a con job.

So, whoever contracts an AI provider to do a task needs to be able to review the product to make sure they are not being conned by an unscrupulous contractor.
 
Here is the deal on references. ChatGPT is "predictive": it will give you the most likely word to appear next in whatever it is writing about. For example, if you ask ChatGPT for a short explanation of Special Relativity along with a reference, it will give you a well-written, accurate, two-paragraph summary. It will be an average of all that is out there.
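That "most likely next word" idea can be sketched with a toy bigram model. This is an illustration of the general principle only - the corpus and the model here are invented, and ChatGPT's actual model is vastly larger and more sophisticated:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "all that is out there" on a subject.
corpus = ("the speed of light is constant in every frame "
          "the speed of light is the same for all observers").split()

# Count which word follows which (a simple bigram model).
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict(word):
    """Return the most frequent next word seen in the corpus."""
    return following[word].most_common(1)[0][0]

# Generate text by always choosing the most likely continuation.
word, out = "the", ["the"]
for _ in range(5):
    word = predict(word)
    out.append(word)
print(" ".join(out))  # fluent-looking, but it only echoes the corpus
```

The output reads smoothly because each word really did follow the previous one somewhere in the training text - fluency without any check against facts.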

For a reference, it will give the most likely one, made from scratch, not the most common one of those actually out there. Here is an example with made-up items:
Einstein, A. "Special Relativity and the Speed of Light", Physics Abstract Letters, 1905, p1-20

The author is obvious.
The title is a likely one.
"Physics Abstract Letters" - because that is where he published most often.
1905 - because that was the year this came out.
20 pages - because that is roughly how long his papers were.
1-20 - because they always put him at the front of the journal.
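That field-by-field assembly can be sketched as a toy: pick the individually most likely value for each citation field, with no check that the combination exists. All the candidate values and probabilities below are invented for illustration, not drawn from any real model or bibliography:

```python
# Invented "most likely value per field" tables, for illustration only.
field_likelihoods = {
    "author":  {"Einstein, A.": 0.9, "Lorentz, H.": 0.1},
    "journal": {"Physics Abstract Letters": 0.6, "Annalen der Physik": 0.4},
    "year":    {"1905": 0.8, "1916": 0.2},
    "pages":   {"1-20": 0.7, "35-65": 0.3},
}

def most_likely_reference(fields):
    """Pick the highest-probability value for each field, independently."""
    return {name: max(options, key=options.get)
            for name, options in fields.items()}

ref = most_likely_reference(field_likelihoods)
# Every field is individually plausible, but nothing ever verifies that
# this combination names a real paper - the failure described above.
print(ref)
```

The result looks like a citation to anyone who does not check it, which is exactly the trap.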

ChatGPT is doing exactly what it was designed to do. They have "oversold" it, in my opinion. What is especially alarming is that it cannot tell true from false. Ask it about UFOs and it will probably come down on the side of the "believers", since they are very vocal. Disbelievers have only images of clear sky, which doesn't get nearly as much press as fuzzy discs.
 
Yes, ChatGPT is doing what it was designed to do, which is to make its output look like something produced by a human. The problem is, it is not really adding any intelligence to the process it uses, and may even be using misinformation it has "learned" because that is what it sees on the Internet about a particular subject.

Using the same process to "predict" what a reference should look like is the fatal flaw in the ChatGPT algorithms. It is producing fake references simply to simulate what real references would look like to somebody who does not actually check.

So, it is not really intelligence. It is simply a tool for faking intelligence.

If I ever get around to testing it, I will ask it to provide three papers on the same controversial subject: one written from each of the opposing perspectives, and one where it is instructed to be "objective". The first two products should look like the typical political propaganda that we are already being blasted with every day. The third should be interesting. I expect it will only do what some websites like Allsides.com already do, which is to provide stories independently from both perspectives, without any actual analysis of where those sides are being factually misleading when viewed in a broad perspective that draws in other information.

So, I expect ChatGPT will get used a lot by political activists to shorten the time they need to concoct their messages to the masses. And I expect them to try to argue that those messages are "right" because they were produced by AI. But they were not really produced by artificial intelligence; they are only the product of automated prediction of what people expect to hear, because that is what they are already saying. I don't see where that has any use for truly educating us, but I do see that it has a lot of uses for misleading us.
 
There must be good uses for such an algorithm. I just saw a poem written in words starting with each letter of the alphabet, in order. Not a particularly useful thing, but impressive nonetheless. What other things are out there?
 
The real scientific uses of AI are in learning to recognize complicated relationships in large amounts of complex data. It is still a limited trait, but AI can deal with larger amounts of data and more complex interactions within that data than humans can usually muster the time and attention to notice. Not to mention that it can do it faster, once humans have provided the "training" on what to look for.
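That "humans label training examples, the algorithm then recognizes the pattern in new data" workflow can be shown in miniature with a toy nearest-centroid classifier. The rock-type labels and measurements here are invented for illustration, and real scientific pipelines use far more sophisticated models:

```python
# Toy supervised pattern recognition: humans supply labeled training
# points; the algorithm labels new points by nearest class centroid.

def centroid(points):
    """Mean of a list of (x, y) measurement pairs."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def train(labeled):
    """labeled: {class_name: [(x, y), ...]} -> {class_name: centroid}."""
    return {label: centroid(pts) for label, pts in labeled.items()}

def classify(model, point):
    """Assign the class whose centroid is nearest to the point."""
    def dist2(c):
        return (point[0] - c[0]) ** 2 + (point[1] - c[1]) ** 2
    return min(model, key=lambda label: dist2(model[label]))

# The human-provided "training": two clusters of invented measurements.
training = {
    "basalt":    [(1.0, 1.1), (1.2, 0.9), (0.8, 1.0)],
    "sandstone": [(4.0, 4.2), (3.8, 4.1), (4.1, 3.9)],
}
model = train(training)
print(classify(model, (1.1, 1.0)))   # falls near the first cluster
print(classify(model, (3.9, 4.0)))   # falls near the second cluster
```

The algorithm never "understands" rocks; it only scales up the pattern-matching that the human labelers defined, which is the point made above.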

But, if what we are looking for is actual intelligence, where we notice that something is not consistent with expectations (or political rhetoric), then we are not "there yet" with AI.
 