Speaking of 'hallucinating' and lying:
a mentality imagines.
The way a mentality distinguishes the 'real' and practicable from the 'fantastic' and 'unreal' is based on experience.
The achievable vs the unachievable.
An amorphous AI has no way of distinguishing between them.
For it, lying and 'hallucinating' have no objective difference.
A robot would have (could gain) real experience with which to distinguish the achievable from the (likely) unachievable.
So a robot could quite possibly come to understand the difference between asserting/communicating a 'falsehood' and lying.
Honestly, I think a lot of corporate news is erroneous, aka 'fake news'.
Distortions, misdirections and lies.
People operate in a state of delusion and/or inaccuracy all the time, including supposed 'experts'.
Words are tools of the untethered imagination; only an interface with 'reality' (a dubious term) causes any embedded concept to be filtered, 'measured' against experience.
The schtick of science is to measure ideas against experience for validation,
and publications demonstrate how unreliable that effort is.
Sensible people measure what they're told by others, including supposed 'authorities', against their own experience. They do round-number estimates to see if things seem to add up.
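That kind of round-number check can be made concrete. Here is a minimal Python sketch of an order-of-magnitude sanity check; the headline figure, user count, and per-user rate below are made-up placeholders chosen only to illustrate the idea, not real data.

import sys

def order_of_magnitude_check(claimed: float, estimate: float, tolerance: float = 10.0) -> bool:
    """Return True if the claimed figure is within a factor of `tolerance` of a rough estimate."""
    return estimate / tolerance <= claimed <= estimate * tolerance

# Hypothetical headline: "The service handles 2 billion requests per day."
claimed_requests_per_day = 2e9

# Rough estimate from round numbers: ~100 million users * ~10 requests each per day.
estimated_requests_per_day = 1e8 * 10

if order_of_magnitude_check(claimed_requests_per_day, estimated_requests_per_day):
    print("Claim is within the plausible range; it doesn't contradict the rough numbers.")
else:
    print("Claim is off by more than an order of magnitude; worth questioning.")
    sys.exit(1)

The point isn't precision; it's that even a factor-of-ten bracket around a rough estimate is enough to flag claims that can't possibly add up.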