The article, by Pranshu Verma & Will Oremus (paywalled), which was prompted by Jonathan Turley's USA Today piece, just went up. The general analysis should be familiar to our readers (see the Large Libel Models thread), but here is a comment from an OpenAI spokesperson:
When users sign up for ChatGPT, we strive to be as transparent as possible that it may not always generate accurate answers. Improving factual accuracy is a significant focus for us, and we are making progress.
And one from Microsoft (which operates Bing, which uses GPT technology):
We have developed a safety system including content filtering, operational monitoring, and abuse detection to provide a safe search experience for our users… [U]sers are also provided with explicit notice that they are interacting with an AI system.
I think it's good to provide disclaimers, but I should note that they're unlikely to be much of a defense for the companies in defamation lawsuits; more on that here.