Luckily, a hallucinating generative AI produces output that human editors can fix. What is not likely to be fixable is when these new models, free of any limitations or biases in the data on which they are trained and of any kinks in the training algorithm, knowingly produce untruths. That, incidentally, is