My experience with GPT-3 has shown that AI can sometimes distort information. I have received responses that directly contradict what I know to be true. The problem, of course, lies in how the AI is trained and what safeguards are in place. Another issue is that the coders and those doing the training may lack the imagination to see where the algorithm needs to be tweaked. Yes, it all comes down to training, and there is a fear of inherent bias in the code that the coders never see, because they are often "the guilty parties" themselves.