There is a problem of incompatibility between AI systems like ChatGPT and Wikipedia's core principles.
1st: verifiability. Wikipedia is not a primary source, so references are essential. ChatGPT produces statements but no references to support them.
2nd: bias. On Wikipedia, all significant positions on an issue must be represented. ChatGPT is not able to describe the different positions; it generally takes only one.
3rd: disambiguation. Systems like ChatGPT do not handle disambiguation well, which suggests they are weak AI: when a term is ambiguous, they appear to pick only one meaning.
4th: neutral point of view. Systems like ChatGPT do not give neutral answers; they are frequently trained to favour a specific answer.
However, I personally consider that research in AI makes sense, because AI is making a lot of progress and the Wikimedia projects can benefit from it. But ChatGPT is a bad example for the Wikimedia projects.
Kind regards