I've been using ChatGPT off and on to help with C++ syntax and some Excel formulas. It's quite useful as long as you keep it to small blocks of code. I once asked it to write an entire C++ program that did what I specified, and it did... but the more I added, the longer the program got, and ChatGPT's output can apparently only be so long before it just... stops. So I switched to asking it for code in blocks instead, and that was extremely useful for developing my FB-01/IMF SCI patch bank converters, which turn SCI's patch banks into the native sysex format the FB-01 understands, and back again. It took me a while, but it would have taken way, way longer to do it myself (I still consider myself a novice programmer at best). Asking it to write small blocks of code for simpler functions also keeps you in the driver's seat for the overall design and flow of the program.
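For what it's worth, here's the kind of small, self-contained block I mean; a minimal sketch, not my actual converter code. It assumes a hypothetical array of voice data and only uses the generic sysex framing (0xF0 ... 0xF7) plus Yamaha's manufacturer ID (0x43); the real FB-01 dump messages have additional header bytes and a checksum that I'm leaving out here.

```cpp
#include <cstdint>
#include <vector>

// Wrap raw voice data in a bare-bones sysex message.
// Illustrative only: the real FB-01 dump format has extra
// header/sub-status bytes and a checksum, omitted here.
std::vector<uint8_t> wrapInSysex(const std::vector<uint8_t>& voiceData)
{
    std::vector<uint8_t> msg;
    msg.push_back(0xF0);            // sysex start
    msg.push_back(0x43);            // Yamaha manufacturer ID
    for (uint8_t b : voiceData)
        msg.push_back(b & 0x7F);    // data bytes must stay 7-bit
    msg.push_back(0xF7);            // sysex end
    return msg;
}
```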
I also asked it about SCI once by feeding it a link to the online SCI Documentation and quizzing it on syntax and how certain functions are performed. Most of its responses were actually correct. I even argued with it about how a certain SCI function worked (something about speech triggering with DoSound or DoAudio), and it turned out it was right and I was wrong. Considering how niche and little-known SCI Script is, it's kind of incredible that it could pick the language up well enough to correct me on a bad assumption.
ChatGPT and AI in general are great as long as you're aware of their weaknesses. They are absolutely prone to hallucinations (confidently made-up answers), and you have to be wary of that. If you understand the syntax (or any subject) well enough, know what output or result you're shooting for, and can recognize when ChatGPT gets it wrong, you can stay on top of it and it becomes an effective tool. You can get a lot done much faster than if you were working it out on your own, or asking questions online and waiting for responses. It's kind of a live search engine that gives very specific answers to your exact needs, rather than broad results you have to sift through to find something actually useful (like a problem someone had on StackOverflow that's similar to yours but not quite the same).
Not everything you read on the internet is true either, AI aside. And since AI is trained on the database that is the internet, you have to be just as careful as before. I don't think the future of AI is in doing everything for you (which breeds laziness and reminds me of that TNG episode where an ancient civilization was so advanced that its people forgot how any of their technology worked and tried to steal children from the Enterprise). I think the future of AI is in streamlining the work you do. If you just take everything it says for granted, well...
If you're writing a Pong clone and your ball bounces wrong because you used DX instead of DY, I'd say the cause would be pretty obvious. Not the best example you could've come up with, even as a simple one. But please, try again.
Still, a silly user error like that, one you simply missed, is exactly the kind of thing AI could point out (with exceptions and caveats, of course, at least in its current state), while built-in syntax checkers can't, because the syntax is technically correct; see the sketch below. That only works as long as you've described to the AI what you want your program to do and it understands your goal. And yes, that example would be pretty easy to diagnose and debug, but think of more complex issues that are just hard to spot sometimes. AI would draw your attention to them faster, and those mistakes and oversights would become far less common in the future.
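To make that concrete, here's a minimal sketch of the kind of slip we're talking about; the names (ballX, ballY, dx, dy) are hypothetical, not from anyone's actual code. It compiles fine, so no syntax checker will ever complain, but anyone (human or AI) who knows the ball is supposed to bounce off the top and bottom can spot the problem.

```cpp
// Ball movement for a hypothetical Pong clone (names are made up).
// The bug: the top/bottom bounce flips dx instead of dy, so the ball
// reverses horizontally instead of vertically. Syntactically valid,
// logically wrong.
void updateBall(float& ballX, float& ballY, float& dx, float& dy,
                float screenHeight)
{
    ballX += dx;
    ballY += dy;

    if (ballY < 0.0f || ballY > screenHeight)
        dx = -dx;   // BUG: should be dy = -dy;
}
```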