Interesting discussion. Full disclosure: I have spent the last 40 years of my professional life studying AI. I entered grad school to do my PhD in AI in 1983. We were a small group of diehards back then, when there was no tech industry, no web or WiFi, no smartphones, and PCs were still a novelty. Most of my career has been in academia, with two stints in industry: my first job after grad school was on the east coast at a big industrial research lab, and after climbing the academic ladder from young assistant professor to tenured full professor, I'm back in industry on the west coast in the Bay Area. Hint: the pay is better!
I agree with most of the comments here. AI is developing at a rapid pace, and I can tell you from the inside that even professionals like me don't fully understand why ChatGPT works. Theoretically the model is extremely simple, but given on the order of 100 terabytes of digital data and a billion dollars of compute, it seems to have learned to mimic some impressive human abilities. That said, it's a chatbot that only mines statistical correlations between words. But with a bit of clever processing, these correlations turn out to give it more power than we realized. Suffice it to say no one is more surprised by ChatGPT than its creators at OpenAI, some of whom I know.
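To make "mining correlations between words" concrete, here's a toy sketch of the idea at its crudest: a bigram model that counts which word tends to follow which, then predicts the most common continuation. This is my own illustrative example, not OpenAI's actual method; ChatGPT uses a vastly larger neural network, but next-word prediction from observed text is the same basic game.

```python
from collections import Counter, defaultdict

# Tiny "training corpus" (purely illustrative).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # Return the most frequent continuation seen in training.
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # prints "cat" -- it followed "the" most often
```

Scale the counting up by many orders of magnitude, replace the raw counts with a learned neural network, and you get something surprisingly fluent, which is roughly why its power caught even insiders off guard.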
The scary part is that as powerful as today's computers are, they are nothing compared to the quantum computers now being developed in research labs. A quantum-powered successor to ChatGPT would likely be far more capable. Can we control it? Unlikely. For now, as Microsoft has done with Bing, one can impose crude controls like capping sessions at 10-20 questions. The next generation will not be so easy to control.