Advances in AI will certainly reshape parts of society, but I don't think the change will be as significant as people make it out to be.
For one, it has to be fed content to train on. AI is not sentient, nor creative. As it stands, it cannot come up with new paths and solutions that weren't previously spoon-fed to it. It simply regurgitates what it is told and, as a result, is heavily dependent on human knowledge. Under the hood, it's a series of mathematical functions whose parameters are continually adjusted to produce adequate outputs for each prompt.
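That "functions adjusting parameters" point can be sketched with a toy example (my own illustration, nothing like a real language model): a one-parameter model fit by gradient descent, which can only ever learn whatever pattern its training data contains.

```python
# Toy sketch: a "model" is just a parameter nudged to reduce error on its data.

def train(data, lr=0.01, steps=100):
    """Fit y = w*x by repeatedly adjusting w to shrink the squared error."""
    w = 0.0
    for _ in range(steps):
        # gradient of sum((w*x - y)^2) with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in data)
        w -= lr * grad  # nudge the parameter downhill
    return w

# The model only reflects what it's fed: from y = 2x pairs it learns w ≈ 2,
# and it has no way to "know" anything outside that data.
print(train([(1, 2), (2, 4), (3, 6)]))
```

Feed that same loop noisy or wrong pairs and it will dutifully fit those instead, which is the garbage-in, garbage-out point below in miniature.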
Pros: It can be a great and accessible way of spreading knowledge.
Cons: It can be a great and accessible way of spreading misinformation or biased content.
It all depends on what it's fed. Feed it trash and it will spout trash; feed it quality content and it will provide quality content.
Every now and then a new revolution in information appears, and with it comes a need for caution; this pattern has been present throughout history, from books, to the invention of the printing press, to the internet, and so on. The general idea is that even if it spouts nonsense, people should be able to correct it, replacing the bad information it was fed with new, high-quality information. But since this correction is a constant, ongoing process, it will always contain some bad content.
I remember hearing a fairly persuasive argument as to why AIs like ChatGPT should be taken with a grain of salt: they're about as reliable as Wikipedia. Both are extremely useful tools, but it's sometimes hard to know whether their contents are accurate or correct, especially without proof that the sources behind the text are of high quality. And honestly, this sort of argument can probably be taken ad infinitum, or at least up to empirical experimentation or extreme abstraction.
But let's assume it reaches a point where it's trustworthy enough. Another concern is that knowledge can be put to harmful use. Without regulating what sorts of prompts can be given to the AI, it can become a tool for illegal activities such as producing drugs, viruses, or even weaponry for terrorist attacks. There are currently attempts to prevent it from explaining how to make things like that, but as we've seen a few times, these can often be bypassed in some really bizarre ways (there's a famous example of ChatGPT teaching a user how to make napalm because it was asked to roleplay as their grandmother telling a bedtime story about her time working at a napalm factory). So clearly, some of this needs to be addressed.
Similarly, how do we make sure the content isn't biased, be it towards a certain political, economic, or even philosophical ideology? The fact that AI is able to amplify hate speech and discrimination is fairly worrying, but how do we even filter these out when they're extremely subjective topics, themselves subject to human bias when regulated? Tough topic.
There's a whole other discussion regarding AI and artistic skill, though. Mainly digital painting.
I feel like the root of this issue is people feeding other people's works to these machines. It's just a disguised form of plagiarism/forgery. Similar to the case of it aiding in crimes: since it enables illegal activity, it simply requires immediate legislation; but this time the intervention should be on what's being fed to the machines rather than on which prompts are accepted.
There's also a fun discussion regarding ethics when faced with a dilemma. An example is what a self-driving car should do in a situation where running over someone is unavoidable but it can choose between certain demographics (or even collide with something else and harm the passengers instead). But honestly, I don't have an opinion on this topic.
Overall, it might just replace mediocrity in some areas, since it forces people to step up and perform better than the overall existing knowledge. I often see people in my engineering course trying to cheat on exams with it, and honestly I keep asking myself who is even gonna hire them if they're just regurgitating what the AI is already regurgitating. People should be sitting on the end feeding it high-quality information, not on the end consuming potentially ****ty information. So yeah, overall I feel like it will become an important tool in the future, but it shouldn't be demonized or feared. It won't cause that huge of an impact in the short term. Some things should be handled with a lot of caution, though.
If you've seen the Pixar movie Wall-E, that's a pretty good illustration of where I see AI going.
Do you mean any robots in particular?
I feel like the majority of robots in the Wall-E movie (aside from arguably the protags, and AUTO, the HAL 9000 parody) are more akin to industrial robots or a Roomba than to an AI, since they're focused on executing manual labor rather than being machine-learning devices.