Oh, and in terms of what I use or have used AI for...
I've definitely used it for programming tasks. It's quite useful for that, but I'm not completely buying the idea that it's ready to replace programmers, because in its current state you still need to give it pretty specific direction and evaluate what it's doing; it can and will go off the rails quickly on complex problems. It's true that you can use AI to build entire apps without writing any code at all, but I think people will find that if they keep using, maintaining, and enhancing those apps for a few years without ever touching the code, the result is eventually going to be a mess. The real debate is whether AI coding capabilities will keep improving fast enough that this stops being a problem. I tend to think we'll hit diminishing returns before then, but I can't claim to truly know.
I definitely worry about what will happen if companies stop hiring and mentoring junior programmers, because you need a pipeline of junior developers in order to eventually have senior developers. Once all the senior people retire, if there are no more junior people to supervise the AI tools, we may be in a mess. I guess the bet is that by that point, the AI tools will be so good that you don't need any human to understand the code, but color me skeptical about that.
Beyond programming, I don't let AI write for me, but I sometimes use it as a thought partner for writing. Occasionally, if I'm stuck on how to start something, I'll tell an AI tool what I'm trying to do, and its ideas can give me a jump start. At times, when writing a complex document, I'll give it a stream-of-consciousness brain dump of what I want to get across and have it suggest how to structure it. And occasionally I'll feed in something I've written and ask for feedback or a modified draft. But in none of these cases do I ever use the AI output directly; I just use it to help improve my own writing.
Sometimes I'll use an AI tool for data analysis. For example, I fed in a bunch of credit card transactions and asked it to look at trends by grouping similar categories together (the different cards use somewhat different category names, and I wanted spending summed up by category, unified across cards). But you have to be really cautious with stuff like this, because even with things as seemingly straightforward as arithmetic, it can make very stupid mistakes. I think it's fine for directional data, or for things where you can easily verify the results, but I would not make important decisions based solely on AI output for something like this.
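To illustrate the kind of easily-verified analysis I mean, here's a toy sketch in Python with pandas. Everything in it is made up for illustration (the cards, categories, and amounts are not my actual data), but it's the basic shape of the task: map each card's category names onto one unified set, then sum.

```python
import pandas as pd

# Hypothetical transactions from two cards that use different category names.
txns = pd.DataFrame({
    "card":     ["visa", "visa", "amex", "amex", "amex"],
    "category": ["Dining", "Gas", "Restaurants", "Fuel", "Restaurants"],
    "amount":   [42.10, 35.00, 18.75, 40.20, 62.30],
})

# Manual mapping from each card's categories to one unified set.
unified = {"Dining": "Food", "Restaurants": "Food", "Gas": "Auto", "Fuel": "Auto"}

summary = (
    txns.assign(category=txns["category"].map(unified))
        .groupby("category")["amount"]
        .sum()
)
print(summary)  # Auto 75.20, Food 123.15 -- easy to check against the raw rows
```

The nice thing about a problem shaped like this is that you can spot-check the totals against the raw rows in a few minutes, which is exactly what makes it a reasonably safe use of AI.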
AI can be really useful for figuring out what to search for in a more traditional search. I can enter a rambling description of what I'm trying to find or figure out, and even if the AI doesn't give me the right answer, it usually gives me enough direction or terminology to then go to Google and do a proper search for the right thing. (Sometimes even this fails in comical ways, but usually it works.)
Finally, I occasionally use AI for research in what I'd call a "bar trivia" context: I'll ask it to explain stuff I'm just curious about, where it isn't really that important that the answer be perfect. The same as if I were chatting with friends about it at a bar over a few beers. If something seems off, or I get more curious based on what I read, I'll go find actual sources and confirm the facts. But for low-stakes stuff, AI answers are often good enough.
I have two big concerns with these technologies.
First, as I mentioned above, I don't think human brains evolved to evaluate the veracity of AI output, and the problem is only going to get worse. We see it in threads here, where people post AI output as if it were meaningful, and we see it on social media, where AI-generated photos and videos go viral with people believing they're real. This is really dangerous, and I don't think simple education or regulation is going to solve it. I don't know what the answer is.
Second, we have to remember that AI works by being "trained" on content created by humans. All AI writing is based on writing that humans did; AI photos come from models trained on photos humans took; AI code comes from models trained on code humans wrote. If we get to a place where a large proportion of the new content in the world is AI-generated, what will the next AI models train on? It becomes a big self-perpetuating loop, and no genuinely new ideas or creativity can come out of that.
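To make that loop concrete, here's a toy simulation (my own illustration in Python with NumPy; it has nothing to do with how real models are actually trained): fit a simple distribution to some data, generate new "data" from the fit, refit on that, and repeat. With no fresh human data entering the loop, the mean drifts and the spread tends to shrink over generations, which is roughly the narrowing effect researchers who study this call "model collapse".

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50  # small samples per generation make the effect visible quickly

# Generation 0: the "human" data.
data = rng.normal(loc=0.0, scale=1.0, size=n)

for gen in range(1, 101):
    mu, sigma = data.mean(), data.std()
    # Each generation "trains" only on output sampled from the previous fit.
    data = rng.normal(loc=mu, scale=sigma, size=n)
    if gen % 10 == 0:
        print(f"gen {gen:3d}: mean={mu:+.3f}, std={sigma:.3f}")
```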
Anyway, it's both a brave and scary new world. It'll be interesting to watch.