I find these tools useful for tasks where I already know, or have an idea of, the correct answer or solution, or for getting an initial overview of a concept, which I can then double-check against another source if necessary. I don't think these tools are reliable enough yet to be trusted blindly. To be fair, though, we probably shouldn't blindly trust many other online sources either. The big difference is that AI tends to make things up more often and doesn't think like a human.
More specifically, I use Gemini Advanced almost every day for
- code/concept explanations
- simple code writing
usually when I already have an idea of the correct answer or solution.
It has saved me some time, because writing code, even just a few simple lines, takes time, and usually more time than reviewing it.
There are probably other successful use cases and tools. Some people use them for tasks like image generation. There has been some hype around the latest OpenAI model, which can also generate images, and very good ones in some cases, but I found it still struggles with problems that have been around since the generative AI hype started a few years ago, e.g. fingers. There are many tools out there, and people are trying to take advantage of the hype to do business and make some cash.