I'm using it extensively. For instance, I have to create contracts with businesses I work with, and AI is really helpful: it produces a draft within minutes, and I don't have to get a lawyer involved (I have sufficient legal know-how to check the texts myself). I ask Claude.AI many questions, which helps me learn about any kind of topic (e.g. I wanted to know more about mobile nuclear reactors and got a quick summary within seconds). Naturally, you should take nothing at face value; blindly believing AI won't do you any good, and AI does not replace your brain. I create marketing materials and personal profiles with the help of AI, and I've also improved my CV. I even used it to write a message to a friend who lost their son to cancer; I was truly at a loss for words, but AI helped me come up with some useful sentences (which I personalized). All in all, it is extremely useful, even for answering questions here on Lemmy.


When you look at an AI summary of a topic you know little to nothing about, how do you know that the summary is factual and not something the LLM just made up? (For example, someone on Reddit said something incorrect and the LLM was trained on that.)
How much do you have to fix what AI gives you for legal documents (vs. just re-wording things)?
I would get a lot of utility out of LLMs, like you do. These are great applications of LLMs. I don't use them because they are controlled by large companies that may partner with Google and cooperate with law enforcement. I haven't bothered to work on hosting my own yet.
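For what it's worth, self-hosting can be fairly low-effort these days: run a local server like Ollama and query its HTTP API. Here's a minimal Python sketch, assuming Ollama is installed and running on its default port with a model such as llama3 already pulled (the model name and the `ask_local_llm` helper are just illustrative choices, not anyone's official setup):

```python
# Minimal sketch: query a locally hosted LLM through Ollama's HTTP API.
# Assumes Ollama is running locally (default port 11434) and that a
# model named "llama3" has already been pulled -- adjust to taste.
import json
import urllib.request

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one complete response instead of chunks
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local_llm("Summarize how small modular reactors work."))
```

Everything stays on your own machine, so none of the prompts leave your network, which addresses the privacy concern above.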
Instruct it to include links to sources and verify them yourself, just like you would with an argument prepared by a human. Not that mysterious, unless you blindly take everything at face value, which you shouldn't.