I’m going for a hot take here. The issue is not using AI (barring privacy/confidentiality concerns), but that the output wasn’t thoroughly validated.
I treat AI like a new trainee. Everything has to be validated by an experienced person.
A first-year makes predictable mistakes that are easier to review.
An LLM will fucking fake shit, and it's way harder to spot unless you review the entire fucking thing yourself, which at that point loses all value from a cost perspective.
AI is useful as a support tool for mid-level and senior pros, since they can catch the lies as they happen.
The issue is still using AI at all, since it's sensitive data and shouldn't be given to 3rd parties.
Yup. That’s the biggest problem I see with AI. People don’t realize it’s fallible. They’re so busy wanting shortcuts and reduced work time that they’re ignoring the accuracy.
That’s part of why you’re starting to see reports of AI not saving any time or money: you’re basically shifting the effort from the creation phase to the validation phase.