I know the title will trigger people but it’s a short so please briefly hear her out. I’ve since given this a try and it’s incredibly cool. It’s a very different experience and provides much better information AFAICT
In my limited experience, Gemini responds better to flat, emotionless prompts without any courteous language. Polite phrasing seems more likely to produce "I can't answer that, sorry" responses, even to questions it absolutely can answer (and will, given a more terse prompt).
So I think my point is "it depends". LLMs aren't intelligent; they just produce strings based on their training data. What works better and what doesn't will depend entirely on the specific model.