Ever read a headline and thought, “Something feels off, but I can’t explain why?”

I built CLARi (Clear, Logical, Accurate, Reliable Insight), a custom GPT designed not just to verify facts—but to train your instincts for clarity, logic, and truth.

Instead of arguing back, CLARi shows you how claims:

  • Distort your perception (even if technically true)

  • Trigger emotions to override logic

  • Frame reality in a way that feels right—but misleads

She uses tools like:

🧭 Clarity Compass – to break down vague claims

🧠 Emotional Persuasion Detector – to spot manipulative emotional framing

🧩 Context Expansion – to expose what’s being left out

Whether it’s news, social media, or “alternative facts,” CLARi doesn’t just answer—she trains you to see through distortion.

Try asking her something polarizing like:

👉 “Was 5G ever proven unsafe?”

👉 “Is crime actually going up, or is it just political noise?”

🔗 Link to CLARi

She’s open to everyone via the link above—designed to challenge bias, dissect manipulation, and help you think more clearly than ever.

Let me know what you think! Thanks Lemmy FAM!

  • Condiment2085@lemm.ee · 5 points · 3 days ago

    Really cool, thanks for sharing! Just used it to talk about how safe seed oils are and it did a great job. Really cool how it breaks down each part of the argument so cleanly.

      • Condiment2085@lemm.ee · 2 points · 2 days ago

        Is there a way I could share without sending any of my personal info with it?

        Also wanted to follow up and give you kudos now that I’ve had more time to play with it.

        It genuinely helped me understand more about persuasion and evidence-based decision making, and it does a wonderful job of always relating back to its base clarity tools.

        Also, regarding the comment saying we can’t trust it because it’s made by companies like OpenAI: that’s always worth keeping in mind, but it doesn’t make its responses totally useless.

        I’ve asked it questions with leanings in just about every direction:

        • Is human caused climate change real?
        • What is Joe Rogan’s agenda/what side does he take on ideas?
        • What leads to the happiest countries?
        • Capitalism leads to the most profitable, not best, products.

        With all of these, I feel like it didn’t act as a source of truth. Rather, it gave me a system to break down any bias or emotional wording in the claims themselves, to figure out whether they’re falsifiable (which is actually a great concept I learned about while using it), and it gave me a lot to think about each time.

        Of course it would bring up research on topics that were well researched like national happiness surveys or climate research.

        Overall great work and I think even if it’s not perfect, the logical way it approaches these claims really would help anyone in today’s media landscape. ❤️