Dope Thinkers THRIVE in AI: Building Ethical Intelligence at Kulur Group

LeeAnah James

The rapid adoption of generative AI has left businesses scrambling to enforce AI policies and guidelines. While some companies are folding AI governance into their cybersecurity departments, most are simply looking for a place to start: a responsible AI framework that fits their goals. That framework is THRIVE.

At Kulur Group, we use THRIVE to drive home the importance of transparency, human-in-the-loop oversight, responsibility, integrity, verifiability, and empowerment as we grow ethical intelligence within our agency.


Transparency

When utilizing AI, transparency is two-fold: it involves internal disclosure of AI use during the development or generation of company assets, as well as external disclosure of AI use in the collection, development, and/or production of client data. When AI is used in obvious ways, trust is not impacted; consumers are accustomed to avatars representing brands. However, when AI is used in ways that make outcomes appear “human” without identifying them as AI-generated, trust suffers. According to a Forbes report on AI consumer sentiment, most consumers are open to businesses using AI when companies are transparent, and as AI adoption becomes the norm, trust is a valuable asset.

When we examine transparency from a domestic legal perspective, states like California and Colorado continue to lead the way, as both have stricter regulations than most. These laws usually center on businesses sharing what data is collected and how it is used. For users and consumers concerned about corporate AI implementation, companies with strict AI-transparency content policies can position themselves as trusted stewards of their customers' data.

Human-in-the-loop (HITL)

The next pillar to consider when building ethical intelligence is what it looks like to keep a human in the loop. In 2024, McKinsey reported that 72% of companies use AI in at least one business function, with the CEO heading AI governance at 29% of them. These numbers highlight the growing reliance on automation and the need for broader oversight. While a CEO may be qualified to oversee such governance, the rapid growth and application of AI tools on the market require a broader, more diverse approach, especially when considering AI governance in departments such as marketing.

Enter human-in-the-loop (HITL). Human-in-the-loop refers to the practice of integrating human judgment at key stages of an AI system’s design, deployment, and decision-making process. It ensures that while AI may enhance efficiency, humans remain responsible for context, empathy, and ethical reasoning.

Keeping a human in the loop means creating intentional checkpoints where human expertise can validate, correct, or override AI-generated insights. This is especially important in fields where data alone cannot capture nuance, including areas like hiring, healthcare, public safety, and education. When humans act as ethical gatekeepers, they prevent automation from amplifying bias or making unchecked decisions that impact people’s lives.
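For teams that build their own content pipelines, a checkpoint like this can even be made explicit in code. The sketch below is a minimal illustration only; the names (`Draft`, `human_checkpoint`, `publish`) are hypothetical and not part of any specific tool, but they show the core idea: nothing ships until a named human has validated, corrected, or rejected the AI output.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    """An AI-generated asset awaiting human review."""
    content: str
    approved: bool = False
    reviewer: Optional[str] = None

def human_checkpoint(draft: Draft, reviewer: str, approve: bool,
                     revision: Optional[str] = None) -> Draft:
    """A human validates, corrects, or overrides the AI output."""
    if revision is not None:
        draft.content = revision   # the human's edit overrides the AI draft
    draft.approved = approve
    draft.reviewer = reviewer
    return draft

def publish(draft: Draft) -> str:
    """Publishing is blocked unless a named human has signed off."""
    if not draft.approved or draft.reviewer is None:
        raise PermissionError("human sign-off required before release")
    return draft.content
```

The design choice worth noting is that the gate is structural, not procedural: `publish` refuses unapproved drafts by construction, so the checkpoint cannot be skipped by accident.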

For brands, maintaining a human touch strengthens accountability and builds trust with users who expect transparency and fairness. When a company can clearly articulate where and how humans intervene in the AI process, it signals responsibility and reinforces credibility. 

Responsibility

Where there is challenge, there is opportunity, and challenger brands have the opportunity to lead the way on ethical AI responsibility. They can confront ethical concerns directly by using creativity and accountability to redefine what ethical AI engagement looks like. Rather than avoiding hard conversations, they can expose, address, and challenge them head-on, positioning themselves as allies to their customers and advocates for responsible innovation. Challenger brands don’t hide. They educate, empower, and engage. They transform skepticism into trust through human oversight, ethical clarity, and authentic connection.

Integrity

Integrity is more than compliance; it’s alignment. When AI practices are guided by integrity, they reinforce brand trust and consistency. Every decision related to AI, whether the adoption of a new tool, the handling of user data, or the automation of customer interactions, should be traceable and defensible.

By maintaining records of how, why, and when AI is being used, companies demonstrate accountability and foresight. This paper trail protects the company and empowers employees, giving them clear reference points when questions arise.

For users, integrity should be visible. They should be able to easily understand how AI is applied in products or services through clear disclosures, accessible policies, and consistent communication. When a company aligns its AI actions with its stated values and demonstrates that alignment through transparency and documentation, it earns and sustains users' trust.


Verifiability

Verifiability is the ability to trace, confirm, and trust the origin of information. In the context of AI, it’s the safeguard against misinformation loops and the assurance that what you publish or consume is accurate, credible, and responsibly sourced. For brands, verifiability means having systems in place to confirm that the data, insights, and outputs generated by AI can be supported with human-reviewed evidence.

The challenge of verification grows as AI tools increasingly rely on data generated by other AI systems. A report from Ahrefs found that 74% of newly published web pages contain AI-generated content. This means that instead of learning from human insights and diverse experiences, AI models are beginning to pull from and cite content produced by other machines. The result is a growing risk of echo chambers, where misinformation is recycled and amplified rather than corrected.

To mitigate this, businesses must go beyond automated validation. Encourage your teams to:

  • Cross-check data from multiple, credible human-authored sources.
  • Document verification steps, including when and how claims were reviewed.
  • Label AI-generated content internally to distinguish it from human-created material.
  • Promote original thought and authorship, ensuring that your brand contributes to the data ecosystem rather than echoing it.
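For teams that want to operationalize the steps above, the checklist can be captured as a simple provenance record attached to each piece of content. This is a minimal sketch, not Kulur's actual tooling; the names (`ContentRecord`, `log_check`, `is_verifiable`) are hypothetical, and the "verifiable" bar shown here (at least one human-authored source plus one documented check) is an assumed policy you would tune to your own standards.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ContentRecord:
    """Internal provenance label for a piece of published content."""
    title: str
    ai_generated: bool                              # label AI output internally
    sources: List[str] = field(default_factory=list)        # credible human-authored sources
    verification_log: List[Dict[str, str]] = field(default_factory=list)

    def log_check(self, claim: str, reviewer: str, when: str) -> None:
        """Document when and how a claim was reviewed."""
        self.verification_log.append(
            {"claim": claim, "reviewer": reviewer, "date": when}
        )

    def is_verifiable(self) -> bool:
        """Assumed minimum bar: one human source and one documented check."""
        return bool(self.sources) and bool(self.verification_log)
```

A record like this gives editors a single place to answer the two questions that matter most: where did this come from, and who checked it?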

Encouraging employees to produce original, human-generated content not only strengthens credibility but also positions your brand as a reliable voice once robust AI-detection systems become standard. In a landscape crowded with machine-made narratives, authenticity and traceability become a brand’s most valuable assets.

Empowerment

Empowerment means making AI understandable and usable for everyone, not just the teams that handle the more technical aspects of your business. It’s about giving your team and customers tools they can trust, training to use them safely, and clear guidance on how AI supports rather than replaces their work. Our GAIN Framework is built to help you identify what should stay human, what AI can handle, and how to make the two work together.

To THRIVE is a choice to lead ethically, responsibly, and with integrity. Are you set up to THRIVE?
