
We live in an era of accelerated transformation, where AI tools are unlocking unprecedented efficiency. However, a tool that’s a game-changer one day can become a risk the next.
How do we adopt these technologies with agility and ambition without compromising the security of our clients and our own business?
Today, we want to share how we created an internal team of Security Advocates.
When a Core Tool Changes Security Rules
Like many companies, we use OpenAI's tools. But the fallout from the New York Times lawsuit has fundamentally changed their data policy: the "30-day data retention" guarantee is gone.
For us, that’s not just an internal issue; it’s a direct conflict with the promises we make to our clients. Every code snippet, every piece of sensitive data passed through that API could now be stored indefinitely for legal review.
The typical corporate playbook for this is simple: a few managers lock themselves in a room, make a decision, and send out a company-wide memo.
We take a different approach.
A Human Bridge We Call "Security Advocates"
Instead of a top-down "control tower" that dictates policy, we build "bridges." In this case, that bridge is the Security Advocates team.
This is a cross-functional group with a representative from every team at Kaizen, from product development, marketing, and finance to infrastructure and people care.
These advocates are facilitators and translators. Their job is a two-way street:
- They share security best practices with their teams.
- More importantly, they bring their teams' questions, frustrations, and real-world needs back to the group.
This isn't about consensus-driven committees; it's about a constant, high-speed dialogue that helps us build policies that actually work for the people who have to live with them.
Figuring Out the OpenAI Shift
This team was put to the test the same week it was formed, showing just how fast we need to adapt.
When the OpenAI news hit, our first move wasn't to issue a ban. It was to ask an open question through our Advocates: "This is happening. How does this actually impact your day-to-day work?"
The goal was to get an honest, ground-level picture:
- Would a developer lose a tool that’s critical to their workflow?
- Does our design and marketing team rely on GPT for creative tasks that an alternative can’t handle?
- Is there a use case somewhere we haven’t even thought of?
Initially, we presented a clear proposal: discourage the use of OpenAI wherever data retention is an issue and switch to an alternative like Claude that better aligns with our privacy goals. We were also brutally honest about the trade-offs: the alternative is safer, but it might require different prompting and some workflow adjustments.
The final outcome wasn't a total ban. After discussions with the design team and others, we landed on a hybrid approach: we'd restrict OpenAI for any work involving sensitive data, while still allowing its conscientious use for non-sensitive tasks. We even built in a process to use it on specific projects when a client gives us their explicit consent.
This flexibility is the direct result of having the conversation out in the open, creating a solution we all own because we all understood the 'why' and had a hand in building it.
This Is Our Value
This approach is more than a security strategy; it's our culture in action. In a world where the ground is constantly shifting, we believe the most robust and responsible solutions come from having every voice in the room. Our clients expect us to be adaptable and trustworthy, and this is how we deliver on that promise: not just by having the right answers, but by having the right process to find them.