Deployment Playbook
Announcing ClarityLift to your team without wrecking trust.
Five assets, sequenced across the announcement arc. Use them in order. The ones that actually build trust are the follow-ups, not the initial announcement.
1. Pre-announcement
Kill the rumor before the all-hands
Post this 24 hours before the all-hands, in a general channel. It exists so nobody hears about ClarityLift through a leaked rumor. The goal here is to pre-empt, not to explain. Explanation belongs in the all-hands itself.
Draft — Slack / Teams message
Hey team. Short heads-up.
At tomorrow's all-hands, we'll talk about a new tool we're bringing in called ClarityLift. It analyzes patterns in team conversations to surface friction, engagement, and retention signals at the team level.
I want to be clear on a few things up front so nobody spends tonight wondering:
- It does not read DMs. Ever.
- It does not identify individual speakers in any output.
- Every signal is scoped to a team of 10+ people.
- We pick which channels it analyzes. Default is off.
I'll walk through exactly how it works tomorrow, take questions, and share the full documentation. If you have questions you'd rather ask in private before then, reply to this message or DM me.
— [Founder name]
This message is not the explanation. It is the commitment. The all-hands is where the explanation happens. Resist the urge to cram all the architecture into this post — it will read as defensive.
2. All-hands
Speaking notes, 4 to 6 minutes
Structure the talk as why / what / boundaries. In that order. Employees want to understand your reasoning before they evaluate your decision.
Why (60 seconds)
The short version: the last time we had a senior departure / disengagement event / culture issue [pick the specific example], we saw the signs in retrospect. The question I've been sitting with is whether we could have seen them in real time.
Our options as a leadership team are basically: run more surveys (we know how that goes), count messages (that measures the shape of conversation, not the content), or find a way to understand the content without violating anyone's privacy. That third option is what ClarityLift does.
What (2 minutes)
ClarityLift reads patterns in our team channels — not individual messages — and surfaces four kinds of signals: friction, disengagement, retention risk, and positive collaboration. Not at the person level. At the team level, with a minimum group size of 10.
A signal looks like: “engineering is showing elevated friction this week, particularly around the handoff from product.” It does not say who. It cannot say who. That is a product of how the tool is built, not a policy we adopted.
What I'll see as the founder: weekly digests with trend lines per team, alerts when a pillar score drops sharply, and a dashboard I can look at when something feels off. What I'll do with it: start conversations with team leads earlier than I would have otherwise.
Boundaries (2 minutes)
Four rules that are built into the tool, not promised by policy:
- No DMs. Ever. The tool does not have access to direct messages. Slack and Teams will not grant it that access even if I wanted them to.
- No individual attribution. The tool's output cannot identify a specific person. This is not a permission we turned off — the data required to do it is not collected.
- Opt-in channels only. I pick which channels get analyzed. Default is none. I'm sharing the list I'm starting with in a minute, and we'll update it publicly if it changes.
- Cannot be used for reviews or comp decisions. Our contract with ClarityLift explicitly prohibits this, and structurally the data doesn't support it anyway.
Close (60 seconds)
I'm going to share the channels I'm opting in. I'm going to share the full architecture document — it's on their public site, not a secret. I'm going to come back to this in a month and share what we learned and what we didn't.
Questions. All of them. Including the uncomfortable ones.
Read the room. If people look unconvinced, say so. “I can see this is landing heavy — let's take an extra 10 minutes” is a better response than moving on.
3. Written FAQ
Post this right after the all-hands
Employees will re-read this over the following days as they think through questions they didn't ask in the moment. The value is in having something scannable and specific to point at. Copy this into your wiki or Notion, edit the channel list, and ship it.
Which channels are analyzed?
Currently: #engineering, #product, #sales-floor, #customer-success. [Edit this to match your actual list.]
If we add or remove a channel, I will update this FAQ and post the change in the general channel. You can always ask me directly.
Does ClarityLift read my messages?
It processes messages from the channels above for signals, then discards them. No message text is stored in their database. The tool looks for patterns across teams of 10+ people, never at individual messages attached to individual people.
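For anyone who wants a concrete picture of "processes, then discards," the sketch below shows the general shape of that kind of step. The names are invented for illustration; it is not ClarityLift's actual code, and the architecture page linked at the end of this FAQ is the authoritative description.

```typescript
// Illustrative sketch of a process-and-discard step. Invented names; not ClarityLift's code.
// Key property: message text exists only in memory while it is scored. What gets kept
// is a team-level aggregate with no text and no authors attached.

// Stand-in for whatever model scores one message for one kind of signal.
declare function scoreMessage(text: string): number;

function weeklyFrictionAggregate(
  teamId: string,
  messageTexts: string[],   // pulled from opted-in channels only
  participantCount: number, // distinct people active in those channels this week
): { teamId: string; frictionScore: number; groupSize: number } | null {
  // Below the 10-person minimum, nothing is emitted at all.
  if (participantCount < 10 || messageTexts.length === 0) return null;

  const total = messageTexts.reduce((sum, text) => sum + scoreMessage(text), 0);

  // Only the aggregate is returned for storage; the message texts go out of scope here.
  return { teamId, frictionScore: total / messageTexts.length, groupSize: participantCount };
}
```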
Does ClarityLift read DMs?
No. The tool does not have access to direct messages. This is enforced at the Slack / Teams integration layer — the permissions it requested at install do not include DM access.
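If you want to verify this yourself: Slack shows an app's requested scopes on the consent screen at install time. A public-channel-only request looks roughly like the sketch below (illustrative, not ClarityLift's actual manifest; the consent screen is the source of truth).

```typescript
// Illustrative sketch of a public-channel-only scope request. Not ClarityLift's actual
// manifest; the authoritative list is the one shown on the install consent screen.
const requestedScopes = [
  "channels:read",    // list public channels so the opt-in set can be chosen
  "channels:history", // read message history in public channels the app has been added to
];

// Deliberately absent: "im:history" and "mpim:history", the Slack scopes that grant access
// to direct messages and group DMs. Without them, the platform never exposes DM content.
```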
Can you use this to see what I specifically said?
No. There is no output in the tool that shows individual messages attributed to individual people. The schema does not have that information. I wouldn't be able to answer “what did [employee] say about [topic]” even if I wanted to.
Could this be used against me in a performance review?
No, for two reasons. First, contractually: the acceptable use policy I signed prohibits using ClarityLift outputs for hiring, firing, performance review, or compensation decisions. Second, architecturally: the outputs are team-level, so there is no individual data to bring into a review in the first place.
What if future leadership doesn't respect these boundaries?
That concern is real and I want to engage it directly. The architectural limits (no individual data, 10-person minimum, no DMs) survive any change in leadership — they are structural, not policy. The AUP that prohibits individual-decision use is contractual and would have to be explicitly violated.
If you see us drifting from these commitments, hold me accountable to this announcement.
Can I ask to remove a specific channel?
Yes. Tell me, and I'll disable it. The channel stops feeding signals within 30 seconds.
Can I see the technical details?
Yes. The public architecture page is at claritylift.ai/privacy-architecture. Section 4 lists exactly what the product cannot do. Section 2 walks through the ingest pipeline. It is not marketing material — it is the technical spine of how the tool is built.
4. Hard questions
The ones people actually want to ask
Read this before the all-hands. These are the uncomfortable questions, with draft responses that are direct instead of corporate-safe. Edit them to sound like you, but do not edit them to sound softer. The soft versions are what make people stop trusting the commitment.
Question
This feels like surveillance. How is it not?
Draft response
Fair. Let me give you the specific answer instead of the reassuring one.
Surveillance implies watching individuals. The architecture of this tool makes watching individuals impossible — not because we turned off a feature, but because the data to watch individuals is not collected. The minimum group size is 10. The schema has no user id on any output row. If I wanted to know what a specific person on your team has been saying, I could not ask the tool. The tool cannot answer that question.
That is the specific claim. Section 4 of the architecture page lists everything the tool cannot do, with the reason for each.
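If someone pushes for specifics, it can help to show the shape of a team-level output row. The sketch below is an illustration of the claim above, not ClarityLift's exact schema; point people at the architecture page for the real thing.

```typescript
// Illustrative shape of a team-level output row. Not ClarityLift's exact schema;
// the public architecture page is the real reference.
// Note what is absent: no user ID, no message ID, no message text.
interface SignalRow {
  teamId: string;    // a team, never a person
  weekStart: string; // ISO date for the reporting week
  pillar: "friction" | "disengagement" | "retention_risk" | "collaboration";
  score: number;     // aggregate score for the whole team that week
  groupSize: number; // distinct participants behind the aggregate; always 10 or more
}
```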
Question
Did you consult us before deciding?
Draft response
Honestly: no. I made this decision as a leadership call, and I am telling you about it now. I want to be up front about that because the alternative — telling you we had a whole consultation process — would not be true.
What I am committing to is the other side of that. From this point forward, everything about this tool is transparent. You see the channel list. You see the architecture document. You can ask me to remove a channel and I will. If we expand scope — which today means picking a new channel — I will announce it the same way I am announcing this.
Question
What happens when you get acquired and new leadership doesn't care about these rules?
Draft response
The architectural rules are not rules I can turn off. The no-DMs rule is baked into the Slack / Teams integration: changing it would require a full re-auth with new scopes, which employees would see. The 10-person minimum and the absence of individual attribution are both enforced in the database schema. A new leadership team inheriting this system would inherit those limits too.
The contractual rules — the AUP that prohibits individual-decision use — a new leadership team could theoretically violate. But that violation would be a specific, public breach of a specific, public document. It would be exactly the kind of thing a whistleblower or plaintiff's attorney could point at.
None of that is zero risk. But the risk surface is much smaller than deploying something without these limits.
Question
What signal could this possibly catch that my manager couldn't?
Draft response
Nothing your manager couldn't catch if they had 40 hours a day to read every channel carefully and remember what everyone said three months ago. The value is not that it sees something humans cannot — it is that it reads consistently and surfaces patterns across time and across teams that humans miss because humans are busy.
Also: if your manager is doing the reading, the reading is filtered through one person's biases and one person's relationships. The tool is not unbiased, but its biases are different from a human's biases, which is sometimes useful.
Question
Are you going to tell us what you see?
Draft response
Yes. My plan is to share the major signals at our monthly operating review, aggregated and without anything that would identify specific people. If the tool surfaces something that needs a response, my job is to start the conversation with the relevant team, which means you will hear about it.
What I will not share: raw dashboard screenshots or specific channel signals in a large public setting. The signals the tool produces are aggregate but they are still team-specific, and I'd rather discuss them with the team they refer to first.
If a question lands that you did not prepare for: the answer is “I need to think about that one. Let me get back to you tomorrow.” That is a better answer than improvising something you might have to walk back.
5. 30 / 60 / 90 follow-up
Trust is built in the follow-ups, not the announcement
One all-hands does not build trust in a tool like this. The plan below is the communication cadence that does. Copy these into your calendar now.
Short Slack update — the first week
Post in the general channel. Two to three sentences. “First week of ClarityLift is running. We enabled [list]. So far the major signal has been [one specific thing, aggregate]. I'll share a fuller view at 30 days.”
The point is not the content of the update. The point is that you are communicating on the schedule you committed to. Silence after a big announcement reads as retreat.
The 30-day debrief
Schedule 15 minutes at a team meeting. Walk through:
- What signals have shown up (aggregate, no individual data).
- What you've changed based on them (if anything).
- What you'd change about the setup in hindsight — channel choices, threshold tuning, etc.
- Whether you are keeping it for another 30 days, pausing it, or turning it off. Make a decision out loud.
If the answer is “we found it useful, keeping it,” that is fine. If the answer is “the signals were noisy and not adding much, turning it off,” that is also fine — and communicating that decision openly builds more trust than quietly leaving it on.
The quiet check-in
DM three or four people at different levels and ask them directly: “Is ClarityLift landing the way I described it? Anything that feels off?” Do not make this public. This is where the real feedback comes from: the things nobody wants to raise in a group setting.
If the feedback is that it is landing fine, great. If the feedback is that something feels different from the commitment, take the feedback seriously and either fix the issue or turn the tool off.
The second all-hands slot
Five minutes at an all-hands. Short. “Three months in, here's what we learned, here's what changed, here's what didn't.” Close with: “If any of you has a concern about this that I haven't heard, my DM is always open.”
6. Gut check
Do not deploy ClarityLift if you cannot stand behind this announcement.
The last section of the playbook, and the one that does the most important work: deciding whether you should deploy at all.
If you read the pre-announcement Slack message above and cannot imagine posting it under your own name, do not deploy ClarityLift. If you read the hard-questions section and the draft responses feel dishonest to you, do not deploy ClarityLift. If the day-30 debrief feels like a conversation you do not want to have, do not deploy ClarityLift.
This is a deliberate filter. ClarityLift is built for founders who want real signal about their organization without compromising the trust of the people in it. That combination requires the founder to be willing to communicate about the tool publicly and honestly. The architecture supports that kind of deployment. It does not save a founder who is trying to avoid the conversation.
If any of the above is not you: pass. We would rather you pass than sign up, announce it poorly, and erode the trust the tool was supposed to protect.
If all of it is you: welcome. Get early access.
Ready to start the conversation?
The playbook answers “can I actually deploy this without wrecking trust.” The architecture page answers “how does this work.” The whitepaper answers “is this safe.” When those three feel settled, get in touch.