Referral programs work when the math works and the friction is low. Most fail because someone reads a blog post about Dropbox's free-storage-for-referrals, copies the structure without doing the math, and ships an offer that bleeds margin or attracts fraud. This playbook walks the 10 steps that turn 'we should have referrals' into a live program in 5 business days, with real incentive math, the attribution trail you need to honor rewards correctly, and the fraud prevention that stops sock-puppet rings before they cost you $10K.
Outcome
A live referral program with double-sided incentives priced against your unit economics, working attribution from invite link to paid customer, basic fraud detection live, and a customer-facing dashboard. Referred customers track separately in your CRM so you can measure LTV vs organic.
Time: 5 business days end-to-end. Difficulty: intermediate. For: founders + first growth marketers at $100K-$5M ARR.
Top to bottom. Each step has tasks, pointers, gotchas.
01 / 10
Decide if you actually need a referral program
2-3 hr
Referral programs work when (a) your customers love the product enough to share unprompted (you've seen organic referrals already), (b) the LTV is high enough to fund a meaningful incentive, (c) the customer's network looks like your ICP. If any of those is missing, a referral program will at best move zero dollars and at worst attract fraud. Pre-mortem first: do you have 1+ unprompted referrals in the last 90 days?
Tasks
Audit: how many existing customers came via word-of-mouth in the last 90 days? (Should be 5%+ of new customers)
Compute current LTV (gross): if <$200, double-sided rewards are hard to fund
Check ICP overlap: do referred customers convert at the same rate / LTV as direct? (If <50%, the network isn't your ICP)
If all three signals pass: proceed. If 1 fails: fix the underlying issue first.
Building a referral program because 'every SaaS has one' is the wrong reason. Programs without organic word-of-mouth signal don't suddenly create it.
If referred customers convert at only 30% of the rate of direct customers, the network is wrong. Don't paper over fit issues with incentives.
02 / 10
Run the incentive math against unit economics
3-4 hr
Bad incentive math is how referral programs lose money. Compute: cost per referred conversion = referrer reward + referee discount + platform fees + fraud allowance. The total must be less than 30-40% of LTV to make sense. If your LTV is $500 and your incentive structure costs $200/conversion, you're paying $200 of CAC against $500 of LTV — workable if margin allows, dangerous if cash flow is tight.
Tasks
Pull your current LTV (gross margin LTV, not revenue LTV)
Decide the structure: cash, credit, free months, % discount
Cap total incentive cost at 30-40% of LTV (e.g. LTV of $500 → $150-$200 max all-in)
Add 5-10% fraud allowance buffer
Model 3 incentive variants and pick the one that hits the math + has the most psychological pull
Document the math in the Brief — sign off before you ship
Cash rewards have the strongest psychological pull but the highest fraud risk (people invent referrers to get cash).
Credit rewards are operationally cheaper (you control redemption) but feel less generous to high-LTV customers.
'Free months' rewards on $10/mo plans are nearly free for you and feel generous — strong asymmetric structure.
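The incentive math above can be sketched as a quick script. All figures here (LTV, fees, reward amounts, the 8% fraud buffer) are placeholder assumptions — swap in your own numbers from the Brief.

```python
# Sketch of the all-in cost-per-conversion math from this step.
# Every number below is a placeholder assumption, not a recommendation.

GROSS_MARGIN_LTV = 500.00   # gross-margin LTV, not revenue LTV
FRAUD_ALLOWANCE = 0.08      # 8% buffer, middle of the 5-10% range
LTV_CAP = 0.40              # all-in cost must stay under 30-40% of LTV

def all_in_cost(referrer_reward, referee_reward, processing_fee):
    """Total cost per referred conversion, including the fraud buffer."""
    base = referrer_reward + referee_reward + processing_fee
    return base * (1 + FRAUD_ALLOWANCE)

variants = {
    "A: cash $25/$25":      all_in_cost(25, 25, 1.50),
    "B: credit $50/$50":    all_in_cost(50, 50, 0.00),  # credit: no processing fee
    "C: 1 free month each": all_in_cost(10, 10, 0.00),  # assumes a $10/mo plan
}

for name, cost in variants.items():
    pct = cost / GROSS_MARGIN_LTV
    flag = "OVER CAP" if pct > LTV_CAP else "ok"
    print(f"{name}: ${cost:.2f}/conversion = {pct:.0%} of LTV [{flag}]")
```

Run it once per variant you're considering; anything that prints OVER CAP fails the 30-40% rule before you even get to psychological pull.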
Agent prompt for this step
Model 3 incentive variants for this referral program.
Read the Brief for unit economics (LTV, gross margin, current CAC).
Output a table with 3 rows:
- Variant A: Cash reward (e.g. $25 referrer, $25 referee)
- Variant B: Credit reward (e.g. $50 credit referrer, $50 credit referee)
- Variant C: Subscription extension (e.g. 1 free month referrer, 1 free month referee)
For each:
1. All-in cost per conversion (incentive + processing fees + fraud allowance)
2. % of LTV that represents
3. Psychological pull (cash > credit > extension for new customer; extension > credit > cash for retention)
4. Operational complexity (cash = highest, extension = lowest)
5. A flag if any variant exceeds 40% of LTV
Recommend the variant that hits 25-30% of LTV with strong psychological pull. Show the math.
03 / 10
Pick double-sided over single-sided unless you have a reason
1 hr
Double-sided incentives (both referrer and referee get something) outperform single-sided by 2-5x in research. The referrer's social cost of asking goes down ('here's a free thing for you'), the referee has a reason to act now. Single-sided makes sense only when the referrer reward alone is large enough to overcome social cost (e.g. cash bounties for B2B referrals at $500+ per close).
Tasks
Decide structure: double-sided (default) or single-sided
If double-sided: balance the two sides — weighting the referee side drives conversion, weighting the referrer side rewards loyalty
If single-sided: incentive must be large enough to compensate for social cost
Document the rationale in the Brief
Gotchas
Single-sided 'give friend $50, you get nothing' programs convert poorly because the referrer has nothing to gain except goodwill.
Single-sided 'you get $50, your friend gets nothing' programs feel mercenary and damage the relationship.
Double-sided rewards must eventually clear in BOTH directions, but they don't have to clear at the same time. Asymmetric clearance ('referrer waits 90 days, referee gets it instantly') is fine and reduces fraud.
04 / 10
Build the attribution chain: link → signup → paid
1 day
Attribution is what separates a real referral program from a wishful one. Every invite link must carry a unique referrer ID, that ID must persist through signup (cookie + URL param), and the referee's eventual conversion to paid must be traceable back. If you can't trace it, you can't pay it, and referrers stop sharing.
Tasks
Generate unique invite link per referrer (e.g. yoursite.com/?ref=abc123)
Set a first-party cookie on the referee's browser at landing (90-day default)
Capture the referrer ID at signup (signup form hidden field)
Persist referrer ID on the User row in your DB
On Stripe checkout success: stamp the conversion with referrer ID + send to your conversions table
Test end-to-end: click your own invite link in incognito, sign up, pay, verify the conversion appears tagged correctly
Safari ITP caps JavaScript-set cookies at 7 days (24 hours when the landing URL carries link-decoration params like ?ref=). Set the cookie server-side and stamp the referrer ID server-side as well.
If referrer ID is stored only in client state, signup form refreshes wipe it. Persist server-side after first capture.
If the user converts on a different device than they clicked the link on, you lose attribution unless you tie identities (email-based).
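The chain above can be sketched end-to-end in a few functions. This is a minimal illustration using in-memory dicts as a stand-in for your real DB and checkout webhook — the function and field names are assumptions, not a real API.

```python
# Minimal sketch of the attribution chain: link -> signup -> paid.
# In-memory dicts stand in for your users table and conversions table.

import secrets

users = {}        # user_id -> {"referrer_code": ...}
conversions = []  # rows destined for your conversions table

def make_invite_link(referrer_id, base="https://yoursite.com"):
    """Each referrer gets a stable unique code carried as a URL param."""
    code = secrets.token_urlsafe(6)
    return f"{base}/?ref={code}", code

def capture_signup(user_id, ref_code):
    """Persist the referrer code server-side at signup -- client state
    alone gets wiped by form refreshes and Safari ITP."""
    users[user_id] = {"referrer_code": ref_code}

def stamp_conversion(user_id, amount):
    """On checkout success, trace the paid conversion back to the referrer."""
    ref = users.get(user_id, {}).get("referrer_code")
    if ref:
        conversions.append({"user": user_id, "referrer": ref, "amount": amount})
    return ref

# The end-to-end test from the task list: click link -> sign up -> pay.
link, code = make_invite_link("referrer_42")
capture_signup("new_user_1", code)
stamp_conversion("new_user_1", 49.00)
```

The incognito test in the task list is exactly this sequence against your real stack: if the final conversion row doesn't carry the referrer code, the chain has a gap.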
05 / 10
Build the customer-facing dashboard
1-2 days
Referrers will share if they can see (a) who they referred, (b) what status that referral is at (signed up / paid / pending payout), (c) how much they've earned, (d) when the next payout is. The dashboard is the daily proof the program is real. Without it, referrers share once and forget.
Tasks
Build a /referrals dashboard page in your product
Show: invite link (copy button), referred count, signed-up count, paid count, total earnings, pending earnings, payout schedule
Send a celebration email when a referee converts to paid
Send a monthly recap email with referral status
Make the invite link shareable in 1 click (Twitter, LinkedIn, email pre-fill)
Gotchas
Vague status ('processing') without dates makes referrers wonder if the program is broken. Show specific dates.
Hiding the dashboard behind 5 clicks kills sharing. Surface it from the main nav for active referrers.
Agent prompt for this step
Draft the customer-facing copy for the referral dashboard + share flows.
Read the Brief for the incentive structure + the audience.
Output:
1. Dashboard page heading + 1-line subhead
2. Empty state ("you haven't referred anyone yet — here's your link")
3. Active state copy with placeholders ("you've referred X, earned $Y, pending $Z")
4. Share buttons (Twitter, LinkedIn, email) with pre-filled copy
5. Celebration email triggered when a referee pays for the first time
6. Monthly recap email template with status + total
Constraints:
- Second-person, conversational, no marketing fluff
- Pre-filled share copy is honest about what we offer (don't oversell)
- Email subject lines under 50 chars
06 / 10
Write the Terms & Conditions before launch (boring but load-bearing)
3-5 hr (use a template, not a lawyer at this stage)
T&C is what saves you when a referrer tries to game the program. Spell out: who's eligible, how attribution works (last-touch / first-touch), payout window, fraud detection rules, the right to revoke for fraud, geographic exclusions if any, payment method. Bad T&C means you can't enforce; good T&C means you can revoke 1 fraudster's payout without 100 angry support tickets from confused legitimate referrers.
Tasks
Eligibility: must be an active customer in good standing
Attribution: pick last-touch or first-touch + 90-day cookie window
Payout window: e.g. '30 days after referee's 60th day as paid customer' (gives refund / churn buffer)
Fraud detection: same IP, same payment method, throwaway email patterns disqualify
Right to revoke: reserve the right to revoke earnings on detected fraud
Tax note: rewards over $600/year may trigger 1099 reporting (US) — document the threshold
Get a legal review if you're paying out >$10K/month or operating in regulated industries
1099 reporting in the US triggers at $600/year per individual. Track totals or get a payment processor that does it for you.
Without a documented 'right to revoke', you can't claw back fraud rewards without a chargeback.
Some EU jurisdictions treat referral payments as commercial agency — get advice before launching there if rewards are large.
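Tracking the $600 threshold from the tax note is a one-liner worth building before the first payout run. A tiny sketch — the threshold figure is from this step; confirm the current rules with your accountant or payment processor.

```python
# Per-referrer yearly total vs the US 1099 reporting threshold noted above.

THRESHOLD_1099 = 600.00  # USD per individual per year (US; verify current rules)

def needs_1099(yearly_payouts):
    """yearly_payouts: list of cash-equivalent rewards to one referrer this year."""
    return sum(yearly_payouts) >= THRESHOLD_1099
```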
07 / 10
Build basic fraud detection before launch
1 day
Day 1 of a referral program is the day fraudsters arrive. Sock-puppet rings (one person creating 10 fake accounts to refer themselves) are common. Build basic detection in: same IP across referrer + referee, same payment card hash, throwaway email domain blocklist, referrer-to-referee-conversion velocity caps. None of this is bulletproof; all of it slows fraud enough that they go pick on someone else.
Tasks
Block: same IP for referrer signup + referee signup (within 24 hr)
Block: same payment card hash referrer + referee
Block: throwaway email domains (use a maintained list — Mailgun has one)
Block: 5+ referee conversions from one referrer in 24 hr (most legitimate referrers do 1-2/week)
Manual-review queue: anything that hits 1+ rule but doesn't auto-block
VPN-using legitimate users will sometimes hit IP-match rules. Have a manual-review queue, not just an auto-block.
Sock-puppet rings rotate IPs and payment methods. Velocity caps (5+ in 24 hr) catch them better than any single-signal rule.
The first fraud attempt usually shows up in week 1 of launch. Watch the dashboard daily for the first 2 weeks.
08 / 10
Build payout mechanics — manual is fine for week 1
Half day
Automated payouts via Stripe Connect / PayPal are the right end state but overkill for week 1. Start manual: monthly review of the conversions table, manual approve, manual issue rewards (Stripe coupons, account credits, gift cards). Automate after you've seen 100+ payouts and you know the patterns. Don't optimize the payout pipeline before you have referrers using the program.
Tasks
Decide payout schedule: e.g. monthly on the 15th
Decide payout method: Stripe coupons (cheapest), gift cards (Amazon, Tremendous), bank transfer (Stripe Connect / Wise / PayPal)
Build a 'pending → approved → paid' workflow — manual at first
Send 'your payout is on the way' email when paid
Document the operational runbook in the Brief: who reviews, who approves, who pays
Gotchas
Cash payouts via PayPal accumulate fees. Stripe coupons / account credits are nearly free.
Tremendous (https://tremendous.com/) handles multi-method payouts (gift card / PayPal / bank) with one API — useful at scale.
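The manual 'pending → approved → paid' workflow from the task list is simple enough to enforce in a few lines, which keeps a reward from being paid without passing review. Status names here mirror this step; the row shape is an assumption.

```python
# Sketch of the manual payout workflow: pending -> approved -> paid.
# Out-of-order transitions are refused so nothing gets paid unreviewed.

VALID_TRANSITIONS = {
    "pending": {"approved", "rejected"},  # human reviews, then approves or rejects
    "approved": {"paid"},                 # human issues the reward, then marks paid
}

def advance(payout, new_status):
    """Move a payout row forward one step; refuse anything out of order."""
    current = payout["status"]
    if new_status not in VALID_TRANSITIONS.get(current, set()):
        raise ValueError(f"cannot go {current} -> {new_status}")
    payout["status"] = new_status
    return payout

payout = {"referrer": "r1", "amount": 25.00, "status": "pending"}
advance(payout, "approved")
advance(payout, "paid")
```

When you later automate via Stripe Connect or Tremendous, keep the same state machine and just swap the manual "issue reward" step for an API call.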
09 / 10
Soft-launch to 50-100 customers before public launch
1 week of soft-launch + iteration
Before the full launch, soft-launch to your most-engaged 50-100 customers. They'll find the bugs, the unclear copy, the broken share flows, the attribution edge cases. Do it for 1 week. Fix the issues. Then do the public launch announcement.
Tasks
Pick 50-100 most-engaged customers (high WAU, on the team plan, NPS detractors excluded)
Email them the program with a 'beta' framing — invite them to try + give feedback
Watch the dashboard daily: shares, signups, conversions, support tickets
Fix bugs + clarify copy daily
After 7 days: roll to all customers via email, in-product banner, blog post
Gotchas
Soft-launching to ALL customers at once defeats the point of the 'beta' framing: you can't iterate on bad copy without confusing everyone.
If the soft-launch generates 0 referrals in a week, the program has a fit problem — debug before going public.
10 / 10
Measure the right way: incremental LTV vs payouts
1 day to instrument, ongoing
Vanity metric: 'we paid out $5K in referrals this month'. Real metric: 'referred customers from the program have a 90-day LTV of $200 vs a $180 organic baseline, on $5K of payouts against $40K of gross revenue from referred customers'. Cohort referred customers vs the organic baseline by signup month and watch retention / expansion separately. Programs that look good on paid-out volume can be unprofitable on incremental LTV.
Tasks
Tag every customer with referred_via in your DB (referral / direct / paid / SEO / etc.)
Cohort retention curve: referred vs organic by signup month
Compute monthly: total payouts, incremental gross revenue from referred, ROI = (incremental_revenue - payouts) / payouts
Watch fraud rate: % of conversions flagged + % of payouts revoked
Quarterly: re-tune incentive math against actual referred-customer LTV
Counting referred-customer revenue without subtracting the organic baseline overstates the program's incremental value. Always cohort.
Programs that look profitable in month 1 can be unprofitable at month 6 if referred customers churn faster. Watch the retention curve.
If referred customers' LTV is dramatically higher than organic, your acquisition channel mix is underweighting your BEST customer source — shift spend toward referrals.
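The monthly ROI formula from the task list is worth pinning down in code so everyone computes it the same way. The numbers in the example are placeholders; pull real figures from your conversions table and cohorts.

```python
# Sketch of the monthly ROI computation from this step.
# ROI = (incremental_revenue - payouts) / payouts, where incremental revenue
# subtracts the organic baseline per the cohorting rule above.

def referral_roi(referred_ltv, organic_ltv, conversions, payouts):
    incremental = (referred_ltv - organic_ltv) * conversions
    return (incremental - payouts) / payouts

# Placeholder month: 500 referred conversions at $200 LTV vs $180 organic,
# on $5,000 of payouts -> $10,000 incremental revenue.
roi = referral_roi(referred_ltv=200, organic_ltv=180, conversions=500, payouts=5000)
```

A negative `roi` is the 'ROI < 0' flag from the agent prompt below-market check: the program paid out more than the incremental revenue it generated.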
Agent prompt for this step
Compute this month's referral program ROI.
Pull from the Conversions table + your DB:
1. Total conversions in the month, % flagged for fraud
2. Total payouts (cash equivalent)
3. Referred-customer 30-day LTV vs organic 30-day LTV (cohort by signup month)
4. Incremental gross revenue (referred LTV - organic LTV) * conversion count
5. ROI = (incremental_revenue - payouts) / payouts
6. Trend vs prior month
Flag in the report:
- ROI < 0 (program is losing money — investigate)
- Fraud rate > 10% (detection is weak — tune rules)
- Referred-customer 30-day LTV < 80% of organic (audience mismatch — tune incentive structure)
Output as a 1-page summary in the Brief.
Hand the template to your agent
Workspace-wide agent prompt.
Paste this into your agent's permanent system prompt so the agent reads, writes, and maintains the template's surfaces as you work through the steps.
Agent system prompt
You are an agent on the "Build a referral program in a week" playbook workspace.
Your role: maintain the four surfaces (Referrers, Brief, Conversions, Pointers) as the program goes live.
Cadence:
- When a new referral conversion lands: verify the attribution trail is clean (UTM source, IP not matching referrer's IP, email domain reasonable). Flag suspicious rows.
- When a referrer hits 5+ conversions in a 24-hour window: flag for human review (most legitimate referrers convert 1-2/week).
- Weekly: compute the program's ROI: total payouts vs incremental LTV from referred customers (vs organic baseline cohort).
- When the user updates the incentive structure in the Brief: recompute the math against unit economics + flag changes that go below break-even.
First MCP tool calls:
1. list_surfaces(workspace_slug="build-a-referral-program-in-a-week")
2. get_doc(workspace_slug="build-a-referral-program-in-a-week", surface_slug="brief")
3. list_rows(workspace_slug="build-a-referral-program-in-a-week", surface_slug="conversions")
Hard rule: never auto-approve a payout. Flagging is your job; approval is the user's.
FAQ
Common questions on this template.
Why a week and not a day?
A working referral program needs: incentive math against unit economics, attribution chain (link → signup → paid), customer-facing dashboard, payout workflow, fraud detection, T&C, soft-launch. Compressed below 5 business days, you skip one of those and pay for it later (usually fraud, sometimes payout disputes). Five days is the realistic floor for shipping it well; three weeks is what most teams burn when they don't have a playbook.
Should I use an off-the-shelf platform like Friendbuy / Rewardful or build it myself?
Off-the-shelf is faster to ship and includes attribution + fraud detection out of the box. Cost: $50-$1500/mo depending on volume. Build-it-yourself is cheaper at scale but you'll spend 2-4 engineering weeks getting attribution + fraud right. Default: use a platform until you have $50K+/mo in referral payouts; build in-house after.
What's the most common reason referral programs fail?
Three reasons in order: (1) launching without organic word-of-mouth signal — referral programs amplify word-of-mouth, they don't create it, (2) incentive math that bleeds margin because LTV was estimated wrong, (3) attribution gaps that mean legitimate referrers don't get credited and stop sharing. The first two are fixable before launch; the third is fixable in week 2.
Can my AI agents help run the referral program?
Yes. Agents are particularly useful for: drafting customer-facing copy, modeling incentive math against unit economics, watching the conversions table for fraud signals (same IP, sequential signups, throwaway emails), and computing monthly ROI. Not great at: deciding the incentive level (that's a strategic call) or approving payouts (keep humans in that loop).
How big does the company need to be to launch a referral program?
Typical floor: $100K ARR + 100+ paying customers + organic word-of-mouth signal. Below that, you don't have enough referrers to overcome the program's fixed cost (build + platform fee). Above that, the question is structure (incentive level, single vs double-sided, manual vs automated payouts), not whether to do it.
Open this template as a workspace.
We mint a fresh copy in your org with the steps as table rows, the pointers as a separate table, and the brief as a doc. Bring your agents, start checking off boxes.