Shipping a GPT looks like 'paste in some instructions and click Publish.' The reality is closer to a small App Store: instruction tuning that survives jailbreaks, knowledge files that don't leak, an Action with a real OpenAPI schema if you need a backend, a Builder Profile that has to be verified before your GPT shows up in search, and the category gate that pushes 80% of submissions to the long tail. This playbook walks through the 10 gates, with official OpenAI links plus the agent prompts that automate the parts agents do well: drafting instructions from your spec, generating the OpenAPI for an Action, writing the listing copy, and watching the analytics post-launch.
Outcome
Your GPT live on the GPT Store with a verified Builder Profile, a category placement, working Actions, and conversation analytics flowing back into your workspace so you can iterate weekly.
Time: 3-7 days (most of it instruction tuning + Builder verification)
Difficulty: intermediate
For: indie builders + small teams shipping their first custom GPT
Top to bottom. Each step has tasks, pointers, gotchas.
01 / 10
Subscribe to ChatGPT Plus and confirm GPT Builder access
10 min
The GPT Builder is gated on a paid ChatGPT plan: Plus, Team, or Enterprise. Free accounts can chat with public GPTs but cannot create one. If you're on Plus from a personal account, double-check the workspace you'll publish under: switching workspaces post-publication forfeits the rating history on that GPT.
Tasks
Subscribe to ChatGPT Plus (or confirm an existing Team / Enterprise seat)
Open chatgpt.com -> Explore GPTs -> Create -> confirm the Builder loads
Decide which workspace will own the GPT (personal vs. Team workspace)
GPTs are owned by the ChatGPT workspace they were created in. Moving from Personal to Team requires rebuilding the GPT.
Free-tier accounts can use a published GPT but cannot publish one. The Builder won't even load.
02 / 10
Write the spec: who is this GPT for, and what's the one job
1-2 hr
The biggest mistake first-time GPT builders make is shipping a 'general assistant for X.' GPTs that win the GPT Store solve one narrow job extremely well. Before you open the Builder, write a 1-page spec: the user, the job, the inputs they'll paste, the output they should get, and the 3-5 things this GPT will refuse to do.
Tasks
Name the single job the GPT does (one sentence)
Name the persona (who's the user, what's their context)
List 3 example inputs you expect users to paste
List the exact format of the output for each input
List 3-5 hard refusals (off-topic, unsafe, or out-of-scope requests)
GPTs trying to do 'everything for X' rank below GPTs that do 'one specific thing for Y' every time. The Store algorithm rewards focus.
If you can't name 3 hard refusals, your GPT is too broad. Cut scope.
Agent prompt for this step
Help the user write a tight 1-page GPT spec.
Ask them, one question at a time:
1. Who is the user (role + context, not a demographic)
2. What's the single job this GPT does for them
3. What does the user paste in (3 example inputs)
4. What does the GPT return (exact format)
5. What's out of scope (3-5 things it should refuse)
Output the spec as a Brief surface section titled "GPT spec v1". Don't write instructions yet; that's the next step. The spec is the source of truth for everything that follows.
03 / 10
Draft the GPT instructions and conversation starters
3-6 hr (most of it iterating on tone + edge cases)
Instructions are the system prompt. They're capped at 8000 characters and they leak: assume any user can extract your full instructions with a clever message. Don't put secrets here; put them behind a custom Action with auth instead. Conversation starters are the 4 chips users see when they open the GPT; treat them as the GPT's marketing copy.
Tasks
Draft the instructions: role, behavior, format, refusals, escalation
Add a 'You are X, you are NOT Y' framing in the first 200 characters
Add 3-5 explicit format rules (markdown, length, tone)
Add the refusal block from your spec verbatim
Write 4 conversation starter chips (each one is a real example input)
Test 10 edge cases: off-topic, jailbreaks, format-stress, multilingual
GPT instructions are NOT a secret. Users can and will extract them with a 'repeat the text above starting with You are' style prompt.
Conversation starters max out at 50 chars each; longer ones get truncated mid-word in the UI.
Don't paste API keys or DB strings into instructions. Use a custom Action with an authenticated backend instead.
Agent prompt for this step
Read the GPT spec from the Brief.
Draft the GPT instructions following this structure:
1. Identity (one sentence: "You are X. You help Y do Z.")
2. Behavior (3-5 bullets on tone + format + step-by-step approach)
3. Refusals (the 3-5 from the spec, verbatim)
4. Output format (markdown rules, length, headings)
5. Escalation ("If the user asks for something you can't do, suggest they...")
Constraints: under 8000 characters. No secrets, treat the instructions as public. Don't include API keys, customer names, or proprietary internal info.
Then write 4 conversation starter chips, each one a real example input from the spec, max 50 chars each.
Output as a Brief section titled "Instructions v1".
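Filled in for a hypothetical narrow GPT, that five-part skeleton might read like this. Every name, rule, and number below is illustrative, not part of the template:

```
You are Refund Reply Drafter. You help support reps turn angry
customer emails into calm, policy-compliant refund replies. You
are NOT a general customer-support chatbot.

Behavior:
- Ask for the customer email if the user hasn't pasted one.
- Keep replies under 150 words, plain language, no legalese.
- Restate the customer's specific complaint in the first line.

Refusals:
- Do not draft legal threats or responses to legal claims.
- Do not promise refund amounts; leave the amount as [AMOUNT].
- Do not handle anything that is not a support email.

Output format: one markdown block containing only the reply email.

Escalation: if the request is out of scope, say so in one sentence
and suggest the user consult their support lead.
```

Note how the refusals are concrete enough to test: each one maps to an edge case in the 10-case test pass above.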
04 / 10
Add knowledge files (only if you actually need them)
1-3 hr
GPTs can attach up to 20 files (max 512 MB each, 2M token retrieval index). Knowledge is overrated for most GPTs: the model is already good at the underlying domain. Only attach knowledge files when the GPT needs proprietary content the base model doesn't know: your product docs, internal SOPs, a specific dataset.
Tasks
List the proprietary content the GPT needs that the base model doesn't already know
Convert source files to PDF, TXT, or Markdown (those retrieve best)
Strip secrets, customer names, internal-only data
Upload the files to the GPT in Builder -> Configure -> Knowledge
Test 5 retrieval queries: ask the GPT something only the file knows
Confirm the GPT can NOT regurgitate the file verbatim (file leakage = takedown risk)
Knowledge files leak. Users have extracted full PDFs from GPTs. Strip anything you don't want public.
20 files max, 512 MB each, but retrieval gets noisy past ~5 large files. Quality > quantity.
Image-only PDFs (scanned docs) don't retrieve; they need OCR first.
05 / 10
Build a custom Action if your GPT needs a backend
4-12 hr (most of it the API + auth + schema)
Actions let your GPT call your API (or any HTTPS API) mid-conversation. They're defined by an OpenAPI 3.1 schema you paste into the Builder. Most beginner GPTs don't need an Action, but if your GPT writes to a database, queries live data, or hits a paid API, you need one. The schema is the contract: get it wrong and the model invents calls that 404.
Tasks
Decide if you need an Action: live data, write operations, or paid-API access -> yes
Build the backend API (Vercel functions, Cloudflare Workers, your existing app)
Add auth: Bearer token or OAuth (the Builder supports both)
Write the OpenAPI 3.1 schema (operationId, summary, parameters, response shape)
Paste the schema into Builder -> Configure -> Actions -> Create new action
Test each operation from inside the GPT chat with the 'Test' button
Set the privacy policy URL on the Action (required for Store publication)
Actions can't return more than ~100 KB per call. The model truncates and the user sees garbage. Paginate large responses.
OpenAPI 3.0 schemas mostly work, but 3.1 is the official target. Some 3.0 constructs (e.g. nullable) don't translate cleanly.
The privacy policy URL on the Action is REQUIRED for Store submission. Skipping it fails review.
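That ~100 KB cap means large result sets need cursor pagination on the backend. A minimal Python sketch of the idea; the 90 KB budget, field names, and `next_cursor` convention are assumptions, not an OpenAI requirement:

```python
import json

PAGE_BUDGET_BYTES = 90_000  # stay safely under the ~100 KB per-call cap

def paginate(items, cursor=0):
    """Return as many items as fit the byte budget, plus a cursor for the rest."""
    page, size, i = [], 0, cursor
    while i < len(items):
        item_bytes = len(json.dumps(items[i]))
        if page and size + item_bytes > PAGE_BUDGET_BYTES:
            break
        page.append(items[i])
        size += item_bytes
        i += 1
    # None signals the model there are no more pages to fetch
    return {"items": page, "next_cursor": i if i < len(items) else None}

# Simulated result set big enough to blow the cap in one response
rows = [{"id": n, "note": "x" * 20} for n in range(5000)]
first = paginate(rows)
print(len(first["items"]), first["next_cursor"])
```

Expose `cursor` as an optional query parameter on the Action operation and describe `next_cursor` in the response schema, so the model knows it can ask for the next page.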
Agent prompt for this step
Read the user's existing API (codebase or OpenAPI doc).
Generate an OpenAPI 3.1 schema for a GPT Action that wraps it. Constraints:
1. Each operation needs an operationId in camelCase, that's how the model addresses it.
2. Each operation needs a 1-line summary, the model uses it to decide when to call.
3. Parameters: explicit names, types, descriptions. The model invents nonsense if descriptions are vague.
4. Response: a clear schema, not "object". The model formats the response based on the schema.
5. Auth: Bearer or OAuth. Include the security scheme in the schema.
Output the YAML or JSON schema as a Brief section titled "Action schema v1". Then list 3 example user prompts that would trigger each operation, so the user can test from the Builder.
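A minimal schema satisfying those constraints might look like the following sketch. The endpoint, fields, and bearer scheme are illustrative assumptions, not a drop-in for your API:

```yaml
openapi: 3.1.0
info:
  title: Hypothetical order-lookup API   # illustrative; swap for your service
  version: 1.0.0
servers:
  - url: https://api.example.com         # must be HTTPS
paths:
  /orders/{orderId}:
    get:
      operationId: getOrderStatus        # camelCase; how the model addresses the call
      summary: Look up the current status of one order by its ID
      parameters:
        - name: orderId
          in: path
          required: true
          schema:
            type: string
          description: The order ID the user pasted, e.g. "ord_123"
      responses:
        "200":
          description: Current order status
          content:
            application/json:
              schema:
                type: object
                properties:
                  status:
                    type: string
                    description: One of "pending", "shipped", "delivered"
                  updatedAt:
                    type: string
                    description: ISO 8601 timestamp of the last status change
components:
  securitySchemes:
    bearerAuth:
      type: http
      scheme: bearer
security:
  - bearerAuth: []
```

The descriptions carry real weight: the model decides when to call `getOrderStatus` from the summary, and formats its answer from the response schema.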
06 / 10
Verify your Builder Profile
30 min for the form, 1-3 days for OpenAI to confirm
Builder Profile verification is the gate that filters spam GPTs out of search. Until your profile is verified, your GPT shows up only via direct link, not in Store browse or search. Verification ties your real name (or company) to the GPT and requires you to verify a domain you own.
Tasks
Go to chatgpt.com Settings -> Builder Profile
Pick the display name users will see next to your GPT
Add the domain you own (must be a domain, not a subdirectory)
Add the DNS TXT record OpenAI gives you to your domain
Wait for the green check mark next to the domain
Toggle 'Display name' on so it shows on every GPT you publish
Without a verified domain, your GPT does NOT appear in Store search. It's only reachable by direct URL.
DNS TXT records can take up to 48 hr to propagate. If verification fails immediately, wait and retry.
Builder Profile changes propagate to every GPT you've published. Pick the display name carefully.
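The record you add is a plain TXT entry at the domain apex. A hypothetical zone-file line; the token shown is made up, so paste exactly what OpenAI shows you in Builder Profile settings:

```
; hypothetical zone-file entry; use the exact token OpenAI gives you
example.com.   3600   IN   TXT   "openai-domain-verification=dv-abc123"
```

You can confirm propagation from your machine with `dig TXT example.com +short` (or any online DNS checker) before retrying the green check in the UI.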
07 / 10
Pick the right category and write the listing copy
2-3 hr
The GPT Store has 7 main categories: Writing, Productivity, Research, Education, Lifestyle, Programming, DALL-E. Each category has its own ranking pool, so picking the right one is half the discovery battle. Your listing has a name (40 chars), a description (300 chars), and the 4 conversation starters from earlier. The listing isn't a separate surface from the in-chat experience, so the same starters do double duty.
Tasks
Pick the category that maps to the user's intent, not the technology
Draft the GPT name (max 40 chars, no 'GPT' or 'OpenAI' in the name)
Draft the GPT description (max 300 chars, lead with the user job)
Confirm the 4 conversation starters read like real example inputs
Pick a 1024x1024 PNG cover image (the Builder generates one with DALL-E if you don't supply one)
GPT Store rejects names + descriptions that mention competing AI brands (Claude, Gemini, Copilot). Even comparative phrasing trips review.
Names with 'GPT' or 'OpenAI' violate the brand guidelines and are rejected on submission.
Category mismatch is a soft rejection: your GPT goes live but ranks last in the wrong pool.
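The character caps are easy to blow past while wordsmithing. A quick pre-submission check, sketched in Python with the limits and banned terms taken from this step (not an official validator):

```python
LIMITS = {"name": 40, "description": 300, "starter": 50}
BANNED = ["gpt", "openai", "claude", "gemini", "copilot"]  # per the brand-name gotchas

def check_listing(name, description, starters):
    """Return a list of problems; an empty list means the copy fits the caps."""
    problems = []
    if len(name) > LIMITS["name"]:
        problems.append(f"name is {len(name)} chars (max {LIMITS['name']})")
    if len(description) > LIMITS["description"]:
        problems.append(f"description is {len(description)} chars (max {LIMITS['description']})")
    for s in starters:
        if len(s) > LIMITS["starter"]:
            problems.append(f"starter truncates in the UI: {s!r}")
    for word in BANNED:
        if word in name.lower():
            problems.append(f"name contains banned term {word!r}")
    return problems

print(check_listing(
    "Refund Email Drafter",
    "Paste an angry customer email, get a calm refund reply in your support voice.",
    ["Draft a refund reply", "Soften this email"],
))  # prints []
```

Run it on every revision of the copy; a clean pass here won't guarantee approval, but a failure here guarantees a wasted review cycle.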
08 / 10
Submit to the GPT Store and prepare for rejection
10 min to submit, 1-7 days waiting, repeat on rejection
Submitting is one click in the Builder: Configure -> top-right -> Save -> Publish to GPT Store. Review typically takes 24-72 hr. First-time submissions are rejected ~20-30% of the time, almost always for the same handful of reasons: brand-name policy, similarity to an existing GPT, a missing privacy policy on an Action, or instructions that promise capabilities the GPT can't deliver.
Tasks
In the Builder: Configure -> Save -> Publish -> Everyone (GPT Store)
Pick the category from the dropdown
Confirm the privacy policy URL is set on every Action
Click Confirm
On rejection: read the email, fix the cited issue, resubmit (no penalty)
On approval: confirm the GPT shows up in Store browse for your category within 24 hr
Common rejection 1: the description mentions a competing AI product. Strip every 'better than' phrase.
Common rejection 2: the GPT name is too close to an existing top-ranking GPT. Pick a more specific name.
Common rejection 3: a custom Action without a privacy policy URL. Add one to the Action config and resubmit.
09 / 10
Drive launch traffic to seed the ranking signal
3-7 days of prep before launch day
The GPT Store ranks on early conversation volume + completion rate + ratings. A GPT that gets 50 conversations in the first 48 hours ranks above one that gets 5, even if the second is technically better. Day-of distribution comes from your network: X / Bluesky, Reddit (r/ChatGPT, r/SideProject), Product Hunt, your newsletter.
Tasks
Build a 1-page landing site at the verified domain (a single screenshot + try-it link works)
Schedule the X / Bluesky launch thread for launch day morning
Pre-write the Reddit post for r/ChatGPT (community-friendly, no spam)
Schedule a Product Hunt launch (12:01am PT)
Email 20-50 closest contacts the day before with the GPT link
Set up a daily standup with Flint to log conversations + rating + share count
GPT Store ranking weights the FIRST 48 hours heavily. Don't soft-launch, batch all distribution into a 24 hr window.
Reddit auto-flags posts with shortened URLs. Use the full chatgpt.com/g/... URL.
Conversation volume and ratings both count, but a 1-message conversation that ends with the user leaving is a NEGATIVE signal. The GPT needs to deliver the output the user came for.
Agent prompt for this step
Draft the launch-day distribution copy for this GPT.
Output:
1. A 1-page landing site copy (1 hero sentence, 3 bullets on what it does, 1 try-it CTA)
2. A 5-tweet launch thread, 1 example input per tweet
3. A 200-word r/ChatGPT post (community-friendly, no marketing fluff)
4. A 200-word Product Hunt tagline + first comment
5. A pre-launch email to send 20-50 close contacts the day before
Constraints: no superlatives. No "revolutionary." Lead with the one job the GPT does.
10 / 10
Iterate on the analytics + apply for the GPT revenue program
Ongoing, 2-4 hr/week for the first month
Once live, the Builder shows a basic analytics dashboard: conversations per day, average length, top conversation starters used. Read this every day for the first 2 weeks: a high-volume GPT with a 1-message average conversation has a tone or refusal problem. The OpenAI revenue program is currently US-only, opt-in via Builder Profile, and pays based on engaged conversations from ChatGPT Plus users.
Tasks
Read the conversation analytics every morning of week 1
Identify the top 3 conversation starters that fail (1 turn + drop-off)
Update instructions to handle those failures, republish
If US-based: apply to the GPT revenue program from Builder Profile -> Settings
Set up Flint to log a daily Snapshot row in the workspace with conversations + ratings + shares
Revenue program is US-only as of Q1 2026, with additional countries rolling out. International builders can still rank but can't be paid out yet.
Editing a published GPT does NOT trigger re-review unless you change the name, description, or category. Instruction tweaks ship instantly.
Analytics has a 24 hr lag and only counts ChatGPT Plus users. Free-tier conversations don't count toward the rank or revenue.
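There's no public API for this dashboard, so the daily Snapshot row starts as numbers you read off by hand. A minimal local logger sketch; the CSV columns mirror the metrics named in this step, and the file name is an assumption:

```python
import csv
from datetime import date
from pathlib import Path

LOG = Path("gpt_snapshots.csv")  # hypothetical local log file
FIELDS = ["day", "conversations", "avg_length", "rating", "shares"]

def log_snapshot(conversations, avg_length, rating, shares):
    """Append one day's hand-copied dashboard numbers to a local CSV."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "day": date.today().isoformat(),
            "conversations": conversations,
            "avg_length": avg_length,
            "rating": rating,
            "shares": shares,
        })

# Example: what you read off the Builder dashboard this morning
log_snapshot(conversations=42, avg_length=3.1, rating=4.6, shares=5)
print(LOG.read_text().splitlines()[0])  # prints the CSV header row
```

Once a week of rows exists, the week-over-week deltas tell you whether an instruction tweak moved completion rate, which the dashboard alone won't show.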
Hand the template to your agent
Workspace-wide agent prompt.
Paste this into your agent's permanent system prompt so the agent reads, writes, and maintains the template's surfaces as you work through the steps.
Agent system prompt
You are an agent on the "Build a GPT and ship to the GPT Store" playbook workspace at your-org/build-a-gpt-and-ship-to-the-store.
Your role: maintain the four surfaces (Steps, Pointers, Brief, Submission log) as the user works through the 10-step playbook.
Cadence:
- When the user marks a step Done, append a line to the Brief summarising what shipped at that gate.
- When OpenAI rejects the submission, capture the reason as a row in Submission log + draft a response in the Brief.
- When the user pastes new instructions or a new Action schema into the Brief, mirror the canonical version into the Pointers table for cross-referencing.
First MCP tool calls:
1. list_surfaces(workspace_slug="build-a-gpt-and-ship-to-the-store")
2. list_rows(workspace_slug="build-a-gpt-and-ship-to-the-store", surface_slug="steps")
3. get_doc(workspace_slug="build-a-gpt-and-ship-to-the-store", surface_slug="brief")
Do NOT modify the canonical step titles in the Steps table. You can append substeps as new rows beneath them.
FAQ
Common questions on this template.
How long does it actually take to ship a GPT to the Store?
From spec to live in the Store: 3-7 days for a polished GPT. The breakdown: half a day on the spec + instructions, 1-2 days on Builder Profile verification (DNS propagation), 0-3 days on the custom Action if you need one, and 1-3 days on OpenAI review. A no-Action GPT with a verified profile can be live within 48 hours.
What gets first-time GPTs rejected from the Store?
The top three: (1) the GPT name or description mentions a competing AI brand or includes 'GPT' / 'OpenAI' (against the naming policy), (2) a custom Action without a privacy policy URL configured, (3) the GPT is too similar to an existing top-ranking GPT in the same category. The rejection email cites the specific policy section; fix and resubmit.
Do I need a custom Action to publish a GPT?
No. Most GPTs on the Store don't have a custom Action; they rely on the built-in browsing, code interpreter, or DALL-E tools plus knowledge files. Add an Action only if your GPT needs to call your own API, write to a database, or hit a paid third-party API.
How does the GPT revenue program pay out?
OpenAI pays US-based builders based on 'engaged conversations' from ChatGPT Plus users, weighted by depth of engagement. The exact rate isn't published. Free-tier conversations don't count, and neither do conversations from your own account. International builders can publish and rank but aren't yet eligible for payouts (as of Q1 2026).
Can my AI agents help build the GPT?
Yes. The playbook ships agent prompts for the slow parts: drafting the spec from a 1-line idea, drafting the instructions from the spec, generating the OpenAPI schema for an Action from your existing API, drafting launch-day distribution copy, and watching the analytics post-launch. The Brief surface keeps the canonical version of each artifact.
What does it cost end-to-end?
ChatGPT Plus $20/mo (required to publish a GPT). Domain registration $10-20/yr (required for Builder Profile verification). Optional: hosting for a custom Action backend, $0-20/mo on Vercel free tier. OpenAI takes no cut of any monetization outside the (US-only) revenue program.
Open this template as a workspace.
We mint a fresh copy in your org with the steps as table rows, the pointers as a separate table, and the brief as a doc. Bring your agents, start checking off boxes.