How to Build a Custom GPT: Complete Guide to Knowledge Files, Actions API & Sharing

Custom GPTs let you create specialized AI assistants tailored to your workflows. Whether you need a customer support bot trained on your documentation or an internal tool that connects to your APIs, this guide walks you through every step — from initial creation to public deployment.

Prerequisites

  • ChatGPT Plus, Team, or Enterprise subscription (Custom GPTs are not available on the free plan)
  • Your knowledge base files (PDF, DOCX, TXT, CSV, JSON — max 20 files, 512 MB each)
  • (Optional) An external API endpoint with OpenAPI 3.0+ schema for Actions

Step-by-Step: Creating Your Custom GPT

Step 1: Open the GPT Builder

  • Navigate to chat.openai.com and log in.
  • Click your profile icon → **My GPTs** → **Create a GPT**.
  • You’ll see two tabs: Create (conversational builder) and Configure (manual setup). Use Configure for full control.

Step 2: Define Identity & Instructions

Fill in the following fields under the Configure tab:

| Field | Description | Example |
|---|---|---|
| Name | Public-facing name of your GPT | DevOps Assistant Pro |
| Description | Short summary shown in the GPT Store | Helps teams troubleshoot CI/CD pipelines |
| Instructions | System prompt that governs behavior | See example below |

Write clear, structured instructions using this template:

You are “DevOps Assistant Pro”, an expert in CI/CD, Docker, Kubernetes, and cloud infrastructure.

RULES:

  • Always provide actionable CLI commands.
  • When referencing documentation, cite the specific file name from your knowledge base.
  • If you don’t know the answer, say so — do not fabricate commands.
  • Format outputs in markdown with code blocks.

PERSONALITY:

  • Professional but approachable.
  • Prefer concise answers with expandable details.

CONTEXT:

  • Users are mid-to-senior engineers on Linux/macOS.
  • Primary stack: AWS, Terraform, GitHub Actions.

Step 3: Upload Knowledge Files

  • Scroll to the Knowledge section in Configure.
  • Click Upload files and select your documents.
  • Supported formats: PDF, DOCX, PPTX, TXT, MD, CSV, JSON, XML, HTML.
  • The GPT uses retrieval-augmented generation (RAG) to search these files at query time.
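To build intuition for why file structure matters, here is a toy sketch of retrieval. It is an illustrative keyword-overlap ranker, not OpenAI's actual implementation (which uses embeddings), and the file names and contents are made up:

```python
def score(query: str, chunk: str) -> int:
    """Count how many query words appear in the chunk (case-insensitive)."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def retrieve(query: str, chunks: dict[str, str], top_k: int = 2) -> list[str]:
    """Return the file names of the top_k best-matching chunks."""
    ranked = sorted(chunks, key=lambda name: score(query, chunks[name]), reverse=True)
    return ranked[:top_k]

# Hypothetical knowledge base: descriptive file names, topic-focused content.
knowledge = {
    "aws-ec2-troubleshooting-2025.pdf": "ec2 instance fails to start check status checks",
    "terraform-style-guide.md": "terraform module naming and state management",
    "github-actions-cheatsheet.md": "github actions workflow triggers and secrets",
}

print(retrieve("why does my ec2 instance fail to start", knowledge, top_k=1))
# → ['aws-ec2-troubleshooting-2025.pdf']
```

The takeaway: retrieval matches on vocabulary overlap between the query and your documents, which is why focused, well-named files beat one monolithic dump.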

**Best practices for knowledge files:**

  • Use descriptive file names (e.g., aws-ec2-troubleshooting-2025.pdf instead of doc1.pdf).
  • Break large documents into topic-specific files for better retrieval accuracy.
  • Include a table-of-contents.md file that maps topics to file names — this helps the GPT route queries.
  • Keep total knowledge base under 50 files for optimal performance.
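A minimal `table-of-contents.md` might look like this (file names here are hypothetical examples):

```markdown
# Knowledge Base Index

| Topic | File |
|---|---|
| EC2 startup and status-check failures | aws-ec2-troubleshooting-2025.pdf |
| Terraform module conventions | terraform-style-guide.md |
| GitHub Actions workflows and secrets | github-actions-cheatsheet.md |
```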

Step 4: Configure Actions (API Integration)

Actions allow your GPT to call external APIs. You need an OpenAPI 3.0+ specification (the example below uses 3.1.0).

4a. Write Your OpenAPI Schema

```json
{
  "openapi": "3.1.0",
  "info": {
    "title": "Ticket Management API",
    "version": "1.0.0",
    "description": "Create and query support tickets"
  },
  "servers": [
    { "url": "https://api.yourcompany.com/v1" }
  ],
  "paths": {
    "/tickets": {
      "get": {
        "operationId": "listTickets",
        "summary": "List open support tickets",
        "parameters": [
          {
            "name": "status",
            "in": "query",
            "required": false,
            "schema": { "type": "string", "enum": ["open", "closed", "pending"] }
          }
        ],
        "responses": {
          "200": {
            "description": "Array of tickets",
            "content": {
              "application/json": {
                "schema": {
                  "type": "array",
                  "items": {
                    "type": "object",
                    "properties": {
                      "id": { "type": "string" },
                      "title": { "type": "string" },
                      "status": { "type": "string" }
                    }
                  }
                }
              }
            }
          }
        }
      },
      "post": {
        "operationId": "createTicket",
        "summary": "Create a new support ticket",
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": {
                "type": "object",
                "required": ["title", "description"],
                "properties": {
                  "title": { "type": "string" },
                  "description": { "type": "string" },
                  "priority": { "type": "string", "enum": ["low", "medium", "high"] }
                }
              }
            }
          }
        },
        "responses": {
          "201": { "description": "Ticket created successfully" }
        }
      }
    }
  }
}
```
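Before pasting the schema into the builder, you can catch common problems locally. This sketch (standard-library Python, run against a deliberately broken fragment rather than the full schema above) checks for the top-level keys an OpenAPI document requires and for a missing `operationId`, one of the most frequent validation errors:

```python
import json

def lint_openapi(raw: str) -> list[str]:
    """Return a list of problems found in a raw OpenAPI JSON document."""
    problems = []
    try:
        spec = json.loads(raw)
    except json.JSONDecodeError as exc:
        # Curly "smart quotes" from word processors are a common culprit here.
        return [f"not valid JSON: {exc}"]
    for key in ("openapi", "info", "paths"):
        if key not in spec:
            problems.append(f"missing top-level '{key}' key")
    for path, ops in spec.get("paths", {}).items():
        for method, op in ops.items():
            if isinstance(op, dict) and "operationId" not in op:
                problems.append(f"{method.upper()} {path} has no operationId")
    return problems

# A deliberately broken fragment: the GET operation lacks an operationId.
broken = '{"openapi": "3.1.0", "info": {"title": "T"}, "paths": {"/tickets": {"get": {"summary": "list"}}}}'
print(lint_openapi(broken))
# → ['GET /tickets has no operationId']
```

This is only a pre-check; the builder and editor.swagger.io still do the authoritative validation.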

4b. Add the Action in GPT Builder

  • Click Create new action in the Configure tab.
  • Paste your OpenAPI schema into the schema editor.
  • Set Authentication: choose API Key, OAuth 2.0, or None.
  • For API Key auth, select the header name (typically Authorization) and enter Bearer YOUR_API_KEY.
  • Set your Privacy policy URL (required for public GPTs).
  • Click Test to validate each endpoint.

4c. Validate with cURL Before Connecting

```bash
# Test your API independently before adding to GPT
curl -X GET "https://api.yourcompany.com/v1/tickets?status=open" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json"

# Test POST endpoint
curl -X POST "https://api.yourcompany.com/v1/tickets" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"title": "Test Ticket", "description": "Created via API", "priority": "low"}'
```
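If you prefer to script the same check, the request can be built with Python's standard library alone (the endpoint and key are the same placeholders as the cURL examples). Constructing the `Request` object without sending it also makes the headers and body easy to inspect:

```python
import json
import urllib.request

API_BASE = "https://api.yourcompany.com/v1"  # placeholder endpoint from the schema

def build_create_ticket(api_key: str, title: str, description: str,
                        priority: str = "low") -> urllib.request.Request:
    """Build (but do not send) the POST /tickets request."""
    body = json.dumps({"title": title, "description": description,
                       "priority": priority}).encode()
    return urllib.request.Request(
        f"{API_BASE}/tickets",
        data=body,
        method="POST",
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )

req = build_create_ticket("YOUR_API_KEY", "Test Ticket", "Created via API")
print(req.get_method(), req.full_url)
# → POST https://api.yourcompany.com/v1/tickets
# Send it with: urllib.request.urlopen(req)
```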

Step 5: Enable Capabilities

Toggle the built-in capabilities your GPT needs:

  • Web Browsing — for real-time information lookup
  • Code Interpreter — for data analysis, chart generation, and running Python
  • DALL·E — for generating visuals on demand

Step 6: Set Sharing & Publishing Options

  • Click Save in the top-right corner.
  • Choose your sharing level:
| Option | Visibility | Use Case |
|---|---|---|
| Only me | Private — only you can use it | Personal productivity tools |
| Anyone with the link | Unlisted — accessible via direct URL | Team tools, client demos |
| Everyone | Public — listed in the GPT Store | Community tools, lead generation |
For Team/Enterprise workspaces, an additional option **Everyone in [Workspace]** restricts access to your organization.

Pro Tips for Power Users

  • Version your instructions: Keep a local copy of your system prompt in a Git repo. This lets you track changes and roll back if a prompt update degrades performance.
  • Use conversation starters: Add 4 starter prompts that showcase your GPT’s best capabilities — these appear as clickable buttons for new users.
  • Chain Actions strategically: Your GPT can call multiple Actions in sequence. Design your API endpoints to be composable (e.g., search → get detail → update).
  • Test with edge cases: Before publishing, test ambiguous queries, empty API responses, and large file references to ensure graceful handling.
  • Monitor usage: In the GPT Store analytics dashboard, track conversation counts and user retention to iterate on your instructions.
  • Structure knowledge files with headers: Use clear H1/H2 markdown headers in your docs — the retrieval system uses these as semantic anchors.

Troubleshooting Common Errors

| Error | Cause | Fix |
|---|---|---|
| "Could not connect to the API" | Server URL mismatch or CORS issue | Verify the `servers.url` in your schema matches your actual endpoint. Ensure your API accepts requests from https://chat.openai.com. |
| "Authentication failed" | Invalid or expired API key | Regenerate your API key and update it in the Action authentication settings. Check the header format (Bearer vs raw key). |
| Knowledge file not referenced | File name is generic or content lacks structure | Rename files descriptively and add a TOC file. Mention file names explicitly in your instructions. |
| "Schema validation error" | OpenAPI spec has syntax issues | Validate your schema at editor.swagger.io before pasting. Common issues: missing operationId, invalid $ref paths. |
| GPT ignores instructions | System prompt is too long or contradictory | Keep instructions under 1500 words. Use clear sections with headers. Remove conflicting rules. |
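The "GPT ignores instructions" entry is easy to check mechanically. A quick lint on your locally saved system prompt (a hypothetical helper script, not a builder feature; the section names match the template from Step 2) looks like:

```python
def lint_prompt(prompt: str, max_words: int = 1500) -> list[str]:
    """Flag common system-prompt problems: excessive length and missing sections."""
    warnings = []
    word_count = len(prompt.split())
    if word_count > max_words:
        warnings.append(f"prompt is {word_count} words (limit {max_words})")
    # Section names assumed from the Step 2 template; adjust to your own structure.
    for section in ("RULES", "PERSONALITY", "CONTEXT"):
        if section not in prompt:
            warnings.append(f"missing '{section}' section")
    return warnings

short_prompt = "You are DevOps Assistant Pro.\nRULES:\n- Be concise.\nCONTEXT:\n- AWS users."
print(lint_prompt(short_prompt))
# → ["missing 'PERSONALITY' section"]
```

Run it as part of the same Git workflow you use to version your instructions, so a bloated or malformed prompt never reaches the builder.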

Frequently Asked Questions

Can I update my Custom GPT’s knowledge files after publishing?

Yes. Open your GPT in the builder, navigate to the Knowledge section, remove outdated files, and upload new ones. Changes take effect immediately — no need to republish. Users in active conversations may need to start a new chat to access the updated knowledge base.

How do I secure sensitive data when using Actions with an external API?

Use OAuth 2.0 authentication instead of static API keys for production environments. Never expose internal-only endpoints; create a dedicated proxy or gateway layer with scoped permissions. Add rate limiting on your API server, and always validate incoming request headers to confirm they originate from OpenAI’s infrastructure.
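A minimal version of that gateway-layer header check, sketched in Python (the header name and secret are illustrative assumptions; adapt them to whatever auth scheme you configure in the Action):

```python
import hmac

EXPECTED_KEY = "s3cret-action-key"  # in production, load this from a secrets manager

def is_authorized(headers: dict[str, str]) -> bool:
    """Accept a request only if it carries the expected bearer token.

    Uses hmac.compare_digest for a constant-time comparison, which avoids
    leaking key prefixes through timing differences.
    """
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    token = auth[len("Bearer "):]
    return hmac.compare_digest(token, EXPECTED_KEY)

print(is_authorized({"Authorization": "Bearer s3cret-action-key"}))  # True
print(is_authorized({"Authorization": "Bearer wrong"}))              # False
```

In a real gateway this check sits in middleware in front of your internal API, alongside the rate limiting mentioned above.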

What are the file size and quantity limits for Knowledge uploads?

Each file can be up to 512 MB, and you can upload up to 20 files per GPT. Supported formats include PDF, DOCX, TXT, CSV, JSON, MD, PPTX, and HTML. For best retrieval performance, keep individual files focused on a single topic and under 50 pages or 100,000 tokens of text content.
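Those limits are easy to enforce with a preflight script before you open the builder (limits hard-coded from the numbers above; the folder you point it at is up to you):

```python
from pathlib import Path

MAX_FILES = 20
MAX_BYTES = 512 * 1024 * 1024  # 512 MB per file
ALLOWED = {".pdf", ".docx", ".pptx", ".txt", ".md", ".csv", ".json", ".xml", ".html"}

def preflight(folder: Path) -> list[str]:
    """Return a list of problems that would block a Knowledge upload."""
    files = sorted(p for p in folder.iterdir() if p.is_file())
    problems = []
    if len(files) > MAX_FILES:
        problems.append(f"{len(files)} files exceeds the {MAX_FILES}-file limit")
    for f in files:
        if f.suffix.lower() not in ALLOWED:
            problems.append(f"{f.name}: unsupported format")
        if f.stat().st_size > MAX_BYTES:
            problems.append(f"{f.name}: larger than 512 MB")
    return problems
```

Point it at your staging folder (for example, `preflight(Path("./knowledge"))`) and fix anything it reports before uploading.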
