How to Use Claude Projects with Custom Instructions and Knowledge Files to Build a Reusable Research Assistant


Claude Projects let you create persistent, context-rich workspaces where Claude remembers your preferences, references your documents, and follows your specific instructions across every conversation. Instead of re-explaining your needs each time, you build a reusable assistant tailored to your exact research workflow. This guide walks you through setting up a Claude Project from scratch, configuring custom instructions, uploading knowledge files, and optimizing the system for serious research work.

Prerequisites

  • A Claude Pro, Team, or Enterprise subscription (Projects are not available on the free tier)
  • Access to claude.ai via a web browser
  • Your research documents in PDF, TXT, CSV, or MD format (max 200,000 tokens per project)

Step 1: Create a New Project

  • Log in to claude.ai and click Projects in the left sidebar.
  • Click Create Project.
  • Name your project descriptively — for example, Market Research Assistant – Q1 2026.
  • Add an optional description to remind yourself of the project’s scope.

Step 2: Write Custom Instructions

Custom instructions define how Claude behaves inside the project. Click Set custom instructions in the project settings panel and enter your system prompt. Here is a battle-tested example for a research assistant:

You are a senior research analyst. Follow these rules in every response:

  1. SOURCING: Always cite the specific knowledge file and section when referencing uploaded documents. Use the format [Source: filename.pdf, p.X].

  2. STRUCTURE: Present findings using this format:

    • Executive Summary (2-3 sentences)
    • Key Findings (bulleted list)
    • Supporting Evidence (from knowledge files)
    • Gaps & Limitations
  3. TONE: Write in professional, concise language suitable for executive stakeholders.

  4. ANALYSIS: When comparing data points, use tables. When identifying trends, provide both the data and your interpretation.

  5. UNCERTAINTY: If the uploaded documents do not contain enough information, say so explicitly. Never fabricate data.

  6. OUTPUT: Default to Markdown formatting. When asked for deliverables, produce structured reports ready for copy-paste into Google Docs or Notion.

This prompt ensures consistent, high-quality output across every conversation within the project.

Step 3: Upload Knowledge Files

  • In your project settings, click Add content under the Knowledge section.
  • Upload your research documents. Supported formats include PDF, TXT, CSV, MD, and common code files.
  • Organize uploads by category — for example, upload competitive analysis reports, market data CSVs, and internal strategy docs separately.
  • Claude will reference these files automatically when answering questions within the project.

Example Knowledge File Structure

| File Name | Type | Purpose |
| --- | --- | --- |
| competitor-analysis-2026.pdf | PDF | Competitive landscape data |
| market-size-data.csv | CSV | TAM/SAM/SOM calculations |
| internal-strategy-memo.md | Markdown | Company strategic priorities |
| customer-survey-results.pdf | PDF | Primary research findings |
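Before uploading a batch like the one above, a small pre-flight check can flag files in formats that need converting first. This is a hypothetical helper, not part of Claude Projects; the extension set below covers only the document formats listed in this guide:

```python
import os

# Document formats accepted by Claude Projects, per the list above
# (common code file extensions could be added to this set as well).
SUPPORTED = {".pdf", ".txt", ".csv", ".md"}

def uploadable(paths):
    """Return the subset of paths whose extension Claude Projects accepts."""
    ok = []
    for path in paths:
        ext = os.path.splitext(path)[1].lower()
        if ext in SUPPORTED:
            ok.append(path)
        else:
            # e.g. .docx files should be exported to PDF or plain text first
            print(f"Convert before upload: {path} ({ext or 'no extension'})")
    return ok
```

Running `uploadable(["survey.pdf", "notes.docx"])` would keep the PDF and flag the Word document for conversion.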
Step 4: Use the Project in Conversations

Start a new conversation inside the project. Claude now has access to your custom instructions and all uploaded knowledge files. Try prompts like:

  • Compare our top 3 competitors based on the uploaded competitor analysis. Present the results in a table with columns for pricing, market share, and key differentiators.
  • Using the customer survey results, identify the top 5 unmet needs and cross-reference them with our internal strategy memo.
  • Generate an executive brief summarizing the market opportunity. Cite specific data points from the uploaded CSV.
Step 5: Use the Claude API with System Prompts

If you prefer programmatic access, replicate the project behavior using the Claude API with a system prompt. Install the SDK first (pip install anthropic):

```python
import anthropic

client = anthropic.Anthropic(api_key="YOUR_API_KEY")

system_prompt = """You are a senior research analyst. Always cite sources explicitly.
Structure responses with: Executive Summary, Key Findings, Evidence, Gaps.
Use tables for comparisons. Never fabricate data."""

# Include knowledge file content as part of the system prompt
with open("competitor-analysis.txt", "r") as f:
    knowledge = f.read()

message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=4096,
    system=f"{system_prompt}\n\n## Reference Data:\n{knowledge}",
    messages=[
        {
            "role": "user",
            "content": "Summarize the competitive landscape and identify our biggest threat.",
        }
    ],
)

print(message.content[0].text)
```
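The single-file approach generalizes to several knowledge files. The stdlib-only sketch below (helper name and file names are hypothetical) builds one labeled system prompt that you would then pass as the `system` parameter of `client.messages.create()`; labeling each section with its filename lets the citation rule from Step 2 reference files by name:

```python
def build_system_prompt(instructions, knowledge_paths):
    """Concatenate local knowledge files into one labeled system prompt,
    mirroring what a Claude Project does automatically."""
    sections = []
    for path in knowledge_paths:
        with open(path, "r", encoding="utf-8") as f:
            # Label each file so responses can cite it by name
            sections.append(f"### {path}\n{f.read()}")
    return instructions + "\n\n## Reference Data:\n" + "\n\n".join(sections)

# Hypothetical usage:
# system_prompt = build_system_prompt(
#     "You are a senior research analyst. Always cite sources explicitly.",
#     ["competitor-analysis.txt", "market-size-data.csv"],
# )
```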

Pro Tips for Power Users

  • Layer your instructions: Put universal rules (tone, format) in custom instructions. Put task-specific rules in your prompts. This keeps the system modular.
  • Version your knowledge files: Name files with dates, like market-data-2026-Q1.csv, so Claude can distinguish between time periods when you upload updated versions.
  • Use starter prompts: Configure 3-4 starter prompts in project settings for recurring tasks like “Weekly competitor update” or “Summarize new survey data.”
  • Chain projects: Create separate projects for different research phases (Discovery, Analysis, Reporting) and transfer findings between them by copying Claude’s outputs into knowledge files for the next phase.
  • Token budget management: Monitor your knowledge file usage. If you approach the 200K token limit, summarize older documents and replace the originals with condensed versions.
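The token-budget tip can be approximated in code. Assuming the common rough heuristic of about 4 characters per token for English prose (for exact numbers, check the Anthropic API's token-counting support), this hypothetical helper estimates whether a set of documents fits under the project limit:

```python
def rough_token_count(text):
    """Very rough estimate: ~4 characters per token for English prose.
    This is a heuristic, not an exact tokenizer."""
    return len(text) // 4

def within_project_budget(texts, limit=200_000):
    """Check combined knowledge content against the ~200K-token project limit.
    Returns (estimated_total, fits)."""
    total = sum(rough_token_count(t) for t in texts)
    return total, total <= limit
```

If the check fails, summarize or split documents before uploading, as described above.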

Troubleshooting Common Issues

| Problem | Cause | Solution |
| --- | --- | --- |
| Claude ignores custom instructions | Instructions are too long or contradictory | Keep instructions under 1,500 words. Remove conflicting rules. Test with a simple prompt first. |
| Knowledge files not referenced | File content may not match the query context | Ask Claude directly: "What documents do you have access to in this project?" Then rephrase your query to match file contents. |
| Upload fails | File exceeds size limit or unsupported format | Convert files to PDF or plain text. Split large files into chunks under 30MB each. |
| Inconsistent output format | Custom instructions lack specificity | Add explicit format templates with examples in your instructions. Use numbered rules rather than prose. |
| API responses differ from web | System prompt missing knowledge context | Manually include knowledge file content in the system prompt when using the API. |
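The "split large files into chunks" fix from the troubleshooting entries can be scripted for plain-text sources. A minimal sketch, assuming paragraph-separated text and a configurable byte limit (the function name is hypothetical):

```python
def split_text(text, max_bytes=30 * 1024 * 1024):
    """Split plain text into chunks at or under max_bytes,
    breaking only on paragraph boundaries (blank lines)."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        candidate = current + ("\n\n" if current else "") + para
        if len(candidate.encode("utf-8")) > max_bytes and current:
            # Close the current chunk and start a new one with this paragraph
            chunks.append(current)
            current = para
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be saved as its own TXT file and uploaded separately. Note that a single paragraph larger than the limit is kept whole rather than split mid-sentence.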
Frequently Asked Questions

How many knowledge files can I upload to a single Claude Project?

Claude Projects support up to approximately 200,000 tokens of knowledge content. This translates to roughly 500 pages of text. You can upload multiple files in any supported format (PDF, TXT, CSV, MD). If you need more capacity, summarize older documents or split your research across multiple focused projects.

Do custom instructions carry over between conversations in the same project?

Yes. Custom instructions persist across every conversation within a project. Each time you start a new chat inside the project, Claude automatically applies your instructions and has access to all uploaded knowledge files. However, conversation history from previous chats does not carry over — only the instructions and knowledge files are persistent.

Can I share a Claude Project with my team?

On Claude Team and Enterprise plans, projects can be shared with team members. All collaborators see the same custom instructions and knowledge files. On Claude Pro (individual), projects are private to your account. To share on supported plans, open the project settings and use the sharing controls to invite team members by email.
