# Perplexity Spaces Case Study: How a VC Analyst Team Cut Due Diligence Prep from 12 Hours to 90 Minutes
## Executive Summary
A mid-market venture capital firm with a six-person analyst team was spending an average of 12 hours per startup evaluation on manual deal sourcing, competitor mapping, and market sizing. By implementing Perplexity Spaces as their collaborative research hub, they reduced due diligence preparation time to 90 minutes per evaluation — an 87% reduction — while simultaneously improving citation quality and report consistency. This case study walks through the exact setup, API integration, and workflow automation that made this transformation possible.
## The Problem: Manual Research Bottlenecks in Deal Flow
Before adopting Perplexity Spaces, the analyst team faced three critical bottlenecks:
- **Fragmented sourcing:** Analysts juggled 8+ tabs across Crunchbase, PitchBook, Google Scholar, and SEC filings to gather data on a single startup.
- **No shared context:** Research done by one analyst was invisible to the others, leading to duplicated work on overlapping deals.
- **Unreliable market sizing:** Estimates lacked traceable citations, creating friction during investment committee reviews.
## Solution Architecture: Perplexity Spaces + API Automation

### Step 1: Install the Perplexity SDK
The team began by setting up programmatic access to Perplexity’s Sonar API for automated research workflows.
```bash
# Install the Perplexity Python SDK
pip install perplexity-sdk requests

# Verify installation
python -c "import perplexity; print(perplexity.__version__)"
```
### Step 2: Configure API Access
```python
# config.py — Perplexity API configuration
import os

PERPLEXITY_API_KEY = os.getenv("PERPLEXITY_API_KEY", "YOUR_API_KEY")
BASE_URL = "https://api.perplexity.ai"
MODEL = "sonar-pro"  # Use sonar-pro for citation-rich research

HEADERS = {
    "Authorization": f"Bearer {PERPLEXITY_API_KEY}",
    "Content-Type": "application/json",
}
```
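A placeholder key passes silently at configuration time and only surfaces later as a 401 error, so a small guard can fail fast before the first request. The helpers below are a sketch for this guide, not part of any official SDK; `load_api_key` and `build_headers` are hypothetical names:

```python
import os

def load_api_key() -> str:
    """Read the API key from the environment, failing fast if it is unset."""
    key = os.getenv("PERPLEXITY_API_KEY", "")
    if not key or key == "YOUR_API_KEY":
        raise RuntimeError("Set PERPLEXITY_API_KEY before running research scripts")
    return key

def build_headers(key: str) -> dict:
    """Assemble the Bearer-token headers used by every request in this guide."""
    return {"Authorization": f"Bearer {key}", "Content-Type": "application/json"}
```

Calling `load_api_key()` at the top of each script turns a confusing mid-run failure into an immediate, actionable error.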
### Step 3: Create Dedicated Spaces for Each Deal
The team organized Perplexity Spaces into a three-tier structure:
| Space Type | Purpose | Access Level |
|---|---|---|
| Deal Pipeline | Active startup evaluations with shared threads | Full analyst team |
| Sector Research | Ongoing industry monitoring (AI/ML, Fintech, Climate) | Sector-assigned analysts |
| IC Prep | Finalized reports for investment committee | Partners + lead analyst |
### Step 4: Automate Competitor Mapping
The following script automates competitor landscape generation for any target startup:
```python
import requests

def map_competitors(startup_name, sector, api_key="YOUR_API_KEY"):
    """Generate a citation-backed competitor map for a target startup."""
    url = "https://api.perplexity.ai/chat/completions"
    payload = {
        "model": "sonar-pro",
        "messages": [
            {
                "role": "system",
                "content": "You are a venture capital research analyst. "
                           "Provide structured competitor analysis with "
                           "funding data, key differentiators, and source citations."
            },
            {
                "role": "user",
                "content": f"Map the competitive landscape for {startup_name} "
                           f"in the {sector} sector. Include: "
                           f"1) Direct competitors with funding amounts, "
                           f"2) Indirect competitors from adjacent markets, "
                           f"3) Key differentiators for each, "
                           f"4) Market positioning matrix."
            }
        ],
        "return_citations": True,
        "search_recency_filter": "month"
    }
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json"
    }
    response = requests.post(url, json=payload, headers=headers)
    response.raise_for_status()  # surface auth and rate-limit errors early
    result = response.json()
    content = result["choices"][0]["message"]["content"]
    citations = result.get("citations", [])
    return {"analysis": content, "sources": citations}

# Usage
result = map_competitors("ExampleAI", "enterprise AI infrastructure")
print(result["analysis"])
print(f"\nBacked by {len(result['sources'])} citations")
```
### Step 5: Generate Citation-Backed Market Sizing Reports
```python
import requests

def market_sizing_report(market_description, api_key="YOUR_API_KEY"):
    """Generate TAM/SAM/SOM analysis with traceable citations."""
    url = "https://api.perplexity.ai/chat/completions"
    payload = {
        "model": "sonar-pro",
        "messages": [
            {
                "role": "system",
                "content": "You are a market research analyst at a VC firm. "
                           "All market size figures MUST include source citations. "
                           "Use bottom-up and top-down approaches."
            },
            {
                "role": "user",
                "content": f"Provide a TAM/SAM/SOM analysis for: {market_description}. "
                           f"Include growth rates (CAGR), key assumptions, "
                           f"and cite every data point to its original source."
            }
        ],
        "return_citations": True,
        "search_recency_filter": "year"
    }
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json"
    }
    response = requests.post(url, json=payload, headers=headers)
    response.raise_for_status()
    return response.json()

# Usage
report = market_sizing_report("AI-powered contract analysis for mid-market legal departments")
print(report["choices"][0]["message"]["content"])
```
## The 90-Minute Due Diligence Workflow
After implementation, the team standardized a repeatable evaluation workflow:

- **Minutes 0–15:** Create a new Space for the deal. Run the competitor mapping script. Share the Space with the assigned analyst pair.
- **Minutes 15–40:** Execute market sizing queries within the Space. Perplexity retains context from the competitor analysis, enriching the TAM/SAM/SOM output.
- **Minutes 40–60:** Use follow-up threads in the Space to investigate founder backgrounds, patent filings, and regulatory risks — all citation-backed.
- **Minutes 60–75:** Review auto-generated citations. Flag any single-source claims for manual verification.
- **Minutes 75–90:** Export the Space thread as the structured memo draft for IC review.

## Results
| Metric | Before | After | Improvement |
|---|---|---|---|
| Due diligence prep time | 12 hours | 90 minutes | 87% reduction |
| Deals evaluated per week | 3–4 | 12–15 | 3.5x throughput |
| Citations per report | 8–12 (manual) | 35–50 (auto) | 4x source density |
| Duplicate research across team | ~40% overlap | <5% overlap | Shared Spaces |
## Pro Tips

- **Set Space-level instructions:** Add a standing instruction such as "Always include funding round dates and lead investor names when discussing competitors" so every query in that Space follows your firm's reporting standards.
- **Use search_recency_filter strategically:** Set it to "week" for news-sensitive queries (funding announcements, exec changes) and "year" for market sizing to capture comprehensive data.
- **Chain Spaces for pipeline stages:** Move a deal from the Pipeline Space to the IC Prep Space when ready. The IC Prep Space can have stricter system prompts requiring quantitative backing for every claim.
- **Batch API calls with threading:** Use Python's concurrent.futures.ThreadPoolExecutor to run competitor mapping across five startups simultaneously for sector-wide scans.
- **Export citations as BibTeX:** Parse the citations array from API responses into BibTeX format for integration with your firm's knowledge management system.
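The threading tip can be sketched as a small generic wrapper. `batch_map` is a hypothetical helper name; it accepts any single-startup function, such as the `map_competitors` function from Step 4, and collects results without letting one failed call sink the whole batch:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def batch_map(fn, startups, max_workers=5):
    """Run a single-startup research function across several startups concurrently.

    Returns a dict keyed by startup name; failures are captured per startup
    rather than aborting the whole sector scan.
    """
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(fn, name): name for name in startups}
        for future in as_completed(futures):
            name = futures[future]
            try:
                results[name] = future.result()
            except Exception as exc:
                results[name] = {"error": str(exc)}
    return results
```

For a sector-wide scan, `batch_map(map_competitors, ["StartupA", "StartupB", ...])` runs the queries in parallel while keeping each result attributable to its startup.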
## Troubleshooting
| Issue | Cause | Solution |
|---|---|---|
| 401 Unauthorized | Invalid or expired API key | Regenerate your key at perplexity.ai/settings/api and update your environment variable. |
| Citations missing from response | return_citations not set | Ensure "return_citations": true is included in every API payload. |
| Stale market data in reports | Default recency filter too broad | Set "search_recency_filter": "month" for time-sensitive financial data. |
| Rate limiting on batch queries | Exceeding API tier limits | Add time.sleep(1) between calls or upgrade to a higher-rate API plan. |
| Space context not carrying over | New thread started outside the Space | Ensure all follow-up queries are posted within the same Space thread, not as new conversations. |
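For the rate-limiting row above, a fixed `time.sleep(1)` works, but exponential backoff recovers faster when limits are intermittent. The sketch below is a generic retry wrapper (the `with_backoff` name is ours, not an SDK function); it retries any zero-argument callable that raises on failure, such as a request followed by `raise_for_status()`:

```python
import time

def with_backoff(call, retries=4, base_delay=1.0):
    """Retry a rate-limited API call with exponential backoff.

    Waits base_delay, 2*base_delay, 4*base_delay, ... between attempts,
    re-raising the last exception if all retries are exhausted.
    """
    for attempt in range(retries):
        try:
            return call()
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

A batch scan would then wrap each query, e.g. `with_backoff(lambda: map_competitors("ExampleAI", "enterprise AI infrastructure"))`.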
## FAQ

### Can Perplexity Spaces replace dedicated VC research platforms like PitchBook or CB Insights?
Perplexity Spaces complements rather than fully replaces specialized platforms. It excels at synthesizing open-web information with citations and dramatically accelerates the initial research phase. However, proprietary databases like PitchBook still offer structured financial data fields (cap tables, valuation histories) that Perplexity cannot access. The most effective setup uses Perplexity Spaces for rapid qualitative research and narrative synthesis, then cross-references key figures against proprietary databases during the verification step.
### How reliable are the citations in Perplexity’s market sizing outputs?
Citations from the Sonar Pro model are generally traceable and accurate, but they should be treated as a strong starting point rather than final authority. In the case study team’s experience, roughly 90% of citations linked to valid, relevant sources. The remaining 10% occasionally pointed to outdated pages or tangentially related content. The team’s workflow accounts for this by including a dedicated 15-minute citation review step before finalizing any report for the investment committee.
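Part of that citation review can be triaged automatically. The sketch below (a hypothetical helper, assuming citations arrive as a list of URLs in the API response) groups citations by domain and surfaces domains cited only once, which are the single-source claims the workflow flags for manual verification:

```python
from collections import Counter
from urllib.parse import urlparse

def flag_single_source_domains(citations):
    """Return domains that back only one citation, sorted alphabetically.

    Claims resting on a single domain are the ones to verify by hand
    before a report reaches the investment committee.
    """
    domains = Counter(urlparse(url).netloc for url in citations)
    return sorted(domain for domain, count in domains.items() if count == 1)
```

Running this against `result["sources"]` from the competitor mapping script gives analysts a short verification list instead of a full re-read.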
### What is the API cost for running this due diligence workflow per startup evaluation?
Using the Sonar Pro model, a typical 90-minute evaluation involves 8–12 API calls (competitor mapping, market sizing, founder research, regulatory queries). At current pricing tiers, this translates to approximately $0.50–$1.50 per full evaluation, depending on response length and search depth. Compared to the analyst time saved — converting 12 hours of senior analyst work into 90 minutes — the API cost is negligible relative to the labor cost reduction.