Genspark Case Study: VC Analyst Automated Due Diligence Research, Saving 15 Hours Per Deal
The Problem: 20 Hours of Research Per Deal, 6 Deals Per Month
Sarah Kim, a senior analyst at a mid-stage VC fund ($200M AUM), evaluated approximately 6 startups per month for potential Series A/B investments. Each evaluation required comprehensive due diligence across 8 dimensions:
- Market size and growth trajectory
- Competitive landscape and differentiation
- Founding team backgrounds and track record
- Technology and IP analysis
- Customer traction and unit economics
- Financial health and burn rate
- Regulatory and legal risks
- Market timing and macro trends
The traditional research process took 20 hours per deal — spread across multiple tools:
- PitchBook/Crunchbase for funding and financial data (2 hours)
- LinkedIn for team background research (2 hours)
- Google/Google Scholar for market research (4 hours)
- Industry reports from Gartner/McKinsey (3 hours)
- Customer review sites (G2, Trustpilot) for traction signals (2 hours)
- Patent databases for IP analysis (2 hours)
- News and press for recent developments (2 hours)
- Synthesis and report writing (3 hours)
At 6 deals/month, Sarah spent 120 hours (75% of her working time) on research, leaving minimal time for the analytical judgment that made her valuable — evaluating whether the data supported an investment.
The Solution: SparkPage Research Templates
Sarah built a standardized due diligence workflow using Genspark SparkPages, creating one SparkPage per research dimension per deal.
The 8-Page Template
For each startup evaluation, Sarah created 8 SparkPages following this structure:
Page 1: Market Analysis
"Create a comprehensive market analysis for the [industry] market that [Company Name] operates in. Include: 1. Total addressable market (TAM) with source and methodology 2. Serviceable addressable market (SAM) for [Company]'s specific segment 3. Market growth rate (historical 3-year CAGR and projected 5-year) 4. Key market drivers and tailwinds 5. Market risks and headwinds 6. Regional breakdown if relevant 7. Adjacent markets that could expand the opportunity Cross-reference at least 3 sources for market size estimates. Note any significant discrepancies between sources. Prioritize: analyst reports > industry surveys > company claims."
Page 2: Competitive Landscape
"Map the competitive landscape for [Company Name] in [industry]: 1. Direct competitors (same product category, same customer segment) 2. Indirect competitors (different approach to the same problem) 3. Potential future competitors (adjacent companies that could enter) For each competitor: - Founded, funding, estimated revenue, team size - Key differentiators vs [Company Name] - Recent product launches or strategic moves - Customer sentiment from review sites Create a positioning matrix: X-axis = [key dimension 1], Y-axis = [key dimension 2]. Place each company. Identify: where is the white space? What is [Company]'s defensible moat?"
Pages 3-8 followed similar structured templates for team, technology, traction, financials, regulatory, and market timing.
The Synthesis Page
After the 8 research pages were complete, Sarah created a final synthesis:
"Based on all the research in this conversation, create an investment memo for [Company Name] with: 1. ONE-LINE THESIS: Why this company could be a great investment 2. BULL CASE: Top 3 reasons to invest (with evidence) 3. BEAR CASE: Top 3 risks (with evidence) 4. KEY METRICS: Financial and operational metrics that matter 5. COMPARABLE EXITS: Similar companies that have exited, at what valuations, and what that implies for [Company] 6. QUESTIONS FOR MANAGEMENT: 5 questions we need answered before proceeding 7. RECOMMENDATION: Proceed to partner meeting / Pass / Need more data Be direct and opinionated. This is for an internal investment committee, not a public report."
Implementation: Month 1
Week 1: Template Development and Testing
Sarah tested the template on a deal she had already researched manually. She compared the SparkPage output against her existing research to calibrate:
- Market sizing: SparkPage found the same TAM estimates from Gartner and McKinsey but also surfaced a newer CB Insights report she had missed. Quality: 90% of manual research quality.
- Competitive mapping: SparkPage identified 12 competitors vs. 8 she had found manually. Three of the additional four were genuine competitors she had overlooked. Quality: 95%.
- Team research: Less detailed than manual LinkedIn deep-dives. SparkPage captured public information but missed some career nuances. Quality: 75%.
- Overall: SparkPage covered 85% of the ground in 25% of the time.
Weeks 2-4: Live Deal Evaluation
Sarah used the SparkPage workflow for 5 live deals:
Deal 1: AI-powered logistics startup
- SparkPage research: 4.5 hours (8 pages + synthesis)
- Manual supplementation: 1.5 hours (team background deep-dives)
- Total: 6 hours (vs. 20 hours traditional)
- Quality assessment: “Sufficient for go/no-go decision. Would need deeper financial modeling for term sheet.”
Deal 2: Healthcare SaaS
- SparkPage research: 5 hours (regulatory dimension was complex)
- Manual supplementation: 2 hours (FDA pathway research)
- Total: 7 hours
- Quality: “Regulatory section needed human expertise. Market and competitive sections were excellent.”
Deals 3-5 averaged 5-6 hours each, with increasing efficiency as Sarah refined her templates.
Results After 3 Months
Time Savings
| Dimension | Before (Manual) | After (SparkPage + Manual) | Savings |
|---|---|---|---|
| Market analysis | 4 hours | 45 min | 81% |
| Competitive landscape | 3 hours | 45 min | 75% |
| Team research | 2 hours | 1 hour | 50% |
| Technology/IP | 2 hours | 30 min | 75% |
| Customer traction | 2 hours | 30 min | 75% |
| Financial analysis | 2 hours | 45 min | 63% |
| Regulatory | 2 hours | 45 min | 63% |
| Synthesis and report | 3 hours | 30 min | 83% |
| Total per deal | 20 hours | 5.5 hours | 72% |
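The percentages in the table follow directly from the before/after times (the table rounds to whole percentages). A quick arithmetic check:

```python
# Recompute the savings in the table above from the raw times.
# Values in minutes: (before, after) per dimension.
times = {
    "Market analysis":       (240, 45),
    "Competitive landscape": (180, 45),
    "Team research":         (120, 60),
    "Technology/IP":         (120, 30),
    "Customer traction":     (120, 30),
    "Financial analysis":    (120, 45),
    "Regulatory":            (120, 45),
    "Synthesis and report":  (180, 30),
}

for name, (before, after) in times.items():
    print(f"{name}: {100 * (before - after) / before:.1f}% saved")

total_before = sum(b for b, _ in times.values())  # 1200 min = 20 h
total_after = sum(a for _, a in times.values())   #  330 min = 5.5 h
print(f"Total: {total_before / 60:g} h -> {total_after / 60:g} h "
      f"({100 * (total_before - total_after) / total_before:.1f}% saved)")
```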
Capacity Impact
| Metric | Before | After | Change |
|---|---|---|---|
| Deals evaluated per month | 6 | 12 | +100% |
| Research hours per month | 120 | 66 | -45% |
| Hours for analysis/judgment | 40 | 94 | +135% |
| Deals reaching partner meeting | 2/month | 4/month | +100% |
| Investments closed (quarterly) | 1-2 | 2-3 | +50% |
The most significant outcome: Sarah now spent more time on analytical judgment (is this a good investment?) and less time on information gathering (what does the market look like?).
Quality Assessment
Sarah’s managing partner reviewed 6 SparkPage-assisted memos vs. 6 traditionally researched memos (blind review):
- Comprehensiveness: SparkPage memos scored 4.2/5.0 vs. 4.5/5.0 for traditional (slightly less detail in niche areas)
- Accuracy: 4.4/5.0 vs. 4.3/5.0 (SparkPage caught data points that manual research missed)
- Actionability: 4.5/5.0 vs. 4.0/5.0 (the structured synthesis page improved decision quality)
- Overall: partner rated both approaches as “investment committee ready”
Key Workflow Decisions
1. One SparkPage Per Dimension, Not One Per Deal
Initially, Sarah tried creating a single comprehensive SparkPage per deal. The output was too broad and shallow. Breaking research into 8 focused pages produced deeper, more useful results for each dimension.
2. Source Quality Control
Sarah added source directives to every SparkPage template:
"Prioritize sources from: - Named research firms (Gartner, Forrester, McKinsey, CB Insights) - SEC filings and investor presentations - Peer-reviewed market research - Company official announcements Deprioritize: - Blog posts and opinion articles - Undated or anonymous sources - Company marketing materials (cite but flag as self-reported)"
3. Human-Only Dimensions
Some due diligence dimensions could not be fully automated:
- Founder reference checks: personal conversations, not public data
- Financial model review: requires spreadsheet analysis, not web research
- Product demo assessment: hands-on evaluation
- Cultural and team fit: subjective judgment from meetings
Sarah kept these as manual steps, using SparkPages only for publicly available information research.
4. Continuous Template Refinement
After each deal, Sarah noted what the template missed and refined the prompts. By month 3, the templates were significantly more targeted than the originals — each capturing lessons from 15+ deal evaluations.
Lessons for Other VC Analysts
- Template everything — the time investment in creating templates pays for itself after 2-3 deals
- Break research into dimensions — focused queries produce better results than comprehensive ones
- Add source quality directives — AI research is only as good as its sources
- Keep judgment human — AI gathers information; humans evaluate it
- Refine after every deal — templates should improve with usage
- Share SparkPages with the team — colleagues can review the same research and add their perspective
Frequently Asked Questions
Can SparkPages access gated content like PitchBook?
No. SparkPages search the open web. For gated databases (PitchBook, Crunchbase Pro, Bloomberg), Sarah still uses those tools directly. SparkPages replaced the Google/web research portion, not subscription databases.
How does this compare to using ChatGPT for research?
ChatGPT produces good analysis but with less reliable sourcing. SparkPages provide structured, multi-source research with inline citations — critical for due diligence where every claim needs a verifiable source.
Can this workflow handle international deals?
Yes, with caveats. SparkPages work well for markets with English-language coverage. For deals in markets with limited English press (e.g., Southeast Asian startups), manual research in local languages supplements the SparkPage output.
How does the managing partner feel about AI-assisted research?
After the blind review, the partner endorsed the workflow for all analysts on the team. The key was that the final investment memo quality met the same standard — how it was produced mattered less than its accuracy and completeness.