Genspark Case Study: How a Law Firm Automated Legal Research and Cut Brief Preparation Time by 60%
The Problem: $400/Hour Associates Doing $50/Hour Research Work
A 45-attorney mid-size law firm specializing in commercial litigation faced a persistent efficiency problem. Junior associates spent 30-40% of their billable hours on legal research — searching for relevant case law, analyzing regulatory frameworks, and compiling citations for briefs and memos. At billing rates of $300-450/hour, this research work was expensive for clients and unfulfilling for associates who wanted to spend more time on strategy and advocacy.
The firm’s research workflow was traditional:
- Receive a legal question from a partner or client
- Search Westlaw or LexisNexis for relevant case law (2-4 hours)
- Read and assess relevance of 20-50 cases (3-6 hours)
- Identify the strongest authorities and counter-authorities (1-2 hours)
- Draft a research memo or brief section (2-4 hours)
- Senior review and revision (1-2 hours)
Total: 10-18 hours per research assignment. At associate billing rates, each research project cost clients $3,000-8,000. The firm handled 15-20 such projects per week.
The managing partner wanted to reduce research time without sacrificing quality — ideally without replacing Westlaw, which remained essential for verified legal databases.
Why Genspark Was Chosen
The firm evaluated ChatGPT, Perplexity, and Genspark for research augmentation. The evaluation criteria were specific to legal work:
ChatGPT limitations for legal research:
- No reliable citation to actual case law (hallucinated case names and citations)
- No ability to search legal databases
- Could not distinguish between binding and persuasive authority
- No source verification capability
Perplexity advantages and limitations:
- Good at searching the open web for regulatory and news content
- Citations were verifiable
- Weak on deep legal analysis requiring synthesis across many sources
- Did not account for legal hierarchy (Supreme Court vs. district court authority)
Genspark advantages:
- Sparkpage feature allowed building persistent research spaces for ongoing cases
- Multi-source synthesis was strong — could identify themes across many documents
- Citation quality was reasonable for secondary sources (law review articles, regulatory guidance, news coverage)
- Could synthesize complex factual patterns across multiple sources
- The “deep research” mode performed well on nuanced analytical questions
The firm adopted Genspark as a research accelerator — not a replacement for Westlaw, but a complement that handled the initial research scoping, secondary source discovery, and synthesis work. Associates still used Westlaw for primary legal authority verification.
Implementation: The Three-Phase Research Workflow
Phase 1: Research Scoping with Genspark (30-60 minutes)
Before diving into Westlaw, associates now started with Genspark to map the research landscape:
"I need to research [legal question]. Provide: 1. The key legal issues this question raises 2. The relevant areas of law (federal/state, statutory/common law) 3. Landmark cases that are likely relevant (note: I will verify these in Westlaw — just give me starting points) 4. Any relevant statutes or regulations 5. Recent law review articles or bar publications on this topic 6. Any circuit splits or unsettled areas of law 7. Common arguments made by each side in similar cases"
This scoping query replaced 2-3 hours of initial Westlaw exploration. Associates reported that Genspark consistently identified the relevant legal frameworks and starting points, even for unfamiliar areas of law.
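For firms that want to standardize this step, the scoping prompt can live as a reusable template rather than being retyped for each assignment. The sketch below is a minimal illustration in Python and assumes nothing about Genspark's tooling: it only produces a prompt string to paste into Genspark, or to hand to whatever integration a firm already has. The function name is invented for illustration.

```python
# Minimal sketch of a reusable research-scoping prompt template.
# The output is a plain prompt string; how it reaches Genspark (web UI,
# internal tooling) is up to the firm -- nothing here is a Genspark API.

SCOPING_TEMPLATE = """I need to research {question}. Provide:
1. The key legal issues this question raises
2. The relevant areas of law (federal/state, statutory/common law)
3. Landmark cases that are likely relevant (note: I will verify these in
   Westlaw -- just give me starting points)
4. Any relevant statutes or regulations
5. Recent law review articles or bar publications on this topic
6. Any circuit splits or unsettled areas of law
7. Common arguments made by each side in similar cases"""


def build_scoping_prompt(question: str) -> str:
    """Fill the firm's standard scoping template with a legal question."""
    return SCOPING_TEMPLATE.format(question=question)


if __name__ == "__main__":
    # Hypothetical example question; paste the result into Genspark.
    print(build_scoping_prompt(
        "whether a non-solicitation clause is enforceable against a "
        "former contractor in California"
    ))
```

Keeping the template in one shared place also makes it easy to revise the firm's standard scoping questions as the workflow matures.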
Phase 2: Deep Research in Westlaw (2-4 hours, down from 5-10)
Armed with Genspark’s scoping output, associates entered Westlaw with targeted search queries instead of broad exploratory searches. They knew which cases to verify, which statutory sections to check, and which secondary sources to pull.
The time savings came largely from eliminating wrong-path research. Without scoping, an associate might spend 90 minutes exploring a line of cases that turns out to be irrelevant. With Genspark's overview, those dead ends were identified before the associate entered Westlaw.
Phase 3: Synthesis and Drafting with Genspark (1-2 hours)
After verifying authorities in Westlaw, associates used Genspark to help synthesize findings:
"I'm writing a brief section arguing that [legal position]. Here are the key cases I've identified (verified in Westlaw): [list of cases with holdings] And the opposing authorities: [list of counter-cases] Help me: 1. Organize these authorities into a logical argument structure 2. Identify the strongest distinguishing factors for the opposing cases 3. Suggest the most persuasive ordering of arguments 4. Draft a topic sentence for each paragraph of the argument 5. Identify any gaps in my authority that I should fill"
This synthesis step replaced 2-3 hours of staring at printed cases and trying to organize arguments mentally. Genspark provided a structural framework that associates then refined and wrote in their own legal prose.
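The synthesis prompt lends itself to the same treatment. The helper below is a hypothetical sketch that assembles Westlaw-verified and opposing authorities into the Phase 3 prompt; the function name and the example case entries are placeholders, not part of the firm's actual tooling.

```python
# Hypothetical helper that builds the Phase 3 synthesis prompt from
# authorities an associate has already verified in Westlaw.

def build_synthesis_prompt(position: str,
                           key_cases: list[str],
                           opposing_cases: list[str]) -> str:
    key = "\n".join(f"- {c}" for c in key_cases)
    opposing = "\n".join(f"- {c}" for c in opposing_cases)
    return (
        f"I'm writing a brief section arguing that {position}.\n\n"
        f"Here are the key cases I've identified (verified in Westlaw):\n{key}\n\n"
        f"And the opposing authorities:\n{opposing}\n\n"
        "Help me:\n"
        "1. Organize these authorities into a logical argument structure\n"
        "2. Identify the strongest distinguishing factors for the opposing cases\n"
        "3. Suggest the most persuasive ordering of arguments\n"
        "4. Draft a topic sentence for each paragraph of the argument\n"
        "5. Identify any gaps in my authority that I should fill"
    )


# Placeholder case entries for illustration only.
print(build_synthesis_prompt(
    "the non-compete is unenforceable",
    ["Case A v. Case B (holding summary)"],
    ["Case C v. Case D (holding summary)"],
))
```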
Building Sparkpages for Ongoing Cases
Case-Specific Research Hubs
For cases lasting months or years, associates created Sparkpages that accumulated research over time:
Sparkpage: "Smith v. Jones Corp — Research Hub" Sources added over time: - Complaint and answer - Key discovery documents - Deposition transcripts (relevant excerpts) - Expert reports - Case law found during research phases - Regulatory guidance documents - Industry publications relevant to the subject matter
The Sparkpage became a queryable knowledge base for the case. Six months into litigation, an associate could ask: “Based on all the research in this Sparkpage, what is our strongest argument for summary judgment on the breach of contract claim?” and receive an answer that synthesized months of accumulated research.
Practice Area Knowledge Bases
The firm created shared Sparkpages for common practice areas:
"Employment discrimination — Recent developments" "Non-compete enforcement — State-by-state analysis" "Data privacy litigation — CCPA and state law update"
These living knowledge bases reduced redundant research. When three different associates researched similar questions, the accumulated findings were available to all.
Results After 6 Months
Time Savings
| Research Phase | Before (hours) | After (hours) | Savings |
|---|---|---|---|
| Initial scoping | 2-4 | 0.5-1 | 60-75% |
| Primary research (Westlaw) | 5-10 | 2-4 | 50-60% |
| Synthesis and drafting | 3-6 | 1-2 | 55-70% |
| Senior review | 1-2 | 0.5-1 | 50% |
| Total per project | 10-18 | 4-8 | ~60% |
Average research project time dropped from 14 hours to 6 hours — a 57% reduction.
Financial Impact
- Research projects per week: 17 (average)
- Hours saved per project: 8 (average)
- Hours saved per week: 136
- Associate billing rate: $375/hour (average)
- Value of saved time: $51,000/week

This time was reallocated to:
- Higher-value strategic work (billable at same or higher rates)
- More thorough analysis on complex matters
- Earlier identification of case weaknesses
- Faster turnaround for clients (competitive advantage)
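The weekly figure follows directly from the averages above; a few lines of Python reproduce the arithmetic, using only the case study's stated averages.

```python
# Reproduce the case study's weekly savings arithmetic from its stated averages.
projects_per_week = 17          # average research projects per week
hours_saved_per_project = 8     # average (14 hours before vs. 6 hours after)
billing_rate = 375              # average associate billing rate, $/hour

hours_saved_per_week = projects_per_week * hours_saved_per_project
value_per_week = hours_saved_per_week * billing_rate

print(f"Hours saved per week: {hours_saved_per_week}")    # 136
print(f"Value of saved time:  ${value_per_week:,}/week")  # $51,000/week
```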
Quality Metrics
The firm tracked quality through senior attorney review:
| Quality Metric | Before | After | Change |
|---|---|---|---|
| Citations rejected in review | 8% per memo | 4% per memo | -50% |
| Research gaps identified in review | 12% of memos | 7% of memos | -42% |
| Counter-arguments missed | 15% of briefs | 6% of briefs | -60% |
| Client feedback rating | 4.1/5.0 | 4.5/5.0 | +10% |
Quality improved because:
- Better initial scoping meant fewer missed authorities
- Genspark identified counter-arguments that associates might have overlooked
- Synthesis assistance produced more organized, logical arguments
- Faster turnaround allowed more time for refinement
Associate Satisfaction
An internal survey showed:
- 82% of associates reported that the improved research workflow positively affected their job satisfaction
- Most cited reason: “Spending less time on mechanical search and more on analysis and writing”
- 91% said they would not want to return to the previous workflow
- One associate noted: “I used to spend my first two years doing research I could have done as a paralegal. Now I’m doing associate-level analysis from day one.”
What Went Wrong
Problem 1: Hallucinated Case Citations in the First Month
In the first month, two associates cited cases in draft briefs that turned out not to exist; they had used Genspark's suggested case names without verifying them in Westlaw. One error was caught in senior review. The other made it into a filed brief and had to be corrected in an amended filing.
Root cause: Associates treated Genspark’s case citations as verified authority. Genspark, like all AI tools, can generate plausible but non-existent case citations.
Fix: The firm implemented an absolute rule: “No case citation from Genspark may appear in any document without Westlaw verification. Genspark provides research direction, not citable authority.” This was added to the firm’s research guidelines and reinforced in training. No further incidents occurred.
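For firms that track research status in software, the same rule can be made explicit in the memo workflow. The sketch below is purely illustrative (the firm in this case study enforced the rule through guidelines and training, not code), and every name in it, including the sample case and reporter cite, is hypothetical.

```python
from dataclasses import dataclass

# Illustrative sketch of the verification rule: a citation suggested by
# Genspark is research direction only until it has been confirmed in Westlaw.

@dataclass
class Citation:
    case_name: str
    reporter_cite: str
    source: str                  # e.g. "genspark" or "westlaw"
    westlaw_verified: bool = False

    @property
    def citable(self) -> bool:
        # Only Westlaw-verified authority may appear in a filed document.
        return self.westlaw_verified


suggestion = Citation("Smith v. Jones Corp", "123 F.3d 456", source="genspark")
assert not suggestion.citable          # research direction, not citable authority

suggestion.westlaw_verified = True     # associate confirms the case in Westlaw
assert suggestion.citable
```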
Problem 2: Regulatory Analysis Outdated
For one matter involving recent EPA regulations, Genspark provided analysis based on pre-amendment regulatory text. The regulation had been amended 3 months earlier, and Genspark’s sources had not yet reflected the change.
Root cause: Genspark searches the web, and some regulatory sources are not updated in real time.
Fix: For regulatory research, associates now cross-reference Genspark’s regulatory analysis with the Federal Register or the relevant agency’s website. The rule: “Regulatory text must be verified against the official source within the last 30 days.”
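The 30-day rule is easy to check mechanically if the verification date is recorded. A minimal sketch, assuming only that each regulatory source carries a "last verified against the official source" date; the function name is invented for illustration.

```python
from datetime import date, timedelta

# Sketch of the firm's regulatory rule: regulatory text relied on in a memo
# must have been checked against the official source (Federal Register or the
# agency's website) within the last 30 days.

def regulatory_check_is_current(last_verified: date, as_of: date | None = None) -> bool:
    """True if the official-source check happened within the last 30 days."""
    as_of = as_of or date.today()
    return (as_of - last_verified) <= timedelta(days=30)


assert regulatory_check_is_current(date.today() - timedelta(days=10))
assert not regulatory_check_is_current(date.today() - timedelta(days=45))
```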
Problem 3: Over-Reliance by Junior Associates
One junior associate began using Genspark to generate entire brief sections, which she submitted for review with minimal editing. The writing was competent but generic — it lacked the firm’s analytical style and missed nuances that an experienced litigator would catch.
Root cause: The associate treated Genspark as a drafting tool rather than a research tool. She used its synthesis output as final prose rather than as a structural framework.
Fix: The firm clarified the boundary: “Genspark assists with research scoping, authority discovery, and argument structure. All legal writing in filings and client deliverables must be written by the attorney in their own words. Genspark output is a starting point, not a final product.”
Lessons for Other Law Firms
Start with Research Scoping, Not Drafting
The highest-value use of Genspark is the first step — mapping the research landscape. This saves the most time (eliminating wrong-path research) with the lowest risk (no client-facing output depends on unverified AI suggestions).
Maintain the Verification Layer
AI tools are research accelerators, not authority sources. Every factual claim, case citation, and statutory reference must be verified against authoritative legal databases. This is not optional — it is a professional responsibility.
Build Knowledge Incrementally
Sparkpages that accumulate research over a case's lifetime become increasingly valuable. The 100th document added to a Sparkpage makes every subsequent query more powerful because the knowledge base it draws on is more comprehensive.
Train on the Workflow, Not Just the Tool
Associates who understood the three-phase workflow (scope with Genspark, verify in Westlaw, synthesize with Genspark) were productive within a week. Associates who were just told “use Genspark for research” either over-relied on it or under-utilized it.
Measure Outcomes, Not Just Time Savings
Time savings are easy to measure. Quality improvements are harder but more important. Track citation acceptance rates, counter-argument identification, and client satisfaction alongside hours saved.
Frequently Asked Questions
Can Genspark replace Westlaw or LexisNexis?
No. Genspark searches the open web, which includes some legal content but does not provide the comprehensive, verified, and citable legal databases that Westlaw and LexisNexis offer. Genspark is a research accelerator, not a legal database replacement.
Is it ethical for attorneys to use AI for legal research?
Yes, with appropriate safeguards. Most bar associations have issued guidance permitting AI tool use in legal research provided that: the attorney verifies all citations, the attorney exercises independent judgment, and the attorney does not delegate professional responsibility to the AI tool. Check your jurisdiction’s specific guidance.
What about confidentiality?
Do not input confidential client information into Genspark queries. Frame research questions in general terms: “What is the standard for piercing the corporate veil in Delaware?” not “Our client Acme Corp needs to pierce the veil of XYZ LLC.” The firm’s policy: “No client names, case details, or privileged information in AI tool queries.”
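A lightweight pre-query screen can back up this policy. The sketch below is hypothetical and not part of the firm's actual tooling; a real blocklist would be generated from the firm's client and matter management system rather than hard-coded.

```python
# Hypothetical pre-query confidentiality screen backing the firm's policy:
# no client names, case details, or privileged information in AI tool queries.
# The blocklist is illustrative only.

CONFIDENTIAL_TERMS = {"acme corp", "xyz llc"}  # client and adverse-party names


def is_query_allowed(query: str) -> bool:
    """Return False if the query mentions any blocklisted confidential term."""
    lowered = query.lower()
    return not any(term in lowered for term in CONFIDENTIAL_TERMS)


assert is_query_allowed(
    "What is the standard for piercing the corporate veil in Delaware?"
)
assert not is_query_allowed(
    "Our client Acme Corp needs to pierce the veil of XYZ LLC."
)
```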
How does this affect billing?
The firm adjusted its billing practices: research that previously took 14 hours and was billed at associate rates now takes 6 hours. Clients receive lower bills for research. The firm’s revenue per attorney increased because associates reallocated saved time to higher-value work. Client satisfaction improved because bills were lower and turnaround was faster.
What size firm does this work for?
Any size. Solo practitioners benefit the most from research time savings (they do everything themselves). Large firms benefit from the knowledge base accumulation (Sparkpages shared across offices). Mid-size firms, like this case study, benefit from both.