Legal document review has historically been one of the most labor-intensive, expensive, and error-prone processes in professional services. A due diligence exercise for an M&A transaction can involve reviewing thousands of documents — contracts, policies, regulatory filings — over weeks or months, typically by teams of junior associates billing hundreds of euros per hour.
The Three Waves of Legal Tech
Wave 1: Digitization
The first wave simply moved documents from paper to PDF. Searchable, storable, shareable — but still requiring human eyes to read and analyze every page.
Wave 2: Keyword Search & Analytics
Tools like Relativity and Brainspace brought keyword search, email threading, and predictive coding to eDiscovery. These tools helped find relevant documents faster, but the actual analysis — "is this clause problematic?" — still required human judgment.
Wave 3: Agentic AI
The current wave uses LLMs not just to find documents but to understand them. Multi-agent systems like PAK4L can read a contract, identify regulatory violations, check internal consistency, assess financial risk, and suggest corrections — tasks that previously required multiple specialists over multiple days.
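PAK4L's internals aren't public, but the fan-out pattern described above — several specialist agents each reading the same document, with their findings merged into one report — can be sketched in a few lines. Everything here (the agent names, the `Finding` type, the toy heuristics standing in for LLM calls) is a hypothetical illustration, not the actual system:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    agent: str     # which specialist agent raised the issue
    severity: str  # e.g. "info", "warning", "critical"
    detail: str

def regulatory_agent(text: str) -> list[Finding]:
    # Placeholder heuristic: a real agent would prompt an LLM with the
    # relevant regulatory framework instead of string matching.
    findings = []
    lowered = text.lower()
    if "personal data" in lowered and "gdpr" not in lowered:
        findings.append(Finding("regulatory", "warning",
                                "Personal data mentioned without a GDPR clause"))
    return findings

def consistency_agent(text: str) -> list[Finding]:
    # Placeholder heuristic: flags internally conflicting terms.
    findings = []
    if "30 days" in text and "45 days" in text:
        findings.append(Finding("consistency", "critical",
                                "Conflicting notice periods (30 vs 45 days)"))
    return findings

def review(text: str) -> list[Finding]:
    # Fan the document out to each specialist and merge the findings.
    agents = [regulatory_agent, consistency_agent]
    return [f for agent in agents for f in agent(text)]

contract = ("Termination requires 30 days notice; renewal requires "
            "45 days notice. Personal data may be shared.")
for f in review(contract):
    print(f"[{f.severity}] {f.agent}: {f.detail}")
```

The point of the structure is that each agent stays narrow and auditable — adding a financial-risk or clause-correction specialist means adding one function to the list, not retraining a monolith.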
What Changes
- Speed: A review that took 3 days takes 3 minutes
- Cost: Analysis that cost thousands of euros costs single-digit credits
- Consistency: AI doesn't get tired on page 87 of a 100-page document
- Accessibility: Small firms and solo practitioners get the same analytical depth as Big Four consultancies
What Doesn't Change
AI doesn't replace legal judgment. It replaces the mechanical parts of review — finding issues, checking compliance boxes, flagging inconsistencies. The strategic decisions — whether to accept a risk, how to negotiate a clause, when to walk away from a deal — remain firmly human.
The best analogy is the calculator. Accountants didn't disappear when calculators arrived. They stopped doing arithmetic and started doing analysis. Legal AI is the calculator for document review.
Looking Forward
The next frontier is iterative AI review — systems that don't just identify issues but help you fix them, then re-review the revised document to confirm the fixes are correct. Combined with version tracking and collaborative workflows, this creates a continuous improvement loop where documents get better with each iteration.
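The review-fix-re-review loop above is simple to express in code. This is a minimal sketch under stated assumptions — the `review_fn` and `fix_fn` below are toy stand-ins for LLM-backed steps, and the version list is the bare minimum of "version tracking":

```python
def iterative_review(text, review_fn, fix_fn, max_rounds=3):
    """Review -> fix -> re-review until clean or the round budget runs out."""
    history = [text]  # version tracking: keep every revision
    for _ in range(max_rounds):
        findings = review_fn(text)
        if not findings:
            break  # re-review confirmed the document is clean
        text = fix_fn(text, findings)
        history.append(text)
    return text, history

# Toy stand-ins: flag unresolved "TBD" placeholders, fix one per round.
def review_fn(text):
    return ["unresolved placeholder"] if "TBD" in text else []

def fix_fn(text, findings):
    return text.replace("TBD", "30 days", 1)

draft = "Notice period: TBD. Cure period: TBD."
final, versions = iterative_review(draft, review_fn, fix_fn)
```

Note that the loop terminates on a clean re-review, not on the fix itself — that last confirmation pass is what separates this from a one-shot edit.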
We're also seeing the emergence of domain-specific knowledge bases — curated collections of regulatory frameworks, case law, and best practices that AI agents can reference during review. These transform general-purpose models into domain experts, closing the gap between AI capability and professional specialization.