FutureSearch docs
Multi-Stage Lead Qualification

Go to futuresearch.ai/app, upload a CSV of 20 investment funds, and enter:

Score each fund 0-100 on likelihood to adopt research tools.

Then download the results, filter to funds scoring >= 50, re-upload, and run two more stages: team size estimation, then a final screen. 14 of 20 funds qualified.
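The filter-and-re-upload step between stages can be done locally with pandas. A minimal sketch, assuming the stage-1 export has a "score" column as in this case study (the fund names and the 40-point row here are hypothetical placeholders):

```python
import pandas as pd

# Hypothetical stage-1 export; column names assumed to match the case study.
results = pd.DataFrame({
    "fund": ["Tiny Ventures GP", "Fixed Income Plus", "Example Fund"],
    "score": [85, 55, 40],
})

# Keep only funds scoring >= 50, then save as the stage-2 upload
stage2_input = results[results["score"] >= 50]
stage2_input.to_csv("funds_stage2.csv", index=False)
```

Your actual export may use different column names; adjust the threshold expression accordingly.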

Add the everyrow connector if you haven't already. Then upload a CSV of 20 investment funds and ask Claude:

I have a CSV of 20 investment funds. Run this pipeline: 1. Score each fund 0-100 on likelihood to adopt research tools. 2. Filter to funds scoring >= 50. 3. For remaining funds, estimate their investment team size. 4. Final screen: include if score >= 70 OR team size <= 5.

14 of 20 funds qualified after the three-stage pipeline. Results take about 4 minutes.

Claude Code handles a single scoring or filtering step natively. Chaining several stages — scoring by research adoption, filtering by threshold, estimating team sizes, then screening by a compound rule — needs an approach where each stage passes its output to the next, with custom logic between steps.

Here, we get Claude Code to run a three-stage qualification pipeline on 20 investment funds.

Metric           Value
Input funds      20
After scoring    15
Final qualified  14
Total cost       $0.53
Time             4.2 minutes
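A quick back-of-envelope on these metrics, under the assumption (not stated in the run output) that the $0.53 total is spread evenly across every row processed in the three everyrow stages:

```python
# Rows processed per everyrow stage: 20 scored, 15 team-size estimated,
# 15 screened (the pandas filter between stages is free).
total_cost = 0.53
row_ops = 20 + 15 + 15  # 50 row-operations
print(f"{row_ops} row-operations at ~${total_cost / row_ops:.4f} each")
```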

Add everyrow to Claude Code if you haven't already:

claude mcp add futuresearch --scope project --transport http https://mcp.futuresearch.ai/mcp

Tell Claude to run the multi-stage pipeline:

I have a CSV of 20 investment funds. Run this pipeline:
1. Score each fund 0-100 on likelihood to adopt research tools
2. Filter to funds scoring >= 50
3. For remaining funds, estimate their investment team size
4. Final screen: include if score >= 70 OR team size <= 5

Claude chains three everyrow operations with a pandas filter step:

Tool: everyrow_rank (Stage 1: Score by research adoption)
├─ task: "Score funds 0-100 on likelihood to adopt research tools"
├─ field_name: "score"
→ 20 rows scored in 73s. Session: https://futuresearch.ai/sessions/680fb865-...

[Claude filters to score >= 50: 15 rows remain]

Tool: everyrow_rank (Stage 3: Estimate team size)
├─ task: "Estimate investment team size per fund"
├─ field_name: "team_size_estimate"
→ 15 rows scored in 131s. Session: https://futuresearch.ai/sessions/ab54d4c9-...

Tool: everyrow_screen (Stage 4: Final inclusion)
├─ task: "Include if score >= 70 OR team <= 5"
→ 14 of 15 pass in 49s. Session: https://futuresearch.ai/sessions/5f18a461-...

14 of 20 funds qualified. The one excluded fund (Fixed Income Plus, score 55, team 12) fell below the score threshold and had too large a team.

Fund                 Score  Team Size  Qualified
Tiny Ventures GP     85     1          Yes
Boutique Micro Fund  92     2          Yes
Nano Cap Hunters     95     4          Yes
Deep Dive Capital    95     5          Yes
Activist Value Fund  95     12         Yes
Fixed Income Plus    55     12         No
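The final screen itself runs as an LLM operation, but its compound rule is deterministic, so it can be restated in pandas to see why one large-team fund passes and the other fails. A sketch using two rows from the table above:

```python
import pandas as pd

# Two rows from the results table: both have 12-person teams,
# but only one clears the score branch of the rule.
funds = pd.DataFrame({
    "fund": ["Activist Value Fund", "Fixed Income Plus"],
    "score": [95, 55],
    "team_size": [12, 12],
})

# Final screen: include if score >= 70 OR team size <= 5
qualified = funds[(funds["score"] >= 70) | (funds["team_size"] <= 5)]
# Activist Value Fund passes on score alone; Fixed Income Plus fails both.
```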

The everyrow SDK chains multiple operations in a single session. This notebook demonstrates a three-stage lead qualification pipeline using rank(), pandas filtering, and screen().

Metric           Value
Input funds      20
Final qualified  14
pip install everyrow
export EVERYROW_API_KEY=your_key_here  # Get one at futuresearch.ai/api-key
import asyncio
import pandas as pd
from pydantic import BaseModel, Field
from everyrow import create_session
from everyrow.ops import rank, screen

# Load the 20 investment funds (example path; any CSV with fund rows works)
funds_df = pd.read_csv("funds.csv")

class InclusionResult(BaseModel):
    passes: bool = Field(description="Include if score >= 70 OR team_size <= 5")

async def main():
    async with create_session(name="Multi-Stage Lead Screening") as session:
        # Stage 1: Score by research tool adoption
        scored = await rank(
            session=session,
            task="Score funds 0-100 on likelihood to adopt research tools",
            input=funds_df,
            field_name="score",
        )

        # Stage 2: Filter by threshold
        filtered = scored.data[scored.data["score"] >= 50].copy()

        # Stage 3: Research team sizes
        with_teams = await rank(
            session=session,
            task="Estimate investment team size per fund",
            input=filtered,
            field_name="team_size_estimate",
        )

        # Stage 4: Final screening
        final = await screen(
            session=session,
            task="Include if score >= 70 OR team size <= 5",
            input=with_teams.data,
            response_model=InclusionResult,
        )
        return final.data

results = asyncio.run(main())

The pipeline runs three everyrow operations in a single session — scoring, team size estimation, and inclusion screening — with a pandas filter between the first two. 14 of 20 funds qualified. The single exclusion (Fixed Income Plus) had a moderate score (55) and a large team (12), failing both branches of the final rule.