by futuresearch

Score Cold Leads via Web Research

Score 15 investment firms on their likelihood to purchase third-party research tools. Each firm is researched via the web to assess its actual strategy and team composition, then scored on a 0-100 scale.

| Metric | Value |
| --- | --- |
| Rows processed | 15 |
| Cost | $0.30 |
| Time | 149 seconds |
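From these totals, the per-row economics are easy to back out (a rough calculation from the figures above; note that wall-clock time is shared across parallel agents, so per-row time is not serial latency):

```python
rows = 15
total_cost = 0.30   # USD, from the metrics above
total_time = 149    # seconds, wall clock

cost_per_row = total_cost / rows   # 0.02 USD per firm
time_per_row = total_time / rows   # ~9.9 s of wall time per firm
print(f"${cost_per_row:.2f}/row, {time_per_row:.1f}s/row")
```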

Claude Code

Add FutureSearch to Claude Code if you haven't already:

claude mcp add futuresearch --scope project --transport http https://mcp.futuresearch.ai/mcp

Download investment_firms.csv. Tell Claude:

Score each investment firm from 0-100 on their likelihood to purchase
third-party research tools. High scores for fundamental/activist/short-sellers.
Low scores for passive index funds and pure quant.

Claude calls FutureSearch's rank MCP tool. Each agent researches the firm's actual strategy:

Tool: futuresearch_rank
├─ task: "Score each investment firm on likelihood to purchase research tools..."
├─ input_csv: "/Users/you/investment_firms.csv"
├─ field_name: "score"
├─ field_type: "int"
└─ ascending_order: false

→ Submitted: 15 rows for ranking.
  Session: https://futuresearch.ai/sessions/f759d5fb-822d-4bb0-b978-85b36909b919

...

Tool: futuresearch_results
→ Saved 15 rows to /Users/you/scored_firms.csv

View the session.

Claude.ai

Add the FutureSearch connector if you haven't already. Then upload investment_firms.csv and ask Claude:

Score each investment firm from 0-100 on their likelihood to purchase third-party research tools. High scores for fundamental/activist/short-sellers. Low scores for passive index funds and pure quant.

Results take about 2.5 minutes.

Web App

Go to futuresearch.ai/app, upload investment_firms.csv, and enter:

Score each investment firm from 0-100 on their likelihood to purchase third-party research tools. High scores for fundamental/activist/short-sellers. Low scores for passive index funds and pure quant.

Python SDK

The FutureSearch SDK's rank() performs web research on each row to score firms by a criterion that requires external knowledge.

pip install futuresearch
export FUTURESEARCH_API_KEY=your_key_here  # Get one at futuresearch.ai/app/api-key
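A quick preflight check can save a failed run; this assumes the SDK reads FUTURESEARCH_API_KEY from the environment, as the export above suggests:

```python
import os

# Preflight: confirm the key is set before running the script below.
# (Assumption: the SDK picks up FUTURESEARCH_API_KEY from the environment.)
key_present = bool(os.environ.get("FUTURESEARCH_API_KEY"))
if not key_present:
    print("Set FUTURESEARCH_API_KEY first (see futuresearch.ai/app/api-key).")
```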
import asyncio
import pandas as pd
from futuresearch import create_session
from futuresearch.ops import rank

firms_df = pd.read_csv("investment_firms.csv")

async def main():
    async with create_session(name="Research Tool Adoption Scoring") as session:
        result = await rank(
            session=session,
            task="""
                Score each investment firm from 0-100 on likelihood to purchase
                third-party research tools. High for fundamental/activist/short-sellers.
                Low for passive index funds and pure quant.
            """,
            input=firms_df,
            field_name="score",
        )
        return result.data

results = asyncio.run(main())
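Once rank() returns, the scored rows are a regular pandas DataFrame, so ordinary pandas works for triage. A minimal sketch (the three-row frame stands in for the real 15-row output, and the 80-point cutoff is an arbitrary illustrative threshold):

```python
import pandas as pd

# Illustrative stand-in for result.data: firms plus the "score"
# column that rank() writes back (the real output has 15 rows).
scored = pd.DataFrame({
    "firm": ["Muddy Waters Research", "AQR Capital", "Vanguard Index Funds"],
    "score": [95, 20, 0],
})

# Sort highest-scoring leads first and keep only strong candidates.
hot_leads = (
    scored.sort_values("score", ascending=False)
          .query("score >= 80")
          .reset_index(drop=True)
)
print(hot_leads)
```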

Results

| Firm | Score | Strategy |
| --- | --- | --- |
| Muddy Waters Research | 95 | Short-seller, research-driven |
| ValueAct Capital | 95 | Activist, deep research |
| Elliott Management | 95 | Activist, multi-strategy |
| Baupost Group | 92 | Value, fundamental research |
| Third Point | 92 | Activist, event-driven |
| Lone Pine Capital | 90 | Long/short equity |
| Pershing Square | 90 | Concentrated activist |
| … | … | … |
| AQR Capital | 20 | Systematic/quant |
| Two Sigma | 15 | Quantitative |
| Renaissance Technologies | 20 | Pure quant |
| Bridgewater Associates | 15 | Systematic macro |
| Vanguard Index Funds | 0 | Passive index |

Activist and fundamental research firms score highest. Pure quant and passive index funds score lowest. The web research verified each firm's actual strategy and team composition.