Rank

rank takes a DataFrame and a natural-language scoring criterion, dispatches web research agents to compute a score for each row, and returns the DataFrame sorted by that score. The sort key does not need to exist in your data. Agents derive it at runtime by searching the web, reading pages, and reasoning over what they find.

Examples

from everyrow.ops import rank
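# rank is async: call it from an async function or a notebook with top-level await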

result = await rank(
    task="Score by likelihood to need data integration solutions",
    input=leads_dataframe,
    field_name="integration_need_score",
    ascending_order=False,  # highest first
)
print(result.data.head())

The task can be as specific as you want. You can describe the metric in detail, list which sources to use, and explain how to resolve ambiguities.

result = await rank(
    task="""
        Score 0-100 by likelihood to adopt research tools in the next 12 months.

        High scores: teams actively publishing, hiring researchers, or with
        recent funding for R&D. Low scores: pure trading shops, firms with
        no public research output.

        Consult the company's website, job postings, and LinkedIn profile for information.
    """,
    input=investment_firms,
    field_name="research_adoption_score",
    ascending_order=False,  # highest first
)
print(result.data.head())

Structured output

If you want more than just a number, pass a Pydantic model.

Note that you don't need to specify fields for reasoning, explanations, or sources. That information is included automatically.

from pydantic import BaseModel, Field

class AcquisitionScore(BaseModel):
    fit_score: float = Field(description="0-100, strategic alignment with our business")
    annual_revenue_usd: int = Field(description="Their estimated annual revenue in USD")

result = await rank(
    task="Score acquisition targets by product-market fit and revenue quality",
    input=potential_acquisitions,
    field_name="fit_score",
    response_model=AcquisitionScore,
    ascending_order=False,  # highest first
)
print(result.data.head())

Now every row has both fit_score and annual_revenue_usd fields, each of which includes its own explanation.
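
From there you can work with the result like any other DataFrame. For example (a sketch, assuming result.data is a regular pandas DataFrame with one column per model field):

# Re-sort by the researched revenue figure and inspect both structured fields.
top_by_revenue = result.data.sort_values("annual_revenue_usd", ascending=False)
print(top_by_revenue[["fit_score", "annual_revenue_usd"]].head())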

When specifying a response model, make sure it defines a field whose name matches field_name; otherwise you'll get an error. Also, the field_type parameter is ignored when you pass a response model.

Parameters

Name             Type        Description
task             str         Natural-language instructions for the agent describing how to find and score your metric
session          Session     Optional; auto-created if omitted
input            DataFrame   Your data
field_name       str         Column name for the metric
field_type       str         Type of the field (default: "float"); ignored when response_model is set
response_model   BaseModel   Optional Pydantic model for multiple output fields
ascending_order  bool        If True (the default), sort lowest values first
preview          bool        If True, process only a few rows
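
For example, a quick way to vet a task before a full run is to enable preview (a sketch, reusing the async context and the leads_dataframe placeholder from the examples above):

# Dry-run on a handful of rows before committing to the full dataset.
preview_result = await rank(
    task="Score by likelihood to need data integration solutions",
    input=leads_dataframe,
    field_name="integration_need_score",
    preview=True,  # process only a few rows
)
# With the default ascending_order=True, lowest scores come first.
print(preview_result.data)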

Via MCP

MCP tool: everyrow_rank

Parameter    Type     Description
csv_path     string   Path to the input CSV file
task         string   How to score each row
field_name   string   Column name for the score
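
For illustration, the tool arguments might look like this (the exact invocation depends on your MCP client; the file path and values here are hypothetical):

{
    "csv_path": "leads.csv",
    "task": "Score by likelihood to need data integration solutions",
    "field_name": "integration_need_score"
}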

Related docs

Guides

  • Sort a Dataset Using Web Data

Case Studies

  • Score Leads from Fragmented Data
  • Score Leads Without CRM History
  • Research and Rank Permit Times

Blog posts

  • Ranking by Data Fragmentation Risk
  • Rank Leads Like an Analyst