How do I connect TinyFish to Claude/Cursor via MCP so our agent can browse, click, and extract as a tool?

Quick Answer: You connect TinyFish to Claude or Cursor via MCP by running the TinyFish MCP server locally (or in your infra), registering it in your Claude/Cursor config, and exposing a run_agent-style tool that takes a goal + target sites and returns structured results from live web execution.

Frequently Asked Questions

How does TinyFish actually work as an MCP tool with Claude or Cursor?

Short Answer: TinyFish runs as a Model Context Protocol (MCP) server that exposes “web agent” tools to Claude/Cursor so your AI agent can navigate, click, authenticate, and extract live web data on demand.

Expanded Explanation:
Instead of giving Claude/Cursor static search or scraped HTML, you plug in TinyFish as an MCP tool that executes real workflows: logins, forms, quote flows, checkouts, and portal navigation. From Claude’s or Cursor’s perspective, it’s just calling a tool; under the hood, TinyFish dispatches enterprise web agents that run in parallel across sites, handle CAPTCHAs and bot detection, and then return structured JSON outputs.

The pattern is simple: your AI describes the goal (“get current quote from carrier X for profile Y,” “fetch restaurant fees on Z platform,” “check stock for these SKUs”), TinyFish agents execute that workflow live, and Claude/Cursor uses the returned structured data to reason, compare, or generate responses. No browsers or proxies on your side. No fragile Playwright/Selenium stacks to babysit.

Key Takeaways:

  • TinyFish appears to Claude/Cursor as a standard MCP tool, but executes full web workflows behind the scenes.
  • You get live, authenticated, structured outputs instead of cached pages or brittle scraping scripts.
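To make the pattern concrete, here is a sketch of what a tool invocation and its structured result might look like. The tool name (`run_tinyfish_agent`), parameter names, and result fields are illustrative assumptions, not TinyFish's documented schema:

```python
# Illustrative only: the tool name, argument names, and result fields below
# are assumptions about the shape of a TinyFish MCP tool, not a documented API.

# What Claude/Cursor would send when it invokes the MCP tool:
tool_call = {
    "tool": "run_tinyfish_agent",
    "arguments": {
        "goal": "Get a current auto insurance quote for profile Y",
        "target_sites": ["https://portal.example-carrier.com"],
        "constraints": {"timeout_seconds": 300},
    },
}

# What a structured result might look like coming back:
tool_result = {
    "run_id": "run_abc123",
    "status": "completed",
    "data": {"carrier": "Example Carrier", "monthly_premium_usd": 142.50},
}

print(tool_result["status"])
```

The point is the contract: the model hands over a goal plus targets, and gets back decision-ready JSON rather than raw HTML to parse.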

What’s the process to connect TinyFish to Claude or Cursor via MCP?

Short Answer: Install and run the TinyFish MCP server, register it in your Claude Desktop or Cursor MCP config, then expose a run_agent tool that Claude/Cursor can call with goals and parameters.

Expanded Explanation:
From an operator’s point of view, you’re doing three things: (1) standing up an MCP server that speaks TinyFish’s API, (2) wiring that server into your AI environment (Claude Desktop or Cursor’s MCP configuration), and (3) defining one or more tools that map directly to TinyFish agents.

Underneath, the MCP server handles authentication to TinyFish (API key or service token), translates tool calls into agent runs (goal + target URLs + constraints), and streams back results. Once configured, your prompts can say “use the TinyFish tool to browse this portal and extract X,” and the model will autonomously call the MCP tool, wait for execution, then fold the structured outputs into its reasoning.

Steps:

  1. Set up TinyFish access

    • Get a TinyFish account and API key.
    • Confirm you can run a basic agent (via API or Playground) that hits your target sites.
  2. Run the TinyFish MCP server

    • Implement or use a thin MCP wrapper that exposes tools like run_tinyfish_agent.
    • Configure it with your TinyFish API key and environment (prod/sandbox).
    • Run it locally or in your infra (Claude Desktop typically launches MCP servers as local processes over stdio; a remote server can expose an HTTP/SSE endpoint instead).
  3. Register the MCP server with Claude/Cursor

    • In Claude Desktop, add a new MCP server entry in claude_desktop_config.json.
    • In Cursor, add the TinyFish MCP server to your .cursor/mcp.json or MCP settings.
    • Restart the client and verify that the TinyFish tool is listed and callable.
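As a rough sketch of step 3, both Claude Desktop (`claude_desktop_config.json`) and Cursor (`.cursor/mcp.json`) register MCP servers under a top-level `mcpServers` key. The server name `tinyfish`, the launch command, and the environment variable names here are assumptions; adjust them to match the actual wrapper you run:

```json
{
  "mcpServers": {
    "tinyfish": {
      "command": "node",
      "args": ["/path/to/tinyfish-mcp-server/index.js"],
      "env": {
        "TINYFISH_API_KEY": "<your-api-key>",
        "TINYFISH_ENV": "prod"
      }
    }
  }
}
```

After restarting the client, the `tinyfish` server and its tools should appear in the tool list; if not, check the client's MCP logs for launch or auth errors.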

What’s the difference between using TinyFish via MCP vs using generic browser tools?

Short Answer: Generic browser tools click pixels and return HTML; TinyFish via MCP executes production-grade web workflows (logins, forms, CAPTCHAs) in parallel and returns structured, decision-ready data.

Expanded Explanation:
Most built-in “browse the web” tools in AI environments are either search wrappers or thin headless-browsing layers. They’re okay for static pages and public docs. They break quickly when you introduce auth flows, step-heavy forms, anti-bot, or the need to hit hundreds of portals at once.

TinyFish is built for the other 80% of real ops work: authenticated portals, dynamic apps, and multi-step tasks that only produce a result at the end of the workflow. Via MCP, you’re not just loading a page; you’re asking an enterprise Web Agent to complete the workflow and hand back the answer in structured JSON. Think “53-step insurance quote” or “receipt-level checkout totals across 20+ countries,” not “grab me this blog post.”

Comparison Snapshot:

  • Generic browser tools:

    • Simulate simple browsing.
    • Limited auth support; fragile to layout changes.
    • Mostly return HTML/DOM or unstructured text.
  • TinyFish via MCP:

    • Navigate, authenticate, fill forms, click, transact.
    • Handles CAPTCHAs and bot defenses at scale.
    • Returns clean, structured outputs (JSON) built for downstream systems.
  • Best for:

    • High-value, high-friction workflows: carrier portals, B2B SaaS dashboards, marketplaces, pricing/availability checks, quoting flows, anything behind login and forms.
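"Decision-ready" means the model (or plain code) can reason over results directly. A minimal sketch, assuming hypothetical result fields like `carrier` and `monthly_premium_usd` in the structured output:

```python
# Illustrative: field names ("carrier", "monthly_premium_usd") are assumptions
# about what a structured TinyFish result might contain, not a real schema.
quotes = [
    {"carrier": "Carrier A", "monthly_premium_usd": 156.00},
    {"carrier": "Carrier B", "monthly_premium_usd": 142.50},
    {"carrier": "Carrier C", "monthly_premium_usd": 149.25},
]

# With structured JSON, comparison is one line; with raw HTML it would be
# a parsing project per site.
best = min(quotes, key=lambda q: q["monthly_premium_usd"])
print(f"Cheapest: {best['carrier']} at ${best['monthly_premium_usd']:.2f}/mo")
```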

What do I need in place to implement TinyFish + Claude/Cursor via MCP in production?

Short Answer: You need a TinyFish account, an MCP server process wired to the TinyFish API, and basic config in Claude/Cursor; from there, you can design agent “recipes” per workflow and run them unattended.

Expanded Explanation:
Think of this as wiring a new infrastructure primitive into your AI stack. On the TinyFish side, you’ll define one or more agent templates that know how to operate on your target sites (e.g., “US auto insurance portals,” “LATAM food delivery marketplaces,” “EMEA hotel inventory portals”). On the MCP side, you’ll expose tool definitions that map to those templates, with parameters that your prompts or calling code can fill in.

Operationally, you get enterprise controls: 99.99% uptime, observability via Workbench (screenshots, run history), AES-256 at rest and TLS 1.3 in transit, SSO, and audit trails. This means Claude/Cursor can call TinyFish unattended without the risk that a random layout change silently corrupts your data. If a site shifts, TinyFish agents adapt; if they can’t, you have traceable failures with screenshots.

What You Need:

  • TinyFish setup:

    • Account + API key.
    • At least one proven agent workflow for your target sites (built via API or TinyFish Workbench).
  • MCP integration pieces:

    • A TinyFish MCP server (Node, Python, or your stack of choice) that:
      • Authenticates to TinyFish.
      • Exposes tools like run_tinyfish_agent, get_agent_run_status, get_agent_result.
    • Claude Desktop or Cursor MCP config pointing to that server, with tools visible to the model.
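The dispatch logic such a server implements can be sketched in a few lines. Everything below is a hypothetical sketch: the client methods (`start_run`, `get_status`, `get_result`) stand in for whatever the real TinyFish API exposes, and the stub client exists only so the example runs:

```python
# Minimal sketch of the run/poll/result loop an MCP wrapper would implement
# behind tools like run_tinyfish_agent / get_agent_run_status / get_agent_result.
# The client interface here is an assumption; swap in real TinyFish API calls.
import time
from typing import Any, Dict, List


def run_tinyfish_agent(client: Any, goal: str, target_sites: List[str],
                       poll_interval: float = 2.0,
                       timeout: float = 300.0) -> Dict[str, Any]:
    """Start an agent run, poll until it finishes, return structured output."""
    run_id = client.start_run(goal=goal, target_sites=target_sites)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = client.get_status(run_id)
        if status in ("completed", "failed"):
            data = client.get_result(run_id) if status == "completed" else None
            return {"run_id": run_id, "status": status, "data": data}
        time.sleep(poll_interval)
    return {"run_id": run_id, "status": "timeout", "data": None}


# Stub client so the sketch is runnable without real TinyFish access:
class StubClient:
    def start_run(self, goal, target_sites):
        return "run_001"

    def get_status(self, run_id):
        return "completed"

    def get_result(self, run_id):
        return {"sku": "ABC-1", "in_stock": True}


result = run_tinyfish_agent(StubClient(), "check stock",
                            ["https://example.com"], poll_interval=0.0)
print(result["status"])
```

In a real wrapper, this function body sits behind the MCP tool handler, and the returned dict is what the model receives as the tool result.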

How should we think strategically about TinyFish + MCP vs building our own browser stack?

Short Answer: Treat TinyFish + MCP as a dedicated “web execution layer” for Claude/Cursor, so your AI agents operate on live, authenticated data without you owning browsers, proxies, and anti-bot logic.

Expanded Explanation:
In practice, you have three choices when you want Claude/Cursor to “act on the web”:

  1. Manual ops behind the model. Humans click, copy/paste, and feed results into the AI. This is slow (3–5 days for real ops), expensive, and error-prone.

  2. DIY browser automation. You stand up Playwright/Selenium + residential proxies + CAPTCHA solving + your own orchestration. It works until scale and site churn hit, then your team is on weekly “why did the script break again?” duty.

  3. TinyFish via MCP. You let your AI call into a platform whose whole job is to navigate/authenticate/extract/transact at production speed and cost. You keep control over workflows and observability, but outsource the infra burden.

Strategically, the third path is the only one that scales. It gives you concurrency (1 → 1,000 web agents), live outputs instead of cached search, and unit economics that make sense when your AI starts running thousands of operations per day. The MCP integration is just the wiring: it lets Claude/Cursor treat this infrastructure as a first-class tool.

Why It Matters:

  • Better decisions, fewer surprises: Your agents act on “web truth” as it exists right now, behind logins and forms—not on cached or scraped data that went stale hours ago.
  • Operational leverage: Your team stops maintaining flaky browser stacks and focuses on defining workflows and guardrails. TinyFish handles the execution, scale, and reliability behind a simple MCP tool interface.

Quick Recap

Connecting TinyFish to Claude or Cursor via MCP lets your AI agents do more than browse—they can execute real, multi-step workflows across authenticated, dynamic websites and return structured, production-ready data. You stand up a TinyFish MCP server, register it in your Claude/Cursor configuration, and expose tools that map directly onto TinyFish Web Agents. From there, your prompts can say “use the TinyFish tool to log in, click through, and extract X,” and the model will orchestrate live web operations instead of reading stale search results.

Next Step

Get Started