300M+ monthly API calls

The scraper for
LLM Intelligence

Get the real user-interface responses from

ChatGPT Perplexity Copilot Gemini AI Overview Grok

at any scale, in any format you need.

Try 500 credits free. No credit card required.
Live Example Request
Request (GET)
curl "https://api.scrapellm.com/scrapers/chatgpt" \
  -H "X-API-Key: your_api_key" \
  -G \
  --data-urlencode "prompt=What brands do marketers recommend?" \
  --data-urlencode "country=US"
Response
{
  "scraper": "chatgpt",
  "status": "done",
  "result": "Marketers commonly recommend ChatGPT...",
  "result_markdown": "**Marketers** commonly recommend...",
  "links": [{ "text": "ChatGPT", "url": "https://chatgpt.com" }],
  "llm_model": "gpt-4o",
  "credits_used": 3,
  "elapsed_ms": 4823.5
}

Structured data from any LLM,
at any scale

Supporting all major AI and search providers. Ready for global scale across multiple regions.

ChatGPT Live
Perplexity Live
Copilot Live
Gemini Live
AI Mode Live
AI Overview Live
Grok Live
Google Search Live
Meta AI Soon

Direct LLM APIs have hidden
costs and fragile scaling

Real UI Responses

Direct API responses look nothing like the real user interface. We capture exactly what users see — sources, citations, shopping cards, and all.

Read the docs →

No Sources / Citations

Direct provider APIs strip the citation layer — the most valuable signal for SEO intelligence and brand monitoring.


Multiple Integrations

Each LLM provider requires a separate API integration, authentication flow, and parsing logic. One API handles them all.


Up to 12× Cost Savings

Token-based pricing from direct providers is unpredictable and expensive at scale. Our flat credit model is up to 12× cheaper.

See full cost analysis →

Unpredictable Costs

Token-based pricing varies wildly by model, prompt length, and provider. Budget with confidence on our credit system.


No Multi-Region Support

AI model responses differ significantly by region. ScrapeLLM lets you request from any geography in a single API call.

AI visibility tracking is fundamentally broken

Ask any AI the same question twice and you'll get a different answer. Tracking a single "ranking position" in an AI tool isn't a metric — it's a coin flip.

<1%
Chance of the same list twice
Ask any AI for brand recommendations 100 times and you'll get nearly 100 unique lists. Same intent, different answers, every time.
<0.1%
Same list, same order
Ranking positions in AI are effectively random. Selling or buying "rank #1 in ChatGPT" as a KPI is not a meaningful metric.
60–100×
Runs needed for signal
The only meaningful metric is visibility % — how often your brand appears across dozens or hundreds of prompt runs. That requires scale.
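The sampling math behind that 60–100× figure can be sketched with a standard binomial confidence interval. This is a rough model, assuming independent runs, not a description of ScrapeLLM internals:

```python
import math

def visibility_ci(p, n, z=1.96):
    """95% confidence interval for a visibility proportion p measured over n runs."""
    se = math.sqrt(p * (1 - p) / n)
    return (p - z * se, p + z * se)

# A brand seen in 30% of runs: with n=10 the interval is roughly +/-28 points
# wide; with n=100 it narrows to roughly +/-9 points -- tight enough to
# actually compare competitors' visibility.
lo10, hi10 = visibility_ci(0.30, 10)
lo100, hi100 = visibility_ci(0.30, 100)
```

Ten runs can't distinguish a 30% brand from a 50% brand; a hundred runs can, which is why visibility tracking only works at volume.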
The ScrapeLLM approach

Run at scale. Measure visibility %.

Single-run snapshots are noise. ScrapeLLM lets you run the same prompt across every major AI at any volume — cheaply enough to gather the statistical sample that actually means something.

Repeat any prompt 100× across all AI models
Structured JSON — brand mentions, citations, sources
Flat credit pricing — affordable at the volume that matters
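In practice, visibility % is just a hit rate over repeated runs. A minimal sketch of the counting logic (the API loop is left as commented pseudocode mirroring the request examples on this page):

```python
def visibility_pct(responses, brand):
    """Share of plain-text responses that mention a brand, as a 0-100 percentage."""
    hits = sum(1 for text in responses if brand.lower() in text.lower())
    return 100.0 * hits / len(responses)

# Gathering the sample would look like (pseudocode against the API above):
# responses = []
# for _ in range(100):
#     r = requests.get("https://api.scrapellm.com/scrapers/chatgpt",
#                      headers={"X-API-Key": "your_api_key"},
#                      params={"prompt": "What brands do marketers recommend?",
#                              "country": "US"})
#     responses.append(r.json()["result"])
# print(visibility_pct(responses, "YourBrand"))
```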

Simple integration, powerful output

Easily extract markdown, text, or HTML. We parse sources, citations, query fan-out, shopping cards, and more.

Python

import requests

response = requests.get(
  "https://api.scrapellm.com/scrapers/chatgpt",
  headers={"X-API-Key": "your_api_key"},
  params={
    "prompt": "What brands do marketers recommend?",
    "country": "US",
  }
)

print(response.json())
JavaScript

const params = new URLSearchParams({
  prompt: 'What brands do marketers recommend?',
  country: 'US',
});

const response = await fetch(
  `https://api.scrapellm.com/scrapers/chatgpt?${params}`,
  { headers: { 'X-API-Key': 'your_api_key' } }
);
const data = await response.json();
console.log(data);
curl

curl "https://api.scrapellm.com/scrapers/chatgpt" \
  -H "X-API-Key: your_api_key" \
  -G \
  --data-urlencode "prompt=What brands do marketers recommend?" \
  --data-urlencode "country=US"
Response: 200 OK (application/json)
{
  "scraper": "chatgpt",
  "status": "done",
  "job_id": "job_abc123",
  "prompt": "What brands do marketers recommend?",
  "country": "US",
  "result": "Marketers commonly recommend ChatGPT, Perplexity...",
  "result_markdown": "**Marketers** commonly recommend...",
  "links": [
    {
      "text": "ChatGPT",
      "url": "https://chatgpt.com"
    }
  ],
  "llm_model": "gpt-4o",
  "credits_used": 3,
  "elapsed_ms": 4823.5,
  "cached": false
}
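The `links` array in the response makes citation tracking a one-liner. For example, counting how often each domain is cited across a batch of parsed responses (a sketch over the response shape documented above):

```python
from collections import Counter
from urllib.parse import urlparse

def cited_domains(responses):
    """Count cited domains across a list of parsed JSON response dicts."""
    counts = Counter()
    for resp in responses:
        for link in resp.get("links", []):
            counts[urlparse(link["url"]).netloc] += 1
    return counts

# Using the (abbreviated) example response from this page:
sample = {"links": [{"text": "ChatGPT", "url": "https://chatgpt.com"}]}
```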

Simple, transparent pricing

Start free. Scale as you grow. Predictable credit-based costs.


Calculate your monthly cost

Drag the slider to find the plan that fits your usage.

Example (5,000 credits per month):

Credits needed: 5,000
Best plan: Starter
Plan credits: 10,000
Overage cost: $0
Monthly total: $49

Starter plan covers your usage.
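The calculator's arithmetic is easy to reproduce. A sketch using the Starter figures shown above; the higher tiers in this table are hypothetical placeholders, not real ScrapeLLM pricing:

```python
# Plan table: (name, monthly price in USD, included credits).
# Only Starter ($49 / 10,000 credits) comes from this page; the other
# two rows are hypothetical placeholders for illustration.
PLANS = [
    ("Starter", 49, 10_000),
    ("Growth", 199, 50_000),   # hypothetical
    ("Scale", 699, 200_000),   # hypothetical
]

def best_plan(credits_needed):
    """Smallest plan whose included credits cover the monthly usage."""
    for name, price, included in PLANS:
        if credits_needed <= included:
            return name, price
    return PLANS[-1][0], PLANS[-1][1]  # largest plan; overage not modeled
```

At 5,000 credits per month this picks Starter at $49, matching the worked example above.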

Frequently asked questions

How long does a request take?

Response times depend on the provider. Most requests complete in 5–30 seconds. ChatGPT with query fan-out may take up to 45 seconds. You can poll for results or use our webhook callback for async workflows.

What data do you extract?

We extract the full response including: plain text, markdown, raw HTML, cited sources and URLs, search queries used (query fan-out), shopping cards, entities, and image references — structured as clean JSON.

Which providers do you support?

Currently: ChatGPT, Perplexity, Microsoft Copilot, Google Gemini, Google AI Mode, Google AI Overview, Grok, and Google Search. Meta AI is coming soon.

Can I run requests asynchronously?

Yes. All requests can be submitted asynchronously. You receive a job ID immediately and can poll the status endpoint or configure a webhook to receive results when ready.
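The async flow described above can be sketched as a small polling helper. Note that this page only shows that responses carry `job_id` and `status` fields; the status-endpoint URL itself is an assumption, so the network call is injected as a callable:

```python
import time

def poll_until_done(get_status, job_id, interval=2.0, timeout=60.0):
    """Poll a status callable until the job reports status 'done'.

    `get_status` is any callable job_id -> parsed JSON dict. In real use it
    would GET an assumed status endpoint (e.g. /jobs/{job_id}) with your
    X-API-Key header -- check the docs for the actual URL.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = get_status(job_id)
        if job.get("status") == "done":
            return job
        if job.get("status") == "error":
            raise RuntimeError(f"job {job_id} failed")
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} not done after {timeout}s")
```

Injecting `get_status` keeps the retry logic testable without network access; a webhook callback avoids polling entirely for long-running jobs.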

Can I target responses to a specific country?

Yes. Each request accepts a country parameter. We route the request through infrastructure in that region so AI model responses reflect the local context.

Do unused credits roll over?

No. Credits reset at the start of each billing cycle. If you consistently need more credits, consider upgrading your plan or contacting us for a custom arrangement.

Start scraping LLMs at scale

Monitor your brand and your competitors, globally, with the best performance.

Try 500 credits free. No credit card required.