Get the real user interface responses from any major AI provider, at any scale and in any format you need.
curl "https://api.scrapellm.com/scrapers/chatgpt" \
-H "X-API-Key: your_api_key" \
-G \
--data-urlencode "prompt=What brands do marketers recommend?" \
--data-urlencode "country=US"
{
"scraper": "chatgpt",
"status": "done",
"result": "Marketers commonly recommend ChatGPT...",
"result_markdown": "**Marketers** commonly recommend...",
"links": [{"text": "ChatGPT", "url": "https://chatgpt.com"}],
"llm_model": "gpt-4o",
"credits_used": 3,
"elapsed_ms": 4823.5
}
Supporting all major AI and search providers. Ready for global scale across multiple regions.
Direct API responses look nothing like the real user interface. We capture exactly what users see: sources, citations, shopping cards, and all.
Read the docs →
Direct provider APIs strip the citation layer, the most valuable signal for SEO intelligence and brand monitoring.
Each LLM provider requires a separate API integration, authentication flow, and parsing logic. One API handles them all.
Token-based pricing from direct providers is unpredictable and expensive at scale. Our flat credit model is up to 12× cheaper.
See full cost analysis →
Token-based pricing varies wildly by model, prompt length, and provider. Budget with confidence on our credit system.
AI model responses differ significantly by region. ScrapeLLM lets you request from any geography in a single API call.
Ask any AI the same question twice and you'll get a different answer. Tracking a single "ranking position" in an AI tool isn't a metric — it's a coin flip.
Single-run snapshots are noise. ScrapeLLM lets you run the same prompt across every major AI at any volume — cheaply enough to gather the statistical sample that actually means something.
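The sampling idea above can be sketched in a few lines. Here the API call is replaced by a simulated fetcher so the aggregation logic is self-contained; in practice each run would hit the same endpoint shown in the code examples on this page, and the run count and brand list are illustrative:

```python
import collections

N_RUNS = 20  # illustrative sample size; real monitoring may use more
BRANDS = ["ChatGPT", "Perplexity", "Copilot"]

def fetch_response(run: int) -> str:
    # Stand-in for a live call such as:
    #   requests.get("https://api.scrapellm.com/scrapers/chatgpt", ...)
    # Simulated answers alternate between runs, as real LLM output varies.
    answers = [
        "Marketers commonly recommend ChatGPT and Perplexity.",
        "Top picks include Perplexity and Copilot.",
    ]
    return answers[run % len(answers)]

# Count how many runs mention each brand.
mentions = collections.Counter()
for run in range(N_RUNS):
    text = fetch_response(run)
    for brand in BRANDS:
        if brand in text:
            mentions[brand] += 1

# Share of runs mentioning each brand: a stable metric,
# unlike a single-run "ranking position".
for brand, count in mentions.most_common():
    print(f"{brand}: {count / N_RUNS:.0%}")
```

Aggregating mention rates over many runs turns noisy single answers into a measurable trend.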
Easily extract markdown, text, or HTML. We parse sources, citations, query fan-out, shopping cards, and more.
import requests
response = requests.get(
"https://api.scrapellm.com/scrapers/chatgpt",
headers={"X-API-Key": "your_api_key"},
params={
"prompt": "What brands do marketers recommend?",
"country": "US",
}
)
print(response.json())
const params = new URLSearchParams({
prompt: 'What brands do marketers recommend?',
country: 'US',
});
const response = await fetch(
`https://api.scrapellm.com/scrapers/chatgpt?${params}`,
{ headers: { 'X-API-Key': 'your_api_key' } }
);
const data = await response.json();
console.log(data);
curl "https://api.scrapellm.com/scrapers/chatgpt" \
-H "X-API-Key: your_api_key" \
-G \
--data-urlencode "prompt=What brands do marketers recommend?" \
--data-urlencode "country=US"
{
"scraper": "chatgpt",
"status": "done",
"job_id": "job_abc123",
"prompt": "What brands do marketers recommend?",
"country": "US",
"result": "Marketers commonly recommend ChatGPT, Perplexity...",
"result_markdown": "**Marketers** commonly recommend...",
"links": [
{
"text": "ChatGPT",
"url": "https://chatgpt.com"
}
],
"llm_model": "gpt-4o",
"credits_used": 3,
"elapsed_ms": 4823.5,
"cached": false
}
Start free. Scale as you grow. Predictable credit-based costs.
Drag the slider to find the plan that fits your usage.
Response times depend on the provider. Most requests complete in 5–30 seconds. ChatGPT with query fan-out may take up to 45 seconds. You can poll for results or use our webhook callback for async workflows.
We extract the full response including: plain text, markdown, raw HTML, cited sources and URLs, search queries used (query fan-out), shopping cards, entities, and image references — structured as clean JSON.
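As a sketch of consuming that structured JSON, the snippet below pulls the answer text and cited URLs out of a payload shaped like the sample response above; the payload here is inlined as a stand-in for a live API call:

```python
import json

# Sample payload matching the response shape shown in the code examples.
payload = json.loads("""
{
  "scraper": "chatgpt",
  "status": "done",
  "result": "Marketers commonly recommend ChatGPT, Perplexity...",
  "result_markdown": "**Marketers** commonly recommend...",
  "links": [{"text": "ChatGPT", "url": "https://chatgpt.com"}],
  "llm_model": "gpt-4o",
  "credits_used": 3
}
""")

# Pull out the pieces most monitoring workflows need.
answer_text = payload["result"]
cited_urls = [link["url"] for link in payload["links"]]

print(answer_text)
print(cited_urls)
```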
Currently: ChatGPT, Perplexity, Microsoft Copilot, Google Gemini, Google AI Mode, Google AI Overview, Grok, and Google Search. Meta AI is coming soon.
Yes. All requests can be submitted asynchronously. You receive a job ID immediately and can poll the status endpoint or configure a webhook to receive results when ready.
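A polling loop for that workflow might look like the sketch below. The `fetch_status` callable stands in for a GET against the status endpoint (the exact path is not shown on this page, so it is left abstract); the `"status"` and `"result"` field names follow the sample response above:

```python
import itertools
import time

def poll_job(fetch_status, interval_s=5.0, timeout_s=60.0, sleep=time.sleep):
    """Poll until the job reports "done", then return the payload.

    fetch_status stands in for an authenticated GET against the
    job-status endpoint; interval and timeout are illustrative.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        payload = fetch_status()
        if payload["status"] == "done":
            return payload
        sleep(interval_s)
    raise TimeoutError("job did not finish in time")

# Simulated status endpoint: two in-progress responses, then completion.
responses = itertools.chain(
    [{"status": "running"}] * 2,
    [{"status": "done", "result": "Marketers commonly recommend..."}],
)
result = poll_job(lambda: next(responses), sleep=lambda _: None)
print(result["result"])
```

Injecting the fetcher and sleep function keeps the loop testable; a webhook callback avoids polling entirely for long-running jobs.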
Yes. Each request accepts a country parameter. We route the request through infrastructure in that region so AI model responses reflect the local context.
No. Credits reset at the start of each billing cycle. If you consistently need more credits, consider upgrading your plan or contacting us for a custom arrangement.