A balance of speed and stealth. Fast enough for efficiency while bypassing most website protections. Render JavaScript, extract data, and choose your proxy strategy.
The Scraping Browser API provisions a managed headless browser instance that renders JavaScript and bypasses bot protections. Send a URL, choose your proxy and output format, and receive clean extracted content along with a full-page screenshot.
| Property | Value |
|---|---|
| Endpoint | POST /v1/browser/scravity/ |
| Cost | 10 credits per successful request (datacenter proxy) · 35 credits (residential proxy) |
| Rate Limit | 2 requests/second, 20 requests/minute |
| Max Wait | 120 seconds per request |
| Auth | X-API-KEY header (key starts with scravity) |
| Response | JSON (application/json) |
| Interactive Docs | api.scravity.com/docs |
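The rate limits above (2 requests/second, 20 requests/minute) can be respected client-side with a simple throttle. The sketch below is illustrative, not part of any Scravity SDK; the clock is injectable so the logic can be tested without sleeping:

```python
import time
from collections import deque

class Throttle:
    """Client-side throttle for the documented limits:
    2 requests/second and 20 requests/minute."""

    def __init__(self, per_second=2, per_minute=20, clock=time.monotonic):
        self.per_second = per_second
        self.per_minute = per_minute
        self.clock = clock      # injectable for testing
        self.sent = deque()     # timestamps of recent requests

    def wait_time(self):
        """Seconds to wait before the next request is allowed."""
        now = self.clock()
        # Drop timestamps older than the one-minute window.
        while self.sent and now - self.sent[0] >= 60:
            self.sent.popleft()
        last_second = [t for t in self.sent if now - t < 1]
        wait = 0.0
        if len(last_second) >= self.per_second:
            wait = max(wait, 1 - (now - last_second[0]))
        if len(self.sent) >= self.per_minute:
            wait = max(wait, 60 - (now - self.sent[0]))
        return wait

    def record(self):
        """Call after each request is sent."""
        self.sent.append(self.clock())
```

Before each request, call `wait_time()`, sleep that long, then `record()`.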
Sign up for a free Scravity account to generate your API key. All new accounts start with demo credits.
1. **Get your API key.** Sign up and generate a key from your dashboard.
2. **Authenticate requests.** Add the X-API-KEY header to every request.
3. **Send a POST request.** Pass { "url": "https://example.com" } with optional proxy and format settings.
4. **Process results.** Receive rendered page content, a base64 screenshot, and processing logs.
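The four steps above look roughly like this in Python, using only the standard library. The endpoint, header name, and parameters come from this documentation; the `build_request` and `scrape` helpers are illustrative, not part of any SDK, and the key value is a placeholder:

```python
import json
import urllib.request

API_URL = "https://api.scravity.com/v1/browser/scravity/"

def build_request(api_key, url, proxy="datacenter", output_format="markdown"):
    """Assemble the headers and JSON body the endpoint expects."""
    headers = {"X-API-KEY": api_key, "Content-Type": "application/json"}
    payload = {"url": url, "proxy": proxy, "output_format": output_format}
    return headers, payload

def scrape(api_key, url, **options):
    headers, payload = build_request(api_key, url, **options)
    req = urllib.request.Request(
        API_URL, data=json.dumps(payload).encode(), headers=headers, method="POST"
    )
    # Max wait is 120 s, so allow a little headroom on the client timeout.
    with urllib.request.urlopen(req, timeout=130) as resp:
        return json.load(resp)  # success, content, screenshot, logs

if __name__ == "__main__":
    result = scrape("YOUR_API_KEY", "https://example.com")
    print(result["content"])
```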
Read the full documentation for the endpoint reference, parameters, and code examples in multiple languages.
Try the playground to test the API with your own URL before integrating.
Test the Scraping Browser endpoint with a real API request. Your key is sent directly to our API for a single request; it is never stored or logged on our side. Get one from your dashboard.
This sends a real POST request directly to https://api.scravity.com/v1/browser/scravity/ with your API key. The response includes the page content, a base64-encoded screenshot (previewed above), and processing logs. You need a Scravity account with available credits — get one here.
Complete reference for the Scraping Browser API endpoint.
Include your API key in the request header:
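A minimal illustration in Python, using the header name documented above (the key value is a placeholder):

```python
# X-API-KEY is the documented auth header; replace the value with your own key.
headers = {"X-API-KEY": "YOUR_API_KEY"}
```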
| Parameter | Type | Required | Description |
|---|---|---|---|
| url | string | Yes | The URL to scrape (must be a valid URI, max 2083 characters) |
| wait_seconds | integer | No | Seconds to wait before extracting content (0–120, default: 0) |
| proxy | string | No | Proxy type: "datacenter" (faster, default) or "residential" (stealthy, higher cost) |
| output_format | string | No | Output format: "raw", "text", or "markdown" (default: "markdown") |
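The constraints in the table (which otherwise surface as 422 responses) can be checked client-side before spending a request. This helper is a sketch based only on the documented rules; the URI check in particular is deliberately crude:

```python
def validate_params(url, wait_seconds=0, proxy="datacenter", output_format="markdown"):
    """Mirror the documented parameter constraints; return a list of problems."""
    problems = []
    if not url or len(url) > 2083 or "://" not in url:
        problems.append("url must be a valid URI of at most 2083 characters")
    if not (0 <= wait_seconds <= 120):
        problems.append("wait_seconds must be between 0 and 120")
    if proxy not in ("datacenter", "residential"):
        problems.append('proxy must be "datacenter" or "residential"')
    if output_format not in ("raw", "text", "markdown"):
        problems.append('output_format must be "raw", "text", or "markdown"')
    return problems
```

An empty list means the request should pass validation; anything else would likely come back as a 422.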
| Field | Type | Description |
|---|---|---|
| success | boolean | Whether the request was processed successfully (always true) |
| content | string | The scraped page content in the requested output format |
| screenshot | string | Base64-encoded PNG screenshot of the rendered page |
| logs | string[] | Processing log messages describing what the endpoint did |
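The screenshot field can be turned back into an image file with the standard library. The response below is fabricated purely to show the shape; only the field names come from the table above:

```python
import base64

def save_screenshot(response, path):
    """Decode the base64 screenshot field and write it out as a PNG file."""
    png_bytes = base64.b64decode(response["screenshot"])
    with open(path, "wb") as f:
        f.write(png_bytes)
    return png_bytes

# Fabricated sample response (fields truncated), just to demonstrate decoding:
sample = {
    "success": True,
    "content": "# Example Domain",
    "screenshot": base64.b64encode(b"\x89PNG\r\n\x1a\nfake-image-data").decode(),
    "logs": ["navigated", "rendered", "captured screenshot"],
}
```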
| Field | Type | Description |
|---|---|---|
| success | boolean | Whether the request was processed successfully (always false) |
| error | string | A human-readable error message describing what went wrong |
| logs | string[] | Processing log messages up to the point of failure |
| Status | Description |
|---|---|
| 200 | Success: page scraped, content and screenshot returned |
| 422 | Validation error: missing or invalid URL, out-of-range wait_seconds, or invalid enum value |
| 500 | Internal error or scraping failure: the browser failed to load or render the page |
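A caller can branch on these statuses. This dispatch sketch (names illustrative) separates validation errors, which require fixing the request, from scraping failures, which may be worth retrying:

```python
def handle_response(status, body):
    """Map a status code and parsed JSON body to a simple outcome tuple."""
    if status == 200 and body.get("success"):
        return ("ok", body["content"])
    if status == 422:
        # Client-side bug: fix the parameters before retrying.
        return ("fix_request", body.get("error", "invalid parameters"))
    if status == 500:
        # Scraping failures are often transient; logs show how far it got.
        return ("retry", body.get("error", "scraping failed"))
    return ("unexpected", f"status {status}")
```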
- Use wait_seconds for JavaScript-heavy pages that need time to render content.
- The "raw" output format returns the full unprocessed HTML; use "markdown" or "text" for cleaner data.
- The screenshot field is a base64-encoded PNG; decode it to save or display the rendered page.
- Failed requests return a 500 status with success: false, an error message, and any logs collected before the failure.