Getting Started

Welcome to the ShadowLogin API documentation. Our API lets you extract data from any website with a single API call, handling JavaScript rendering, CAPTCHA solving, and proxy rotation automatically.

Base URL

https://api.shadowlogin.net

Quick Start

cURL
curl -X POST https://api.shadowlogin.net/v1/scrape \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your_api_key" \
  -d '{
    "url": "https://example.com",
    "render_js": true
  }'

Authentication

All API requests require authentication using an API key. Include your API key in the X-API-Key header.

Header Format

X-API-Key: sk_live_xxxxxxxxxxxxxxxxxxxxxxxx

Security: Never expose your API key in client-side code. Always make API calls from your server.
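Server-side, the key can be read from an environment variable and attached to each request. A minimal Python sketch using only the standard library; the header name and base URL come from this page, while the environment-variable name and function are illustrative:

```python
import json
import os
import urllib.request

API_BASE = "https://api.shadowlogin.net"

def build_request(path, payload, api_key):
    """Build an authenticated POST request carrying the X-API-Key header."""
    return urllib.request.Request(
        API_BASE + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "X-API-Key": api_key,
        },
        method="POST",
    )

# Read the key from the environment so it never appears in source control.
req = build_request(
    "/v1/scrape",
    {"url": "https://example.com", "render_js": True},
    os.environ.get("SHADOWLOGIN_API_KEY", "sk_live_placeholder"),
)
```

Pass the resulting request to `urllib.request.urlopen` (or use any HTTP client with the same headers) to send it.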

Scrape Endpoint

The main endpoint for scraping web pages.

POST /v1/scrape

Request Parameters

Parameter    Type     Description
url          string   URL to scrape (required)
render_js    boolean  Enable JavaScript rendering (default: false)
wait_for     string   CSS selector to wait for before returning
country      string   Country code for geo-targeting (e.g., "US")
screenshot   boolean  Return screenshot (base64)
ai_extract   object   AI extraction configuration
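A request body can be assembled from these parameters, sending only the fields you actually set. A Python sketch (the parameter names and the `render_js` default come from the table above; the helper itself is illustrative):

```python
import json

def scrape_payload(url, render_js=False, wait_for=None, country=None,
                   screenshot=False, ai_extract=None):
    """Assemble a /v1/scrape request body, omitting optional fields
    that were not set. Names and defaults mirror the parameter table."""
    body = {"url": url, "render_js": render_js}
    if wait_for is not None:
        body["wait_for"] = wait_for
    if country is not None:
        body["country"] = country
    if screenshot:
        body["screenshot"] = True
    if ai_extract is not None:
        body["ai_extract"] = ai_extract
    return body

print(json.dumps(scrape_payload("https://example.com", render_js=True,
                                wait_for="#content", country="US"), indent=2))
```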

Example Response

{
  "success": true,
  "status_code": 200,
  "url": "https://example.com",
  "html": "<html>...</html>",
  "title": "Example Domain",
  "extracted_data": { ... },
  "credits_used": 1,
  "response_time_ms": 1234
}
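The fields above are all you need for basic result handling. A small Python sketch that checks `success` and reports the cost fields (the field names come from the example response; the helper is illustrative):

```python
def summarize(resp):
    """One-line summary of a scrape response, using the fields shown above."""
    if not resp.get("success"):
        return "failed with status {}".format(resp.get("status_code"))
    return "{} -> {} ({} credit(s), {} ms)".format(
        resp["url"], resp["status_code"],
        resp["credits_used"], resp["response_time_ms"])

sample = {
    "success": True, "status_code": 200, "url": "https://example.com",
    "credits_used": 1, "response_time_ms": 1234,
}
print(summarize(sample))
```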

AI Extraction

Extract structured data from web pages using AI. No CSS selectors are needed: just describe what you want in plain English.

POST /v1/ai/extract

Example Request

{
  "url": "https://amazon.com/dp/B09V3KXJPB",
  "prompt": "Extract the product name, price, rating, and number of reviews",
  "output_format": "json"
}

AI Models Available

Fast: claude-haiku-4-5

Best for simple extractions. Fastest response time.

Default: claude-sonnet-4-5

Balanced speed and accuracy. Recommended for most use cases.

Advanced: claude-opus-4-5

Best accuracy for complex extractions and reasoning.

Browser API

Control a real browser for complex scraping scenarios. Full Playwright compatibility.

POST /v1/browser/session

Create Session

{
  "browser": "chromium",
  "headless": true,
  "stealth": true,
  "proxy": "auto"
}

Available Actions

goto        Navigate to URL
click       Click element by selector
type        Type text into input
screenshot  Take screenshot
evaluate    Execute JavaScript
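A session script can be expressed as a sequence of these actions. A Python sketch; the action names come from the list above, but the exact payload shape (an `action` key plus parameters) is an illustrative assumption, not the documented wire format:

```python
def action(kind, **params):
    """Build one browser action. Valid action names are taken from the
    list above; the payload shape here is an illustrative assumption."""
    allowed = {"goto", "click", "type", "screenshot", "evaluate"}
    if kind not in allowed:
        raise ValueError("unknown action: " + kind)
    return {"action": kind, **params}

# A hypothetical login flow expressed as a sequence of actions.
steps = [
    action("goto", url="https://example.com/login"),
    action("type", selector="#email", text="user@example.com"),
    action("click", selector="button[type=submit]"),
    action("screenshot"),
]
```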

Google Sheets Integration

Push scraped data directly to Google Sheets with real-time sync.

Setup OAuth

GET /v1/features/sheets/auth-url

Push Data

{
  "spreadsheetId": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms",
  "data": [
    {"name": "Product A", "price": "$99"},
    {"name": "Product B", "price": "$149"}
  ],
  "options": {
    "sheetName": "Products",
    "includeHeaders": true
  }
}

Rate Limits

Rate limits vary by plan:

Plan                Rate Limit
Free Trial          10 requests/second
Micro / Starter     50 requests/second
Advanced / Venture  50-100 requests/second
Enterprise          Custom (up to unlimited)
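To avoid 429 responses, you can throttle client-side to your plan's rate. A minimal token-bucket sketch in Python (the rates come from the table above; the class itself is illustrative, and the caller supplies the clock so it stays deterministic):

```python
class TokenBucket:
    """Client-side throttle to stay under a per-plan request rate.
    The caller passes the current time to allow(), which keeps the
    class deterministic and easy to test."""

    def __init__(self, rate_per_sec, burst=None):
        self.rate = float(rate_per_sec)
        self.capacity = float(burst) if burst is not None else self.rate
        self.tokens = self.capacity
        self.last = 0.0

    def allow(self, now):
        """Return True if a request may be sent at time `now` (seconds)."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

For example, `TokenBucket(10)` models the Free Trial limit: ten requests pass immediately, then the bucket refills at ten tokens per second.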

Error Handling

The API uses standard HTTP status codes:

Code  Description
200   Success
400   Bad request (invalid parameters)
401   Unauthorized (invalid API key)
429   Rate limit exceeded
500   Server error (not billed)

Note: You are only charged for completed requests (2xx and 4xx status codes). Server errors (5xx) are never billed.
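The status codes and billing note above translate into two simple client-side predicates. A Python sketch (the code ranges come from this section; the retry policy for 429 is a common practice, not a documented requirement):

```python
def is_billed(status):
    """Per the billing note above: 2xx and 4xx responses consume
    credits, 5xx responses never do."""
    return 200 <= status <= 299 or 400 <= status <= 499

def should_retry(status):
    """Retry rate-limited (429) and server-error (5xx) responses;
    other 4xx errors indicate a bad request and will not succeed
    if simply resent."""
    return status == 429 or 500 <= status <= 599
```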