Error Medic

Cloudflare API Timeout: Fix 524, 522 & Script Execution Errors (Complete Guide)

Fix Cloudflare API timeout errors (524, 522, Workers CPU limit exceeded). Step-by-step diagnosis and fixes including origin tuning, retry logic, and Worker optimization.

Key Takeaways
  • HTTP 524 (A Timeout Occurred) means Cloudflare connected to your origin but the origin did not respond within the configured timeout window (100 seconds by default; Enterprise zones can raise it).
  • HTTP 522 (Connection Timed Out) means Cloudflare could not complete the TCP handshake with your origin within 15 seconds — usually a network, firewall, or overloaded-server problem.
  • Cloudflare Workers throw 'Error: Script execution exceeded time limit' when a Worker runs longer than the CPU-time budget (10 ms on the free tier, 50 ms or up to 30 s wall-clock on paid plans).
  • Quick fix path: (1) confirm which timeout type you have via the error code and Cloudflare logs, (2) increase origin proxy timeout in the Cloudflare dashboard or via API, (3) optimize slow origin queries or offload work to a queue, (4) add idempotent retry logic with exponential back-off in your client.
Fix Approaches Compared
| Method | When to Use | Time to Implement | Risk |
|---|---|---|---|
| Increase Cloudflare proxy timeout (Dashboard/API) | Origin is genuinely slow but correct; you own the Cloudflare zone | 5 minutes | Low — only affects request wait time |
| Optimize origin query / add DB index | Slow SQL or downstream API call identified in APM traces | 30 min – 2 hours | Low if tested in staging first |
| Move long work to async queue + webhook/polling | Operation fundamentally exceeds any safe HTTP timeout (> 30 s) | Half day – 1 day | Medium — requires client-side changes |
| Enable Cloudflare Argo Smart Routing | Latency caused by suboptimal BGP routing between PoP and origin | 15 minutes | Low — paid add-on, can disable |
| Increase Worker CPU/duration limits (Workers Paid plan) | Cloudflare Worker hitting free-tier 10 ms CPU cap | 1 hour (plan upgrade + code audit) | Low |
| Implement client-side exponential back-off + idempotency key | Transient timeouts due to traffic spikes | 1–2 hours | Low — purely client code |
| Split large API payload into paginated requests | Request body or response body causing read timeout | 2–4 hours | Medium — API contract change |

Understanding Cloudflare API Timeout Errors

Cloudflare sits between your clients and your origin server, acting as a reverse proxy and CDN. Every request flows through a Cloudflare Point of Presence (PoP) before reaching your infrastructure. This architecture introduces several distinct timeout boundaries, each of which produces a different error code and requires a different fix.

The Four Timeout Boundaries

1. Client → Cloudflare (upstream read timeout) Cloudflare waits up to 90 seconds for the client to finish sending the request body. Slow POST uploads can trigger this. You will see a generic connection drop on the client side.

2. Cloudflare → Origin TCP connect (522) Error message in browser: 522 Connection Timed Out (Cloudflare serves this as an HTML error page, not a JSON body). Cloudflare attempts the TCP handshake for 15 seconds. If your origin does not complete the SYN-ACK exchange in that window, Cloudflare returns 522. Common causes: firewall blocking Cloudflare IPs, origin server completely overloaded, wrong origin IP configured.

3. Cloudflare → Origin response (524) Error message: 524 A Timeout Occurred. Cloudflare successfully connected and sent the request, but the origin did not return the first byte of a response within the proxy_read_timeout window. This defaults to 100 seconds on every plan; Enterprise zones can raise it via the Cloudflare API.

4. Cloudflare Workers CPU / wall-clock limit Error returned to the client: {"error": "Worker threw exception"} with a 500 status, and in the Workers dashboard log: Error: Script execution exceeded time limit. or CPU time limit exceeded. Free Workers are capped at 10 ms CPU time per request. Workers Paid (Bundled) allows 50 ms CPU time. Workers Paid (Unbound) allows up to 30 seconds wall-clock time but bills per CPU millisecond.


Step 1: Diagnose — Identify Which Timeout You Have

Check the HTTP status code first.

524 → proxy_read_timeout (origin too slow to respond)
522 → TCP connect timeout (origin unreachable)
408 → client upload too slow
500 + "Script execution exceeded time limit" → Workers CPU budget
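This mapping can be encoded as a small triage helper. A minimal sketch; the `classify_timeout` name and the body-substring check are illustrative, not part of any Cloudflare API:

```python
def classify_timeout(status: int, body: str = "") -> str:
    """Map an HTTP status (and optional response body) to the likely timeout boundary."""
    if status == 524:
        return "proxy_read_timeout: origin too slow to respond"
    if status == 522:
        return "tcp_connect_timeout: origin unreachable"
    if status == 408:
        return "client_upload_timeout: request body sent too slowly"
    if status == 500 and "exceeded time limit" in body:
        return "workers_cpu_limit: Worker exceeded its CPU budget"
    return "not_a_timeout: investigate other causes"

print(classify_timeout(524))  # → proxy_read_timeout: origin too slow to respond
```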

Pull the raw response headers with curl to confirm:

curl -I -s -o /dev/null -w "%{http_code} %{time_total}s\n" https://api.example.com/slow-endpoint

Check Cloudflare's Security Events and Cache Analytics logs in the dashboard (Analytics → Logs) for the EdgeResponseStatus field. If you have Cloudflare Logpush enabled, query your SIEM:

# Example: query Logpush JSON lines pushed to S3 / R2
jq 'select(.EdgeResponseStatus == 524 or .EdgeResponseStatus == 522) | {ts: .EdgeStartTimestamp, ip: .ClientIP, url: .ClientRequestURI, ttfb: .OriginResponseTime}' cloudflare-*.log.json

The OriginResponseTime field (in nanoseconds) tells you exactly how long Cloudflare waited before giving up. If it is close to 100,000,000,000 ns (100 s), you have a proxy read timeout.
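A quick way to see whether your 524s cluster at that ceiling is to convert OriginResponseTime to seconds across your Logpush records. A sketch, assuming JSON-lines input with the field names shown above; the sample records are illustrative:

```python
import json

def origin_wait_seconds(logpush_lines):
    """Extract origin wait times (in seconds) for 522/524 events from Logpush JSON lines."""
    waits = []
    for line in logpush_lines:
        rec = json.loads(line)
        if rec.get("EdgeResponseStatus") in (522, 524):
            waits.append(rec["OriginResponseTime"] / 1e9)  # nanoseconds -> seconds
    return waits

lines = [
    '{"EdgeResponseStatus": 524, "OriginResponseTime": 100000000000}',
    '{"EdgeResponseStatus": 200, "OriginResponseTime": 250000000}',
]
print(origin_wait_seconds(lines))  # values near 100.0 indicate the proxy read timeout
```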

For Workers, open the Workers dashboard → your Worker → Logs tab. The structured log line looks like:

{"outcome": "exceeded_cpu", "scriptName": "my-api-worker", "cpuTime": 10, "wallTime": 12, "event": {"request": {"url": "https://..."}}}
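To see how often a Worker actually hits the budget, you can tally outcomes across structured log lines. A sketch, assuming JSON lines shaped like the example above (e.g. captured from `wrangler tail --format json`); the sample lines are illustrative:

```python
import json
from collections import Counter

def count_outcomes(log_lines):
    """Tally Worker invocation outcomes (e.g. 'ok' vs 'exceeded_cpu') from JSON log lines."""
    return Counter(json.loads(line).get("outcome", "unknown") for line in log_lines)

lines = [
    '{"outcome": "ok", "cpuTime": 3}',
    '{"outcome": "exceeded_cpu", "cpuTime": 10}',
    '{"outcome": "exceeded_cpu", "cpuTime": 10}',
]
print(count_outcomes(lines))
```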

Step 2: Fix 522 — TCP Connect Timeout

2a. Verify Cloudflare can reach your origin IP.

Log into your server and confirm the Cloudflare IP ranges are allowlisted in your firewall:

# Download current Cloudflare IP ranges
curl -s https://www.cloudflare.com/ips-v4 | sudo tee /etc/nginx/cloudflare_ips.txt
curl -s https://www.cloudflare.com/ips-v6 | sudo tee -a /etc/nginx/cloudflare_ips.txt

# Check iptables for drops
sudo iptables -L INPUT -n -v | grep DROP
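Once you have cloudflare_ips.txt, you can generate allow rules from it instead of typing them by hand. A sketch only; adapt the port, chain, and rule order to your own firewall layout, and note the two sample CIDRs are from Cloudflare's published IPv4 list:

```python
def iptables_allow_rules(ranges, port=443):
    """Build iptables commands that allow inbound HTTPS from the given CIDR ranges."""
    return [
        f"iptables -A INPUT -p tcp --dport {port} -s {cidr.strip()} -j ACCEPT"
        for cidr in ranges if cidr.strip()
    ]

# In practice: ranges = open("/etc/nginx/cloudflare_ips.txt").readlines()
sample = iptables_allow_rules(["173.245.48.0/20", "103.21.244.0/22"])
print("\n".join(sample))
```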

2b. Test TCP connectivity from a machine outside your own network:

# From a machine NOT behind your own firewall:
nc -zv your-origin-ip 443
traceroute -T -p 443 your-origin-ip

2c. Check origin server health.

ss -s          # connection states — look for high SYN_RECV (SYN flood or overload)
top -bn1 | head -20
tail -100 /var/log/nginx/error.log

Step 3: Fix 524 — Proxy Read Timeout

3a. Increase the Cloudflare proxy timeout (Enterprise zones).

Cloudflare exposes the proxy_read_timeout setting via its API on Enterprise plans, with a maximum of 6000 seconds.

# Get your zone ID
export CF_API_TOKEN="your_token"
export ZONE_ID=$(curl -s -X GET "https://api.cloudflare.com/client/v4/zones?name=example.com" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" | jq -r '.result[0].id')

# View current proxy timeout setting
curl -s -X GET "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/settings/proxy_read_timeout" \
  -H "Authorization: Bearer $CF_API_TOKEN" | jq .

# Set proxy_read_timeout to 300 seconds (5 minutes)
curl -s -X PATCH "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/settings/proxy_read_timeout" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"value": 300}' | jq .

3b. Identify what is slow on the origin. Instrument your slowest endpoints before blindly raising timeouts:

# Quick nginx slow log — log requests taking > 3 s
# In nginx.conf:
# log_format timed '$remote_addr - $request_time $upstream_response_time $request';
# access_log /var/log/nginx/timed.log timed;

# Parse slow requests (field 3 = $request_time; fields 5-6 = method and URI)
awk '$3 > 3 {print $3, $5, $6}' /var/log/nginx/timed.log | sort -rn | head -20
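For a percentile view of the same timings, the log can be summarized in a few lines of Python. A sketch, assuming the `timed` log format above, where field 3 is $request_time; the sample lines are synthetic:

```python
import statistics

def p95_request_time(log_lines):
    """95th-percentile request time from the 'timed' nginx log format (field 3 = $request_time)."""
    times = [float(line.split()[2]) for line in log_lines if line.strip()]
    return statistics.quantiles(times, n=20)[18]  # 19 cut points; index 18 is the 95th

sample = [f"10.0.0.1 - {t:.1f} 0.1 GET /orders HTTP/1.1" for t in [0.2] * 19 + [9.0]]
print(f"p95 = {p95_request_time(sample):.2f}s")
```

A high p95 with a healthy median usually points at a specific slow query or endpoint rather than general overload.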

3c. Add missing database indexes — the most common root cause of unexpectedly slow API responses:

-- PostgreSQL: find sequential scans on large tables
SELECT relname, seq_scan, seq_tup_read, idx_scan
FROM pg_stat_user_tables
WHERE seq_scan > 0
ORDER BY seq_tup_read DESC
LIMIT 10;

-- Explain the slow query
EXPLAIN (ANALYZE, BUFFERS) SELECT ...;

3d. Offload genuinely long operations to an async queue.

For operations that will never complete in under 30 seconds (report generation, bulk exports, ML inference), the correct pattern is:

  1. API endpoint enqueues the job and immediately returns 202 Accepted with a job_id.
  2. A background worker processes the job.
  3. Client polls GET /jobs/{job_id} or receives a webhook when done.

This pattern entirely sidesteps HTTP timeout limits.
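The three steps above can be sketched in-process with the standard library. A minimal illustration only: in production the queue would be a real broker (Celery, Sidekiq, Cloudflare Queues, etc.) and the doubling "work" is a stand-in:

```python
import queue
import threading
import uuid

jobs = {}                     # job_id -> {"status": ..., "result": ...}
work_queue = queue.Queue()

def enqueue_job(payload):
    """Step 1: enqueue the job and return a 202-style response immediately."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "pending", "result": None}
    work_queue.put((job_id, payload))
    return {"status": 202, "job_id": job_id}

def worker():
    """Step 2: background worker drains the queue."""
    while True:
        job_id, payload = work_queue.get()
        jobs[job_id] = {"status": "done", "result": payload["n"] * 2}  # stand-in for real work
        work_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

# Step 3: client polls for the result (here we just wait for the queue to drain)
resp = enqueue_job({"n": 21})
work_queue.join()
print(jobs[resp["job_id"]])  # → {'status': 'done', 'result': 42}
```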


Step 4: Fix Workers CPU/Wall-Clock Timeout

4a. Measure where a Worker spends its time with Date.now() checkpoints. Note that inside Workers, Date.now() only advances across I/O boundaries (for example, after an awaited fetch), so this measures elapsed wall-clock time around I/O rather than pure CPU time:

// workers/api.js
export default {
  async fetch(request, env, ctx) {
    const t0 = Date.now();

    const result = await expensiveOperation(request, env);

    console.log(`Elapsed after expensiveOperation: ${Date.now() - t0}ms`);
    return new Response(JSON.stringify(result), { status: 200 });
  }
};

4b. Move CPU-intensive work to Durable Objects or offload to a backend service. Workers are designed for request routing and lightweight transformation, not heavy computation.

4c. Upgrade to Workers Paid (Unbound) for legitimate long-running Workers:

# wrangler.toml
name = "my-api-worker"
main = "src/index.js"
usage_model = "unbound"   # enables up to 30s wall-clock, billed per CPU ms

[limits]
cpu_ms = 30000

Then deploy:

wrangler deploy --env production

Step 5: Add Client-Side Retry Logic

Regardless of which fix you apply on the server, your API clients should handle transient timeouts gracefully with exponential back-off and idempotency keys:

import httpx
import time
import uuid

def call_api_with_retry(url: str, payload: dict, max_attempts: int = 4) -> dict:
    idempotency_key = str(uuid.uuid4())
    headers = {"Idempotency-Key": idempotency_key}
    
    for attempt in range(max_attempts):
        try:
            response = httpx.post(url, json=payload, headers=headers, timeout=120.0)
            response.raise_for_status()
            return response.json()
        except (httpx.TimeoutException, httpx.HTTPStatusError) as exc:
            # Do not retry 4xx client errors; they will fail the same way on retry
            if isinstance(exc, httpx.HTTPStatusError) and exc.response.status_code < 500:
                raise
            if attempt == max_attempts - 1:
                raise
            wait = (2 ** attempt) + 0.1  # 1.1, 2.1, 4.1 s
            print(f"Attempt {attempt + 1} failed ({exc}), retrying in {wait:.1f}s")
            time.sleep(wait)
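If many clients time out and retry in lockstep, fixed exponential delays can synchronize into a retry storm against an already-struggling origin. Adding "full jitter" spreads the retries out. A sketch of the delay computation only; base and cap values are illustrative:

```python
import random

def backoff_with_full_jitter(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Return a random delay in [0, min(cap, base * 2**attempt)] seconds."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))

for attempt in range(4):
    print(f"attempt {attempt}: sleep {backoff_with_full_jitter(attempt):.2f}s")
```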

Complete Diagnostic Script

The script below combines the checks from the steps above into a single run:

#!/usr/bin/env bash
# cloudflare-timeout-diag.sh
# Comprehensive diagnostic script for Cloudflare API timeout errors
# Usage: CF_API_TOKEN=xxx ZONE_NAME=example.com bash cloudflare-timeout-diag.sh

set -euo pipefail

CF_API="https://api.cloudflare.com/client/v4"
ZONE_NAME="${ZONE_NAME:?Set ZONE_NAME}"
ORIGIN_IP="${ORIGIN_IP:-}"   # optional: your origin IP to test directly
TEST_URL="${TEST_URL:-https://$ZONE_NAME/}"  # endpoint to probe

# ── 1. Resolve Cloudflare Zone ID ──────────────────────────────────────────
echo "[1/6] Fetching zone ID for $ZONE_NAME ..."
ZONE_ID=$(curl -sf -X GET "$CF_API/zones?name=$ZONE_NAME" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" | jq -r '.result[0].id')
echo "      Zone ID: $ZONE_ID"

# ── 2. Read current proxy_read_timeout ─────────────────────────────────────
echo "[2/6] Current proxy_read_timeout setting:"
curl -sf -X GET "$CF_API/zones/$ZONE_ID/settings/proxy_read_timeout" \
  -H "Authorization: Bearer $CF_API_TOKEN" | jq '{value: .result.value, editable: .result.editable}'

# ── 3. Probe the endpoint — measure TTFB and total time ────────────────────
echo "[3/6] Probing $TEST_URL (3 runs) ..."
for i in 1 2 3; do
  curl -sS -o /dev/null -w "  Run $i: HTTP %{http_code}  TTFB %{time_starttransfer}s  Total %{time_total}s\n" \
    --connect-timeout 20 --max-time 130 "$TEST_URL"
done

# ── 4. Test direct origin connectivity (bypass Cloudflare) ─────────────────
if [[ -n "$ORIGIN_IP" ]]; then
  echo "[4/6] Testing direct origin at $ORIGIN_IP ..."
  curl -sS -o /dev/null -w "  Direct: HTTP %{http_code}  TTFB %{time_starttransfer}s\n" \
    --resolve "$ZONE_NAME:443:$ORIGIN_IP" \
    --connect-timeout 15 --max-time 130 "$TEST_URL"
else
  echo "[4/6] Skipped (set ORIGIN_IP to enable direct-origin test)"
fi

# ── 5. Verify Cloudflare IPs are reachable to origin ───────────────────────
echo "[5/6] Cloudflare IPv4 ranges (allowlist these on your origin firewall):"
curl -sf https://www.cloudflare.com/ips-v4

# ── 6. Tail recent 524/522 events from Logpush (if available locally) ──────
if ls cloudflare-*.log.json 1>/dev/null 2>&1; then
  echo "[6/6] Recent 524/522 events from local Logpush files:"
  jq -r 'select(.EdgeResponseStatus == 524 or .EdgeResponseStatus == 522) |
    [.EdgeStartTimestamp, .EdgeResponseStatus, .ClientRequestURI,
     (.OriginResponseTime / 1e9 | tostring) + "s"] | @tsv' \
    cloudflare-*.log.json | sort | tail -20
else
  echo "[6/6] No local Logpush files found — skipping log analysis"
fi

echo "Done. Review output above to identify which timeout boundary is breaching."

# ── Bonus: set proxy_read_timeout to 300s (uncomment to apply) ─────────────
# curl -sX PATCH "$CF_API/zones/$ZONE_ID/settings/proxy_read_timeout" \
#   -H "Authorization: Bearer $CF_API_TOKEN" \
#   -H "Content-Type: application/json" \
#   --data '{"value": 300}' | jq .

Error Medic Editorial

The Error Medic Editorial team is composed of senior DevOps engineers, SREs, and cloud architects with hands-on experience managing high-traffic systems on Cloudflare, AWS, GCP, and Azure. We write practical, command-first troubleshooting guides based on real incidents and post-mortems, not marketing copy.
