Error Medic

Resolving Notion API Rate Limit (HTTP 429) and 502 Bad Gateway Errors

A comprehensive DevOps guide to diagnosing and fixing Notion API rate limit (429) and 502 Bad Gateway errors using exponential backoff, jitter, and query optimization.

Key Takeaways
  • Root Cause 1: Exceeding Notion's rate limit of an average of 3 requests per second per integration triggers HTTP 429 Too Many Requests errors.
  • Root Cause 2: Deeply nested database queries, complex rollups, or excessively large block payloads cause upstream timeouts, manifesting as HTTP 502 Bad Gateway errors.
  • Quick Fix: Implement a Redis-backed Token Bucket rate limiter to cap outbound requests, use exponential backoff with jitter for retries, and reduce pagination page_size for heavy databases.
Resilience Strategies Compared
Method | When to Use | Time to Implement | Risk Level
Naive Retry (while loop) | Local scripts, low-volume testing | 5 mins | High (Thundering Herd risk)
Exponential Backoff + Jitter | Standard API integrations, single-node services | 30 mins | Low
Distributed Queue (Redis/SQS) | Multi-node microservices, enterprise syncs | 2-3 hours | Very Low
Payload Chunking | Fixing persistent 502 Bad Gateway errors on large writes | 1 hour | Low

Understanding Notion API HTTP 429 and 502 Errors

When integrating with the Notion API, developers frequently encounter two notorious HTTP status codes: 429 Too Many Requests and 502 Bad Gateway. While they manifest differently, they are often symptoms of the same underlying issue: aggressive polling, inefficient querying, or pushing payloads that exceed Notion's backend processing capabilities.

Notion explicitly enforces a rate limit averaging 3 requests per second (RPS) per integration; brief bursts above the average are tolerated. If your application sustains traffic exceeding this threshold, Notion's API gateway returns an HTTP 429 Too Many Requests response. The response body will typically look like this:

{
  "object": "error",
  "status": 429,
  "code": "rate_limited",
  "message": "Rate limited."
}

Conversely, the HTTP 502 Bad Gateway error typically occurs when the API gateway fails to receive a timely response from Notion's upstream database servers. In the context of the Notion API, a 502 is almost always a backend timeout masquerading as a gateway error. This usually happens when:

  1. You are querying a massive database with deeply nested relations or complex rollups.
  2. You are attempting to append thousands of block children in a single API call.
  3. Notion's internal infrastructure is experiencing temporary degradation under load.

Step 1: Diagnosing the Bottleneck

Before writing code to blindly retry failed requests, you must identify whether you are hitting a hard rate limit (429) or a performance bottleneck (502).

Log Analysis: Inspect your application logs. If you see intermittent 429 errors during scheduled bulk syncs, your concurrency is too high. If you see 502 errors that consistently fail on the exact same database query, your payload or query complexity is the culprit.

Checking Headers: When you receive a 429, look for the Retry-After header. However, note that Notion's API does not reliably return a standard Retry-After header with a precise second count. Your logic must be prepared to fall back on an intelligent exponential backoff strategy regardless of header presence.
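As a concrete illustration, here is a small helper that extracts a usable delay from the Retry-After header and signals absence so the caller can fall back to backoff (the function name is illustrative):

```python
def parse_retry_after(headers):
    """Extract a usable Retry-After delay in seconds, or None.

    Notion does not reliably send this header, and HTTP also allows
    an HTTP-date form, so callers must tolerate both absence and
    unparseable values by falling back to exponential backoff.
    """
    value = headers.get("Retry-After")
    if value is None:
        return None
    try:
        return max(0.0, float(value))
    except ValueError:
        return None  # HTTP-date or garbage: let backoff logic decide
```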

Step 2: Fixing 429 Rate Limit Errors (Architectural Solutions)

To permanently resolve 429 Too Many Requests, you must implement client-side throttling and robust retry mechanisms.

1. Global Rate Limiter (Token Bucket / Leaky Bucket): If your application runs across multiple containers or serverless functions (like AWS Lambda or Vercel), a local in-memory rate limiter (like a simple thread sleep) will not work because the instances do not share memory. You need a distributed rate limiter. Using Redis to implement a Token Bucket algorithm ensures that across your entire infrastructure, no more than 3 requests are dispatched to Notion per second.
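A minimal single-process sketch of the Token Bucket algorithm follows; in a multi-node deployment the same refill-and-take logic would live in a Redis Lua script so every instance draws from one shared bucket (class and parameter names here are illustrative):

```python
import time

class TokenBucket:
    """Token bucket capping outbound requests at `rate` per second.

    Single-process sketch: in a distributed setup, the token count and
    last-refill timestamp would be stored in Redis and updated atomically
    by a Lua script instead of instance attributes.
    """
    def __init__(self, rate=3.0, capacity=3, clock=time.monotonic):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # burst ceiling
        self.tokens = float(capacity)
        self.clock = clock        # injectable for deterministic testing
        self.last = clock()

    def try_acquire(self):
        """Take one token if available; return False to signal 'wait'."""
        now = self.clock()
        # Refill proportionally to elapsed time, never above capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```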

2. Exponential Backoff with Jitter: When a request fails with a 429, do not retry immediately. Implement an exponential backoff (e.g., wait 1s, then 2s, then 4s, then 8s). Crucially, you must add jitter (randomized delay variance) to prevent the "Thundering Herd" problem, where multiple queued requests retry at the exact same millisecond and instantly trigger another 429 block.
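A minimal "full jitter" delay function, assuming a base delay of one second and a 16-second cap (both tunable): each retry sleeps a random duration up to the exponential ceiling, so queued requests spread out instead of stampeding Notion at the same instant.

```python
import random

def backoff_delay(attempt, base=1.0, cap=16.0):
    """Full-jitter exponential backoff: return a random sleep duration in
    [0, min(cap, base * 2^attempt)] seconds for the given retry attempt."""
    ceiling = min(cap, base * (2 ** attempt))
    return random.uniform(0, ceiling)
```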

Step 3: Fixing 502 Bad Gateway Errors (Query Optimization)

A 502 error requires a different approach. Retrying a bloated request will likely just result in another 502 and waste your rate limit quota.

1. Pagination is Mandatory: Never attempt to fetch an entire database in one call. Use the start_cursor and page_size parameters. Reduce the page_size from the maximum (100) to a smaller number like 25 or 50 if you are querying a database with heavy relations or rich text fields.
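The cursor loop can be sketched as a generator. The URL is Notion's documented database query endpoint; the injectable `post` callable is a testing convenience, not part of any SDK:

```python
def iter_database_pages(database_id, headers, page_size=25, post=None):
    """Yield every row of a Notion database query, following
    start_cursor until has_more is False. A modest page_size keeps
    each response small enough to avoid upstream timeouts (502s)."""
    if post is None:
        import requests  # deferred so tests can inject a stub
        post = requests.post
    url = f"https://api.notion.com/v1/databases/{database_id}/query"
    payload = {"page_size": page_size}
    while True:
        resp = post(url, headers=headers, json=payload, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        yield from body["results"]
        if not body.get("has_more"):
            break
        payload["start_cursor"] = body["next_cursor"]
```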

2. Payload Chunking: When using the PATCH /v1/blocks/{block_id}/children endpoint, Notion allows up to 100 blocks per request. However, if those blocks contain complex rich text, tables, or embedded media, the upstream server might time out before completing the transaction. Chunk your block appends into batches of 20-30 blocks.
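A simple chunking helper (the default batch size of 25 is a conservative heuristic drawn from the guidance above, not an official limit):

```python
def chunk_blocks(blocks, size=25):
    """Split a list of Notion block objects into batches small enough
    for the upstream to commit before the gateway times out.

    Each batch would then be sent in its own call to
    PATCH /v1/blocks/{block_id}/children.
    """
    return [blocks[i:i + size] for i in range(0, len(blocks), size)]
```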

3. Simplify Filters: If querying a database consistently returns a 502, simplify your filter object. Offload complex logical filtering (like deeply nested AND/OR conditions) to your application layer. Fetch the broader dataset with a simple filter and perform the complex filtering in-memory within your application.
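One way to express that client-side logic is with small composable predicates that replace a deeply nested Notion filter object; the property names below are hypothetical examples:

```python
def matches_all(*predicates):
    """AND-combine predicates, replacing a nested 'and' filter."""
    return lambda page: all(p(page) for p in predicates)

def matches_any(*predicates):
    """OR-combine predicates, replacing a nested 'or' filter."""
    return lambda page: any(p(page) for p in predicates)

def select_equals(prop_name, value):
    """Predicate over a select property of a fetched page object
    (property names are illustrative and schema-specific)."""
    def check(page):
        sel = page.get("properties", {}).get(prop_name, {}).get("select") or {}
        return sel.get("name") == value
    return check
```

Fetch the broad result set with a trivial (or empty) filter, then apply `matches_all(...)` in a list comprehension over the returned pages.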

Step 4: Implementing the Fix

If you are using the official Node.js SDK (@notionhq/client) or the community Python SDK (notion-client), check their documentation for built-in retry support before layering your own. For enterprise workloads handling millions of rows, you should wrap your Notion API calls in a dedicated resilience library. In Python, the tenacity library is an industry standard for this pattern (see the code block below for an implementation). In Node.js, libraries like p-retry or bottleneck are highly recommended.

By combining a strict 3 RPS outbound queue with exponential backoff and payload chunking, you can eliminate self-inflicted 429 errors and the vast majority of 502 errors from your Notion integration pipelines, ensuring a highly available and reliable sync process. (502s caused by degradation inside Notion's own infrastructure can only be retried, not prevented.)

Reference Implementation (Python)

The script below combines the techniques above: tenacity-driven retries, distinct exception types for 429 versus 502, and a conservative page_size.
import requests
import time
import logging
from tenacity import retry, stop_after_attempt, wait_random_exponential, retry_if_exception_type

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

class NotionRateLimitException(Exception):
    pass

class NotionGatewayException(Exception):
    pass

# Retry decorator: randomized exponential backoff ("full jitter").
# Sleeps a random duration up to 2^attempt seconds, capped at 10 seconds,
# for at most 5 attempts.
@retry(
    stop=stop_after_attempt(5),
    wait=wait_random_exponential(multiplier=1, max=10),
    retry=retry_if_exception_type((NotionRateLimitException, NotionGatewayException))
)
def query_notion_database_with_retry(database_id, headers, payload):
    url = f"https://api.notion.com/v1/databases/{database_id}/query"
    
    logger.info(f"Attempting to query Notion DB: {database_id}")
    response = requests.post(url, headers=headers, json=payload, timeout=30)
    
    if response.status_code == 429:
        logger.warning("HTTP 429 Too Many Requests detected. Triggering backoff.")
        # Respect Retry-After if present, though tenacity handles the core backoff
        retry_after = response.headers.get("Retry-After")
        if retry_after:
            try:
                time.sleep(float(retry_after))
            except ValueError:
                pass  # non-numeric Retry-After value; rely on tenacity's backoff
        raise NotionRateLimitException("Rate limited by Notion API")
        
    elif response.status_code == 502:
        logger.warning("HTTP 502 Bad Gateway detected. Upstream timeout. Triggering backoff.")
        raise NotionGatewayException("Notion backend timeout")
        
    response.raise_for_status()
    return response.json()

# Example Usage
NOTION_TOKEN = "secret_your_token_here"
DB_ID = "your_database_id"
HEADERS = {
    "Authorization": f"Bearer {NOTION_TOKEN}",
    "Notion-Version": "2022-06-28",
    "Content-Type": "application/json"
}

# Keep page_size low to avoid 502s
PAYLOAD = {
    "page_size": 25 
}

try:
    data = query_notion_database_with_retry(DB_ID, HEADERS, PAYLOAD)
    print("Successfully retrieved data!")
except Exception as e:
    print(f"Operation failed after max retries: {e}")

Error Medic Editorial

Error Medic Editorial is composed of senior Site Reliability Engineers and DevOps practitioners dedicated to solving the most complex API integrations, infrastructure scaling challenges, and production incidents.
