Error Medic

Resolving Salesforce Data Migration Timeouts, Row Locks, and Slow Performance: A Complete Troubleshooting Guide

Fix Salesforce data migration timeouts (UNABLE_TO_LOCK_ROW, Apex CPU limit). Learn to optimize configuration, resolve crashes, and fix slow performance.

Key Takeaways
  • Data skew (Account or Ownership Skew) is the primary root cause of UNABLE_TO_LOCK_ROW errors and severe slow performance during parallel data loads.
  • Active Apex triggers, flows, and validation rules frequently inflate transaction times, leading to 'System.LimitException: Apex CPU time limit exceeded'.
  • Quick Fix: Sort migration data by Parent ID, switch Bulk API jobs to Serial mode if necessary, and temporarily bypass automation/defer sharing calculations.
Data Migration Performance Optimization Strategies
| Method | When to Use | Time to Implement | Risk/Impact |
| --- | --- | --- | --- |
| Sort Data by Parent ID | When encountering UNABLE_TO_LOCK_ROW on child object inserts (e.g., Contacts, Opportunities). | Low | Low - purely a data preparation step. |
| Bulk API V2 (Parallel) | For massive datasets (>1M records) without severe data skew. | Medium | Low - Salesforce manages chunking automatically. |
| Bulk API V1 (Serial Mode) | When sorting data fails to resolve lock contention and parent records are heavily skewed. | Low | Medium - Significantly increases total migration time. |
| Defer Sharing Calculations | When migrating millions of records with complex, private sharing models, causing extreme CPU load. | Medium | High - Users may see incorrect data visibility until recalculation finishes. |
| Bypass Apex/Flow Automation | When hitting 'Apex CPU time limit exceeded' or 101 SOQL limits during inserts. | High | High - Bypasses business logic; requires careful data validation post-migration. |

Understanding Salesforce Data Migration Failures

When executing a large-scale Salesforce data migration, engineers frequently encounter blocking issues that cause the migration pipeline to fail, stall, or severely degrade the overall performance of the Salesforce organization. Symptoms often manifest as a localized Salesforce crash from the end-user's perspective, or widespread reports of Salesforce not working and Salesforce slow performance while background batch processes consume all available asynchronous resources.

Data loading is not merely about moving bytes; it is about interacting with a multi-tenant architecture that heavily regulates resource consumption. When you push millions of rows into Salesforce, you are simultaneously triggering a cascade of business logic, database indexing, and security recalculations.

Common error messages that engineers face during this process include:

  • UNABLE_TO_LOCK_ROW: unable to obtain exclusive access to this record
  • System.LimitException: Apex CPU time limit exceeded
  • REQUEST_LIMIT_EXCEEDED
  • CANNOT_INSERT_UPDATE_ACTIVATE_ENTITY
  • System.Exception: Too many SOQL queries: 101
  • Salesforce timeout errors during API calls (e.g., Read timed out or 504 Gateway Timeout).

This comprehensive Salesforce troubleshooting guide will walk you through diagnosing and resolving these exact bottlenecks. It serves as an advanced Salesforce configuration tutorial for enterprise-scale operations, ensuring your team can load data efficiently without impacting business continuity.

Step 1: Diagnose the Root Cause

Before indiscriminately altering your Salesforce configuration or throwing more resources at the problem, you must identify the precise technical bottleneck causing the migration to fail.

1. Analyze Row Locks (UNABLE_TO_LOCK_ROW)

Row locks are the bane of parallel data migrations. In the relational database that underpins Salesforce's multi-tenant architecture, certain operations require the database to lock a parent record to ensure data integrity. This happens predominantly when:

  • You are inserting child records (e.g., Contacts, Cases) that update roll-up summary fields on the parent Account.
  • You are changing the owner of a record, which triggers a recalculation of sharing rules that must lock the parent.

If you are migrating Contacts and thousands of them belong to the same Account (a phenomenon known as Account Data Skew), multiple parallel Bulk API batches will attempt to insert Contacts for that same Account simultaneously. They will all try to obtain an exclusive lock on the single parent Account record. If a batch waits longer than 10 seconds for this lock, a Salesforce timeout occurs, and the batch fails with UNABLE_TO_LOCK_ROW.
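Before loading, you can check your extract for this kind of skew by counting children per parent. A minimal sketch using only the Python standard library (the file path and `AccountId` column header are assumptions about your export; the ~10,000-child threshold follows Salesforce's usual definition of Account data skew):

```python
import csv
from collections import Counter

def find_skewed_parents(csv_path, parent_col="AccountId", threshold=10000):
    """Return parents whose child count meets or exceeds the skew threshold."""
    with open(csv_path, newline="") as f:
        counts = Counter(row[parent_col] for row in csv.DictReader(f))
    return {parent: n for parent, n in counts.items() if n >= threshold}
```

Any parent this flags is a lock-contention hotspot under parallel loading and a candidate for the sorting or serial-mode fixes below.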

2. Investigate CPU and SOQL Timeouts (Apex CPU time limit exceeded)

Salesforce enforces a strict 10,000 ms CPU time limit for synchronous transactions and 60,000 ms for asynchronous ones. During a Salesforce data migration, every inserted or updated record triggers downstream processing: inserting a chunk of 200 records forces the platform to evaluate Validation Rules, Before Triggers, After Triggers, Process Builders, and record-triggered Flows.

If your org has a heavy automation footprint (a monolithic Trigger architecture, or dozens of active Flows per object), processing 200 records can easily take longer than 10 seconds. When this happens, the transaction is rolled back, and the migration tool reports a failure.
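The arithmetic behind this failure mode is simple enough to sanity-check up front. A sketch (the 60 ms per-record cost is purely an illustrative assumption; measure your own org's automation with debug logs):

```python
SYNC_CPU_LIMIT_MS = 10_000  # Salesforce synchronous Apex CPU limit

def chunk_cpu_estimate(records_per_chunk, ms_per_record):
    """Estimate total CPU time for one insert chunk and whether it
    would breach the synchronous limit."""
    total = records_per_chunk * ms_per_record
    return total, total > SYNC_CPU_LIMIT_MS

# Hypothetical: 60 ms of trigger/flow CPU per record, 200-record chunk:
# 200 * 60 = 12,000 ms, which exceeds the 10,000 ms synchronous limit.
```

If the estimate exceeds the limit, either reduce the chunk size or bypass automation as described in Step 2.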

3. Identify Indexing and Query Timeouts

If you are performing an Upsert operation using an External ID, Salesforce must query the database to determine if the record exists. If the External ID field is not properly indexed, or if the index has become degraded due to high data volume, this search results in a full table scan. Full table scans on objects with millions of rows will inevitably result in a Salesforce timeout.

4. Check API Limits and Concurrency

If external integration layers are reporting Salesforce not working or returning REQUEST_LIMIT_EXCEEDED, your migration tool may be exhausting the org's 24-hour API request limit. Alternatively, you might be hitting the concurrent API request limit (typically 25 requests running longer than 20 seconds), which causes additional long-running requests to be rejected until others complete.
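It is worth estimating the job's API request consumption before you start. A rough back-of-the-envelope sketch (the fixed overhead for job create/close/status polling is an assumption; real tooling varies):

```python
import math

def api_request_estimate(total_records, records_per_request, overhead=10):
    """Rough count of API calls for a load: one request per batch plus a
    small fixed overhead for job management and status polling."""
    return math.ceil(total_records / records_per_request) + overhead

# 5M records via Bulk API at 10,000 per batch -> ~510 requests,
# versus the REST API at 200 records per call -> ~25,010 requests.
```

The gap illustrates why the Bulk API, not the REST or SOAP APIs, is the right tool for migrations: the same volume consumes orders of magnitude fewer requests against the daily allowance.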

Step 2: Implement the Fixes and Optimizations

Once the primary bottleneck is identified, you must adjust your strategy and Salesforce configuration.

Fix 1: Resolve Data Skew and Row Locks

To eliminate UNABLE_TO_LOCK_ROW errors, you must orchestrate your data delivery to avoid contention.

  1. Sort your data: This is the most effective and least destructive fix. Before pushing data to Salesforce, sort your CSV files or JSON payloads by the Parent ID (e.g., AccountId or OwnerId). By grouping all child records for a specific parent sequentially, they are packaged into the same Bulk API batch. The single batch acquires the lock once, processes all children, and releases it, completely eliminating parallel lock contention.
  2. Reduce Batch Size: If sorting isn't entirely effective because a single parent has a massive number of children, reduce the batch size: drop Bulk API V1 batches from the 10,000 maximum to something like 2,000, or drop SOAP/REST batches from 200 to 50. (Bulk API 2.0 chunks ingest data server-side in sets of up to 10,000 records, so batch size there is not directly configurable.)
  3. Use Serial Mode: As a last resort for heavily skewed data, configure your Bulk API V1 job to run in Serial mode rather than Parallel. This guarantees that only one batch runs at a time. It completely eliminates lock contention but drastically increases the total migration time.
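Steps 1 and 2 above belong in your data-preparation layer, before anything touches Salesforce. A minimal stdlib sketch that sorts by parent and then slices fixed-size batches (the column name is an assumption about your extract):

```python
import csv

def sort_and_chunk(in_path, parent_col="AccountId", batch_size=2000):
    """Sort child rows by parent ID, then yield fixed-size batches so that
    all children of one parent land in as few batches as possible,
    minimizing cross-batch lock contention on the parent record."""
    with open(in_path, newline="") as f:
        rows = sorted(csv.DictReader(f), key=lambda r: r[parent_col])
    for i in range(0, len(rows), batch_size):
        yield rows[i:i + batch_size]
```

Each yielded batch can then be written out as its own CSV or submitted as one Bulk API V1 batch; because the rows are pre-sorted, a given parent's children span at most two adjacent batches rather than many parallel ones.
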
Fix 2: Optimize Salesforce Configuration for Migration

Proper Salesforce configuration is crucial during a massive data load. You must strip down the transaction footprint to ensure quick database commits.

  1. Bypass Automation: The most common solution to CPU limit exceptions is bypassing custom logic. Implement a hierarchical Custom Setting (e.g., Automation_Bypass__c) with checkbox fields for Disable_Triggers__c and Disable_Validation_Rules__c. Update all Apex Triggers and Validation Rules to check this setting before executing.
    • Execution: During the migration window, enable the bypass for the specific Integration User profile executing the data load. This allows the data to slide into the database directly, bypassing expensive CPU cycles.
  2. Defer Sharing Calculations: By default, Salesforce recalculates sharing rules immediately when ownership changes, roles change, or records are created in a private sharing model. For migrations involving millions of records, this synchronous recalculation queues up massively, causing severe Salesforce slow performance.
    • Execution: Navigate to Setup -> Defer Sharing Calculations, and suspend them. Perform your data migration. Once the migration finishes, you must manually resume and trigger a full recalculation.
  3. Disable Roll-up Summary Fields: Roll-up summary fields strictly lock parent records and require database recalculations on every child insert/update/delete. Temporarily edit the parent object, remove or disable the roll-up summary fields, migrate the child data, and then recreate the fields to allow Salesforce to calculate the aggregates asynchronously in the background.
Fix 3: Optimize Network and API Usage

If you are experiencing frequent Salesforce timeout errors resulting in 504 Gateway Timeout or Read timed out at the network level:

  1. Switch to Bulk API 2.0: Ensure your migration tool (e.g., Data Loader, MuleSoft, Salesforce CLI) uses Bulk API 2.0. Bulk V2 handles chunking automatically on the server side, optimizing the load process and significantly reducing the risk of a client-side Salesforce crash due to memory or manual chunking errors.
  2. Increase Client Timeouts: If using a custom script (Python, Java, Node.js), ensure your HTTP client timeout is set high enough. The Salesforce Bulk API can take several minutes to process complex batches. Set your read timeouts to at least 10-15 minutes (600,000 - 900,000 ms).
  3. Request Custom Indexes: If your Upsert operations are timing out, contact Salesforce Support to request a custom database index on your External ID fields, especially if they are heavily used in WHERE clauses or relationship resolutions during the load.
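On the client side, a generous read timeout is best paired with exponential backoff for transient failures. A hedged, generic sketch using only the standard library (wrap your actual HTTP call, configured with a 600-900 second read timeout, in the callable you pass in):

```python
import time

def with_backoff(call, retries=4, base_delay=1.0, transient=(TimeoutError,)):
    """Run `call()`, retrying the listed transient exception types with
    exponential backoff (base_delay, 2x base_delay, 4x, ...)."""
    for attempt in range(retries):
        try:
            return call()
        except transient:
            if attempt == retries - 1:
                raise  # out of retries: surface the failure
            time.sleep(base_delay * (2 ** attempt))
```

For Salesforce loads, you would treat read timeouts and HTTP 502/503/504 responses as transient, and let authentication or validation errors fail fast so they are not masked by retries.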

Step 3: Post-Migration Validation and Cleanup

A data migration is not complete when the records are inserted; it is complete when the org is returned to its standard, secure state.

  1. Restore Configuration: Re-enable all Validation Rules, Triggers, and Flows by unchecking your Custom Setting bypasses.
  2. Recalculate Sharing: If you deferred sharing calculations, navigate to Setup and trigger the recalculation. Monitor the progress in the Background Jobs page.
  3. Run Regression Tests: Run a full Apex Test Execution (sf apex run test) to ensure no underlying code coverage or behavior was broken by the state of the newly migrated data.
  4. Monitor Asynchronous Queues: Check the AsyncApexJob table and Setup -> Apex Jobs to ensure deferred processes (like @future calls or Queueable jobs triggered right after the bypass was removed) complete successfully without failing due to data volume.
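The queue check in step 4 can be scripted against `sf data query ... --json` output. A sketch of the filtering step, assuming the CLI's standard `result.records` JSON envelope:

```python
def failed_async_jobs(query_result):
    """Given parsed JSON from an AsyncApexJob query, return jobs that
    failed outright or completed with row-level errors."""
    records = query_result["result"]["records"]
    return [
        r for r in records
        if r.get("Status") in ("Failed", "Aborted") or r.get("NumberOfErrors", 0) > 0
    ]
```

Running this after the bypass is removed gives a quick, scriptable signal that deferred automation caught up cleanly.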

By systematically applying these Salesforce troubleshooting methodologies, your engineering team can reliably execute high-throughput data migrations without compromising platform stability, causing downtime, or sacrificing data integrity.

Diagnostic Commands (Salesforce CLI)

bash
# Diagnostic commands using Salesforce CLI (sf)

# 1. Check current API limits to diagnose REQUEST_LIMIT_EXCEEDED
sf org list limits -o my-production-org | grep -i "DailyApiRequests"

# 2. Query recent asynchronous job status to investigate failed or stalled migrations
# This SOQL checks recent batch jobs for errors and processing progress
sf data query -q "SELECT Id, Status, JobType, MethodName, JobItemsProcessed, TotalJobItems, NumberOfErrors, ExtendedStatus FROM AsyncApexJob WHERE JobType = 'BatchApex' ORDER BY CreatedDate DESC LIMIT 10" -o my-production-org

# 3. Deploy a temporary metadata bypass to disable triggers
# (assumes a predefined Custom Metadata bypass record; a hierarchical Custom
# Setting bypass would instead be toggled via Setup or `sf data update record`)
sf project deploy start -m CustomMetadata:Automation_Bypass.Migration_Profile -o my-production-org

# 4. Run a Bulk V2 data load via CLI with specified wait times to monitor timeouts
sf data upsert bulk --sobject Account --file ./sorted_accounts_migration.csv --external-id Ext_Account_ID__c --wait 60 -o my-production-org

Error Medic Editorial

The Error Medic Editorial consists of senior DevOps engineers and Salesforce Architects specializing in enterprise platform stability, large-scale data architecture, and high-availability system administration.
