Error Medic

How to Fix "Job for service failed because the control process exited with error code" (systemd service failed)

Fix systemd service failures by checking journalctl, verifying permissions, analyzing unit files, and resolving common misconfigurations. Step-by-step guide.

Key Takeaways
  • Misconfigured unit files or incorrect ExecStart paths are a frequent cause (often surfacing as status=203/EXEC)
  • Insufficient file permissions, missing environment variables, and SELinux/AppArmor denials can all block startup
  • Port conflicts or missing dependencies can cause service timeouts
  • Always start by checking logs and exact exit codes via `journalctl -u <service>` and `systemctl status <service>`
Fix Approaches Compared
| Method | When to Use | Time | Risk |
| --- | --- | --- | --- |
| Check `journalctl -xe` & `systemctl status` | Always (first step) | 1 min | None |
| Verify file permissions | "Permission denied" or 203/EXEC errors | 5 mins | Low |
| Edit unit file & `daemon-reload` | Fixing ExecStart paths, env variables, or timeouts | 10 mins | Medium |
| Analyze SELinux/AppArmor logs | Suspected mandatory access control block despite correct permissions | 15 mins | High |

Understanding the Error

When a systemd service fails to start, the terminal typically outputs a generic error message such as Job for <service-name>.service failed because the control process exited with error code. See "systemctl status <service-name>.service" and "journalctl -xe" for details. This indicates that systemd attempted to execute the commands defined in the service's unit file, but the underlying process terminated abnormally.

Systemd is the initialization system and service manager for most modern Linux distributions. It tracks dependencies, resource limits, and environment variables. Because systemd runs services in a clean, restricted environment, applications that run perfectly when launched from your user shell might fail when started as a systemd service due to missing environment variables, differing working directories, or restrictive permissions.
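One way to see how sparse that environment is: `env -i` launches a command with a cleared environment, which roughly approximates what a service starts with (systemd does inject a few variables of its own, such as a minimal PATH and INVOCATION_ID, so this is only an approximation):

```shell
# Count the variables your login shell exports...
env | wc -l
# ...versus a cleared environment, similar to what a systemd service
# sees. The shell itself re-adds only a handful (e.g. PWD, SHLVL).
env -i sh -c 'env' | wc -l
```

If your application reads a variable that appears in the first list but not the second, that is a likely reason it runs in your terminal but fails under systemd.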

Step 1: Diagnose the Failure

The first step in resolving a systemd service failure is gathering the exact logs and exit codes. Do not blindly restart the service without understanding why it crashed.

Check Service Status

Run the status command to get a high-level overview of the failure:

systemctl status <service-name>

Look for the Active: line, which will likely say failed (Result: exit-code). More importantly, look at the Process: line for the status= code. Common systemd exit statuses include:

  • status=1/FAILURE: The application itself encountered an error and exited with a generic error code.
  • status=127/n/a: Command not found. Check your ExecStart path.
  • status=203/EXEC: Systemd could not execute the binary. This usually means the path is wrong, the file is not executable (chmod +x), or the script has a bad shebang (e.g., #!/bin/bash is missing or points to a non-existent interpreter).
  • status=217/USER: The user specified in the User= directive does not exist or systemd lacks permission to switch to it.
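Several of these codes map directly to unit-file directives. The minimal unit below (paths, description, and user name are hypothetical placeholders) annotates which line each status usually points back to:

```ini
[Unit]
Description=Example app (illustrative unit; all paths are placeholders)

[Service]
# A wrong or non-executable path here -> status=203/EXEC;
# a bare command name not found on systemd's PATH -> status=127
ExecStart=/usr/bin/node /opt/myapp/app.js
# A user that does not exist here -> status=217/USER
User=appuser
WorkingDirectory=/opt/myapp

[Install]
WantedBy=multi-user.target
```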

Inspect System Logs

The journalctl utility provides granular logs for systemd units. To see the logs specifically for your failed service, use:

journalctl -u <service-name> -n 50 --no-pager

This command displays the last 50 log lines for the service. If the failure occurred during the boot process, add the -b flag to see logs from the current boot.
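When a unit has been failing for a while, the tell-tale lines are easy to miss in the scroll. A quick filter for the phrases systemd uses when a unit dies can help; the excerpt below is a fabricated sample for illustration, and in practice you would pipe `journalctl` output in directly:

```shell
# Filter journal output for the lines that explain a failure.
# This excerpt is a made-up sample; see the comment below for real use.
excerpt='May 01 10:00:01 host systemd[1]: Starting myapp.service...
May 01 10:00:01 host systemd[1]: myapp.service: Main process exited, code=exited, status=203/EXEC
May 01 10:00:01 host systemd[1]: myapp.service: Failed with result exit-code.'

printf '%s\n' "$excerpt" | grep -E 'status=|Failed'
# Real usage: journalctl -u myapp -n 200 --no-pager | grep -E 'status=|Failed'
```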

Step 2: Common Root Causes and Fixes

1. Incorrect Executable Path or Permissions (status=203/EXEC)

Systemd requires absolute paths for executables in the ExecStart directive. If you use ExecStart=node app.js, systemd will fail because it doesn't know where node is in its isolated environment.

Fix: Use the full path (e.g., ExecStart=/usr/bin/node /opt/myapp/app.js). Use which node or whereis node to find the correct path. Additionally, ensure the target file is executable: chmod +x /opt/myapp/app.js.
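A quick pre-flight check catches most 203/EXEC causes before systemd does. The paths below are stand-ins; substitute the interpreter and script from your own ExecStart line:

```shell
# Verify each piece of an ExecStart line exists and is executable.
# /usr/bin/env and /bin/sh stand in for e.g. /usr/bin/node and
# /opt/myapp/app.js — replace them with your own paths.
for f in /usr/bin/env /bin/sh; do
  if [ -x "$f" ]; then
    echo "ok: $f"
  else
    echo "PROBLEM: $f missing or not executable (likely 203/EXEC)"
  fi
done
```

For scripts, also check the shebang on the first line (`head -1 /opt/myapp/app.sh`) and confirm the interpreter it names actually exists.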

2. Environment Variable Discrepancies

When you run a command in your terminal, it inherits your user's environment variables (like $PATH, $HOME, $NODE_ENV). Systemd services do not inherit these by default.

Fix: Explicitly define required environment variables in your unit file under the [Service] section:

Environment="NODE_ENV=production"
Environment="PORT=8080"

Alternatively, load them from a file using EnvironmentFile=/etc/myapp/.env.
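An EnvironmentFile uses plain KEY=value lines — no `export` keyword, and no shell-style `$VAR` expansion. A hypothetical /etc/myapp/.env might look like:

```ini
# /etc/myapp/.env — loaded via EnvironmentFile= (illustrative values)
# Plain KEY=value pairs only: no `export`, no $VAR expansion.
NODE_ENV=production
PORT=8080
DATABASE_URL=postgres://localhost/myapp
```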

3. Permission Denied and SELinux/AppArmor

If your service needs to bind to a privileged port (below 1024) but runs as a non-root user, it will fail. Similarly, mandatory access control systems like SELinux or AppArmor might block the service from reading specific files or executing certain binaries, even if standard Linux file permissions (rwx) allow it.

Fix:

  • For ports: Use AmbientCapabilities=CAP_NET_BIND_SERVICE in the unit file.
  • For SELinux: Check audit logs with ausearch -m avc -ts recent. If SELinux is blocking it, you may need to adjust the file context using chcon or semanage fcontext.
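In context, the port fix is a one-line addition under [Service]; the fragment below is illustrative and the user name is a placeholder:

```ini
[Service]
User=appuser
# Lets this non-root service bind ports below 1024
# without running as root.
AmbientCapabilities=CAP_NET_BIND_SERVICE
```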

4. Service Timeouts

Sometimes, a service takes longer to initialize than systemd's default timeout (usually 90 seconds). This results in a Timeout was reached error.

Fix: Increase the timeout limit in the unit file:

TimeoutStartSec=300

Step 3: Apply Changes and Verify

Whenever you modify a .service file (usually located in /etc/systemd/system/ or /usr/lib/systemd/system/), you must tell systemd to reload its configuration before attempting to restart the service.

  1. Reload the daemon: sudo systemctl daemon-reload
  2. Start the service: sudo systemctl start <service-name>
  3. Verify the status: sudo systemctl status <service-name>
  4. If it runs successfully, ensure it starts on boot: sudo systemctl enable <service-name>

By systematically checking logs, verifying paths and permissions, and understanding the isolated systemd environment, you can quickly identify and resolve most service failures.

For quick reference, the diagnostic commands from this guide are collected below:
# Quick diagnostic commands for a failed systemd service

SERVICE_NAME="your-service-name"

# 1. Check the general status and exit code
sudo systemctl status $SERVICE_NAME

# 2. View the last 50 lines of logs for the service without truncation
sudo journalctl -u $SERVICE_NAME -n 50 --no-pager

# 3. Reload systemd daemon after modifying a unit file in /etc/systemd/system/
sudo systemctl daemon-reload

# 4. Restart and check the status again
sudo systemctl restart $SERVICE_NAME
sudo systemctl status $SERVICE_NAME

Error Medic Editorial

The Error Medic Editorial team consists of senior Linux system administrators, DevOps engineers, and SREs dedicated to providing accurate, actionable solutions for complex infrastructure issues.
