Error Medic

NFS Permission Denied: Complete Troubleshooting Guide for Linux

Fix NFS permission denied, connection refused, and slow NFS performance fast. Step-by-step commands for UID mismatch, exports config, firewall, SELinux, and idmapd fixes.

Key Takeaways
  • NFS 'Permission denied' most commonly results from UID/GID mismatches between client and server, incorrect /etc/exports options (root_squash, all_squash, or a space before the parenthesis), or SELinux/AppArmor context violations — all of which produce identical error messages despite different root causes.
  • 'NFS connection refused' and mount hangs indicate that nfs-server or rpcbind services are not running, or that firewall rules are blocking required ports: 2049 (NFS), 111 (RPC portmapper), and 20048 (mountd for NFSv3).
  • NFSv4 identity mapping failures cause all files to appear owned by 'nobody' — ensure /etc/idmapd.conf has the exact same Domain= value on both client and server, then restart nfs-idmapd and flush the cache with nfsidmap -c.
  • Slow NFS throughput is almost always caused by undersized rsize/wsize mount options (defaults as low as 4096 bytes on older kernels are far too small for modern networks) — increase to 1048576 and add noatime to eliminate access-time write overhead.
  • Quick triage sequence: run 'exportfs -v' on the server, 'rpcinfo -p SERVER_IP' from the client, compare 'id USERNAME' output on both hosts, and check 'ausearch -m avc -ts recent | grep nfs' for silent SELinux denials.
Fix Approaches Compared
Method | When to Use | Time | Risk
Edit /etc/exports + exportfs -ra | Wrong client permissions, missing no_root_squash, space-before-paren typo, or subnet mismatch | 2 min | Low — live reload, no service restart
Synchronize UIDs via LDAP or FreeIPA | Multiple hosts with divergent UID assignments for the same username | 30-120 min | Medium — requires directory service setup or usermod
Fix /etc/idmapd.conf Domain= value | NFSv4 files owned by nobody, idmapd logs domain mismatch | 5 min | Low — service restart only
Open firewall ports 2049/111/20048 | Connection refused on mount, rpcinfo fails from client | 3 min | Low — standard NFS service ports
Apply SELinux nfs_t context + booleans | Denials visible only in audit.log despite correct exports and UIDs | 5 min | Low — restorecon is non-destructive
Increase rsize/wsize to 1048576 + noatime | NFS mounts and works but read/write throughput is unacceptably slow | 2 min | Low — mount option change, remount required
Force unmount (umount -l) + remount | Stale filehandles after server crash, processes stuck in D state | 5 min | Medium — interrupts active I/O on the mount

Understanding NFS Permission Denied

NFS (Network File System) permission errors are among the most disruptive Linux storage issues, appearing across development, staging, and production environments alike. The error Permission denied when mounting or accessing an NFS share can originate from at least six distinct root causes, which is why blind retries rarely succeed. This guide works through each layer systematically, from service availability down to SELinux policy.

Typical error messages you will encounter:

mount.nfs: access denied by server while mounting 192.168.1.10:/data
ls: cannot open directory '/mnt/nfs': Permission denied
cp: cannot create regular file '/mnt/nfs/file.txt': Permission denied
mount.nfs: Connection refused
nfs: server 192.168.1.10 not responding, still trying
Stale file handle

Step 1: Verify NFS Services Are Running

Before diagnosing permissions, confirm NFS services are active on both server and client. An NFS crash or failed systemd unit produces connection refused errors that look identical to firewall blocks.

# On the NFS server
systemctl status nfs-server rpcbind
showmount -e localhost

# On the NFS client
systemctl status rpcbind
rpcinfo -p SERVER_IP

If rpcinfo returns connection refused, the RPC portmapper is not running or a firewall is blocking port 111. If nfs-server shows a failed state, check journalctl -xeu nfs-server for the crash reason — common causes include a corrupted /etc/exports, duplicate export entries, or a missing export directory. Restart NFS cleanly:

systemctl restart rpcbind nfs-server
exportfs -ra   # Reload export table without full service restart

Step 2: Audit the Export Configuration

The /etc/exports file controls which clients access which shares and with what permissions. It is the single most common source of NFS permission denied errors.

cat /etc/exports
exportfs -v    # Show all active exports with resolved options

A correct export entry:

/data  192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)

Common mistakes that cause permission denied:

  • Space before parenthesis: /data 192.168.1.0/24 (rw) — the space splits the line into two exports: the named client gets the restrictive defaults (ro, root_squash), and (rw) becomes a separate wildcard entry granting read-write access to every host. Remove the space entirely.
  • root_squash (enabled by default): Maps the root user on the client to nfsnobody (UID 65534). Add no_root_squash only when the security implications are understood.
  • all_squash: Forces all client UIDs to the anonymous user. This breaks nearly every real workflow. Remove it unless you specifically require anonymous read-only access.
  • Wrong host specification: The client IP or hostname must exactly match the export definition. CIDR notation (192.168.1.0/24) is safest. Verify with showmount -e SERVER_IP from the client.
  • Read-only default: If ro is set or inherited, write operations return permission denied even when the filesystem ACL allows writes.

After editing /etc/exports:

exportfs -ra    # Non-disruptive reload
exportfs -v     # Confirm the result
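
The space-before-parenthesis typo is easy to miss by eye. A minimal grep-based linter, sketched here as a shell function (the file path in the example is the standard location, but the argument is whatever file you want to check), flags any non-comment line where whitespace immediately precedes an opening parenthesis:

```shell
# check_exports — flag the space-before-parenthesis typo in an exports file.
# Any whitespace between the host spec and '(' turns the options into a
# separate wildcard entry, which is almost never intended.
check_exports() {
  if grep -nE '^[^#].*[[:space:]]\(' "$1"; then
    return 1    # offending lines were printed with line numbers
  fi
  echo "no space-before-paren entries found"
}

# Example: check_exports /etc/exports
```

The non-zero exit status on a bad file makes this easy to wire into a pre-deploy check before running exportfs -ra.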

Step 3: Diagnose UID and GID Mismatches

NFS passes numeric UID/GID values across the wire without translation (NFSv3) or uses string-based identity mapping (NFSv4). If user deploy has UID 1001 on the server but UID 1002 on the client, you will get permission denied even with perfectly correct export options, because the kernel enforces POSIX permissions based on numeric IDs.

# Check UID on the client
id deploy

# Check UID on the server
ssh SERVER_IP 'id deploy'

# Inspect numeric file ownership on the NFS mount
ls -ln /mnt/nfs/

If ls -ln shows a UID that does not match the user accessing the files, you have a mismatch. Fixes in order of preference:

  1. Centralized directory service (LDAP, FreeIPA, Active Directory via SSSD): Ensures consistent UIDs across all hosts automatically. Required for environments with more than two machines.
  2. Manual synchronization: Use usermod -u NEW_UID USERNAME on the client to align with the server UID, then find / -user OLD_UID -exec chown NEW_UID {} \; to fix local file ownership.
  3. Export-level squashing: Map all client access to a known server UID using anonuid and anongid:
/data  192.168.1.10(rw,all_squash,anonuid=1001,anongid=1001)
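
Running id on both hosts works for one user; for many users, the comparison can be scripted. A sketch, assuming you first copy the server's passwd file locally (e.g. scp SERVER_IP:/etc/passwd /tmp/server_passwd — the destination path is a placeholder):

```shell
# compare_uids — report usernames whose UID differs between two passwd files.
# First argument: the client's /etc/passwd; second: a copy of the server's.
compare_uids() {
  awk -F: '
    NR == FNR { uid[$1] = $3; next }    # load client UIDs keyed by username
    ($1 in uid) && uid[$1] != $3 {      # same username, different numeric UID
      printf "%s: client UID %s vs server UID %s\n", $1, uid[$1], $3
    }' "$1" "$2"
}

# Example: compare_uids /etc/passwd /tmp/server_passwd
```

No output means every shared username already has matching UIDs.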

Step 4: Fix NFSv4 Identity Mapping

NFSv4 uses user@domain string identities instead of raw UIDs. When the nfs-idmapd service is stopped or the domain is misconfigured, all file ownership resolves to nobody:nobody, and writes fail with permission denied.

# Check idmapd service status
systemctl status nfs-idmapd

# This value must be identical on server and client
grep Domain /etc/idmapd.conf

# After correcting Domain=
systemctl restart nfs-idmapd
nfsidmap -c    # Flush the identity map cache

# Force the client to re-read mappings
umount /mnt/nfs && mount /mnt/nfs

Both /etc/idmapd.conf files must contain the same Domain = yourdomain.com value. An empty or missing Domain field causes NFSv4 to fall back to a default that frequently mismatches between distributions, producing the nobody ownership symptom.
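
A small helper makes the domain comparison mechanical. This is a sketch that parses the idmapd.conf key=value format; the server-side fetch in the example assumes ssh access and uses a placeholder temp path:

```shell
# idmap_domain — print the Domain value from an idmapd.conf-style file,
# skipping commented-out lines and stripping surrounding whitespace.
idmap_domain() {
  awk -F= '/^[[:space:]]*Domain[[:space:]]*=/ {
    gsub(/[[:space:]]/, "", $2); print $2; exit
  }' "$1"
}

# Example:
#   ssh SERVER_IP cat /etc/idmapd.conf > /tmp/server_idmapd.conf
#   [ "$(idmap_domain /etc/idmapd.conf)" = "$(idmap_domain /tmp/server_idmapd.conf)" ] \
#     && echo "domains match" || echo "DOMAIN MISMATCH"
```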


Step 5: Open Firewall Ports

NFS requires several ports. A firewall blocking any one of them produces connection refused on mount or silent timeouts with nfs: server not responding.

Required ports:

  • 2049/tcp+udp — NFS daemon (all versions)
  • 111/tcp+udp — RPC portmapper (required for NFSv3; NFSv4 clients can mount with only 2049 open)
  • 20048/tcp+udp — mountd (NFSv3 only; NFSv4 only requires 2049)
# firewalld (RHEL, CentOS, Fedora, Rocky Linux)
firewall-cmd --permanent --add-service=nfs
firewall-cmd --permanent --add-service=rpc-bind
firewall-cmd --permanent --add-service=mountd
firewall-cmd --reload

# iptables
iptables -I INPUT -p tcp --dport 2049 -j ACCEPT
iptables -I INPUT -p tcp --dport 111 -j ACCEPT
iptables -I INPUT -p tcp --dport 20048 -j ACCEPT
iptables-save > /etc/iptables/rules.v4

# Verify from client
nmap -sV -p 111,2049,20048 SERVER_IP
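
Where nmap is unavailable, bash's built-in /dev/tcp pseudo-device can probe the same ports. A sketch with a 2-second timeout per port (the host argument is a placeholder):

```shell
# nfs_port_check — test TCP reachability of the standard NFS ports on a
# host using bash's /dev/tcp redirection instead of nmap.
nfs_port_check() {
  local host=$1 port
  for port in 111 2049 20048; do
    if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
      echo "$port/tcp open"
    else
      echo "$port/tcp closed or filtered"
    fi
  done
}

# Example: nfs_port_check 192.168.1.10
```

Note this only exercises TCP; checking the UDP side of 111 and 2049 still requires nmap -sU or nc -u.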

Step 6: Resolve SELinux and AppArmor Denials

SELinux denials are completely silent at the application level — the export configuration looks correct, UIDs match, but writes still fail. The denial appears only in the audit log, making this a common source of hours-long debugging sessions.

# Search for recent NFS SELinux denials
ausearch -m avc -ts recent | grep nfs

# Check the security context of the export directory
ls -Z /data/

# Set the correct NFS context
chcon -Rt nfs_t /data/
semanage fcontext -a -t nfs_t '/data(/.*)?'
restorecon -Rv /data/

# Allow NFS home directory access if needed
setsebool -P use_nfs_home_dirs 1

For AppArmor on Debian and Ubuntu systems:

aa-status
journalctl | grep apparmor | grep nfs
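
Before chasing contexts, it helps to confirm a mandatory access control system is actually active. A sketch that reports enforcement status and degrades gracefully on hosts without the tools installed:

```shell
# lsm_status — report whether SELinux (and, if present, AppArmor) is in a
# state where it could be blocking NFS access on this host.
lsm_status() {
  if command -v getenforce >/dev/null 2>&1; then
    echo "SELinux: $(getenforce)"          # Enforcing / Permissive / Disabled
  else
    echo "SELinux: not installed"
  fi
  if command -v aa-enabled >/dev/null 2>&1; then
    echo "AppArmor: $(aa-enabled 2>/dev/null || echo unknown)"
  fi
}
```

If SELinux reports Enforcing, a temporary setenforce 0 followed by a retry of the failing operation confirms or rules out policy as the cause; run setenforce 1 again afterwards.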

Step 7: Fix Slow NFS Performance

If NFS is functional but slow, increasing read/write block sizes is the highest-impact fix. Older kernel defaults of 4096 bytes were appropriate for 100 Mbit networks; modern gigabit and 10GbE environments need much larger values. Current kernels negotiate larger sizes automatically, but conservative server or mount settings can still cap them, so verify rather than assume.

# Check current mount options
mount | grep nfs

# Benchmark current write throughput
dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=512 oflag=direct conv=fdatasync

# Remount with optimized options
umount /mnt/nfs
mount -t nfs -o rw,hard,intr,rsize=1048576,wsize=1048576,noatime,nfsvers=4 192.168.1.10:/data /mnt/nfs

Persist optimized options in /etc/fstab:

192.168.1.10:/data  /mnt/nfs  nfs  rw,hard,intr,rsize=1048576,wsize=1048576,noatime,nfsvers=4  0  0

Key options: rsize=1048576,wsize=1048576 uses 1 MB I/O blocks (up from the small historical defaults); noatime eliminates a write I/O operation on every file read; hard retries indefinitely instead of returning I/O errors to applications; intr is accepted for compatibility but has been ignored by kernels since 2.6.25; nfsvers=4 delivers better throughput than NFSv3 for most workloads due to compound RPC operations.
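
To confirm the requested sizes actually took effect, read the live values back from /proc/mounts, which reflects the kernel's post-negotiation view. A sketch (the optional file argument exists only so the parser can be exercised against a sample):

```shell
# nfs_block_sizes — print the rsize/wsize in effect for each NFS mount,
# as recorded in /proc/mounts (the kernel's view after negotiation).
nfs_block_sizes() {
  awk '$3 ~ /^nfs/ {
    n = split($4, opts, ",")
    for (i = 1; i <= n; i++)
      if (opts[i] ~ /^(rsize|wsize)=/)
        printf "%s %s\n", $2, opts[i]
  }' "${1:-/proc/mounts}"
}
```

If the printed values are smaller than what you mounted with, the server side may have capped them; check the server's NFS configuration rather than the client.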

Also check for MTU mismatches when jumbo frames are enabled on only one side of the network path — this causes fragmentation that dramatically reduces NFS throughput:

ip link show eth0 | grep mtu
ping -M do -s 8972 SERVER_IP   # Test jumbo frame path

Step 8: Recover from NFS Crash and Stale Filehandles

After a server crash or network partition, clients accumulate stale filehandle errors. Processes attempting NFS I/O enter uninterruptible sleep (D state in ps) and cannot be killed with SIGKILL while the mount is held.

# Identify processes stuck on NFS
lsof | grep nfs
ps -eo pid,stat,comm | awk '$2 ~ /^D/'   # D = uninterruptible sleep

# Kill processes holding the stale mount before unmounting
fuser -km /mnt/nfs

# Force unmount; use -l (lazy) if -f fails
umount -f /mnt/nfs
umount -l /mnt/nfs

# On the server, reset the export table completely
exportfs -ua && exportfs -a

# Remount on the client
mount /mnt/nfs
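
After a crash, the remount often fails for a minute or two while the server's grace period runs. Rather than retrying by hand, a generic retry helper can poll until the mount succeeds; a sketch (the attempt count and mount point in the example are placeholders):

```shell
# retry_until — run a command until it succeeds or attempts are exhausted,
# sleeping one second between tries. Returns the final status.
retry_until() {
  local tries=$1; shift
  local i
  for ((i = 1; i <= tries; i++)); do
    "$@" && return 0
    sleep 1
  done
  return 1
}

# Example: retry_until 30 mount /mnt/nfs
```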

Complete Diagnostic Script
#!/usr/bin/env bash
# NFS Troubleshooting Diagnostic Script
# Usage: ./nfs-diag.sh <server-ip> [mount-point]
# Run on the NFS CLIENT. Optional tools: nmap, ausearch (audit), nfsstat.

SERVER_IP=${1:-192.168.1.10}
MOUNT_POINT=${2:-/mnt/nfs}

echo '=== [1] NFS Client Services ==='
systemctl is-active rpcbind 2>/dev/null || echo 'rpcbind not running'

echo '=== [2] RPC Port Registration on Server ==='
rpcinfo -p "$SERVER_IP" 2>&1 | head -20

echo '=== [3] Server NFS Exports ==='
showmount -e "$SERVER_IP" 2>&1

echo '=== [4] Active Exports with Options (run on server) ==='
# ssh $SERVER_IP 'exportfs -v'

echo '=== [5] Current NFS Mount Options on Client ==='
mount | grep nfs

echo '=== [6] Numeric File Ownership on NFS Mount ==='
ls -ln "$MOUNT_POINT" 2>/dev/null | head -20

echo '=== [7] Current User UID and GID ==='
id

echo '=== [8] idmapd Domain — must be identical on client and server ==='
grep -i Domain /etc/idmapd.conf 2>/dev/null || echo 'idmapd.conf not found or Domain not set'

echo '=== [9] SELinux NFS Denials (recent) ==='
ausearch -m avc -ts recent 2>/dev/null | grep -i nfs | tail -20

echo '=== [10] Firewall Port Scan: NFS ports 111, 2049, 20048 ==='
nmap -sV -p 111,2049,20048 "$SERVER_IP" 2>/dev/null | grep -E 'PORT|open|closed|filtered'

echo '=== [11] NFS Mount Statistics ==='
nfsstat -m 2>/dev/null | head -30

echo '=== [12] Recent NFS Kernel Messages ==='
dmesg 2>/dev/null | tail -100 | grep -i nfs

echo '=== [13] Processes in Uninterruptible Sleep (stale NFS indicator) ==='
ps -eo pid,stat,comm | awk '$2 ~ /^D/'

echo '=== [14] Write Throughput Benchmark ==='
if mountpoint -q "$MOUNT_POINT" 2>/dev/null; then
  dd if=/dev/zero of="$MOUNT_POINT/.nfs_speedtest" bs=1M count=64 oflag=direct conv=fdatasync 2>&1
  rm -f "$MOUNT_POINT/.nfs_speedtest"
else
  echo 'Mount point not active — skipping benchmark'
fi

echo '=== [15] MTU Check (run on both client and server) ==='
ip link show | grep -i mtu

echo '=== [16] Optimized Mount Command ==='
echo "mount -t nfs -o rw,hard,intr,rsize=1048576,wsize=1048576,noatime,nfsvers=4 $SERVER_IP:/your/export $MOUNT_POINT"

echo '=== Diagnosis complete: review each section above for anomalies ==='

Error Medic Editorial

The Error Medic Editorial team consists of senior Linux engineers and site reliability engineers with extensive experience managing NFS, Ceph, and distributed storage infrastructure at scale across on-premises data centers and hybrid cloud environments. All troubleshooting guides are validated against real production incidents and cross-referenced with current kernel and distribution documentation.
