
Good afternoon, readers and guests. There was a very long break between posts, but I'm back in action). In today's article I will look at how the NFS protocol operates and how to set up an NFS server and an NFS client on Linux.

Introduction to NFS

NFS (Network File System) is, in my opinion, the ideal solution for a local network where fast data exchange is needed (faster than SAMBA and less resource-intensive than encrypted remote file systems such as sshfs or SFTP) and the security of the transmitted information is not a priority. The NFS protocol lets you mount remote file systems over the network into the local directory tree, as if they were mounted disk file systems. This allows local applications to work with a remote file system as if it were local. But you need to be careful (!) when configuring NFS, because with certain configurations it is possible to hang the client's operating system waiting for endless I/O. The NFS protocol is based on the RPC protocol, which is still beyond my understanding)) so the material in the article will be a little vague... Before you can use NFS, whether as a server or a client, you must make sure that your kernel supports the NFS file system. You can check this by looking for the corresponding lines in the file /proc/filesystems:

ARCHIV ~ # grep nfs /proc/filesystems
nodev   nfs
nodev   nfs4
nodev   nfsd

If the specified lines do not appear in /proc/filesystems, you need to install the packages described below. This will most likely also pull in the dependent kernel modules that support the required file systems. If NFS support still does not appear in this file after installing the packages, you will need to enable this function in the kernel.
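The check above can be wrapped in a small script that reports whether client and server support is present. This is a sketch; the FS_LIST variable is my addition so the function can be pointed at a sample file instead of the real /proc/filesystems:

```shell
#!/bin/sh
# Check whether the running kernel knows about the NFS file systems.
# FS_LIST defaults to /proc/filesystems; override it to test against a sample.
FS_LIST="${FS_LIST:-/proc/filesystems}"

check_nfs_support() {
    if grep -qw nfs "$FS_LIST"; then
        echo "NFS client support: present"
    else
        echo "NFS client support: missing (install nfs-common / nfs-utils)"
    fi
    if grep -qw nfsd "$FS_LIST"; then
        echo "NFS server support: present"
    else
        echo "NFS server support: missing (install nfs-kernel-server / nfs-utils)"
    fi
}

check_nfs_support
```

On a host with a working NFS server both lines report "present"; the grep -w flag makes sure "nfs" does not accidentally match "nfs4".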

History of the Network File System

The NFS protocol was developed by Sun Microsystems and has gone through 4 versions. NFSv1, developed in 1989, was experimental and ran over UDP. NFSv2 was released in the same year, 1989; it is described in RFC 1094, also ran over UDP, and allowed no more than 2 GB of a file to be read. NFSv3 was finalized in 1995 and is described in RFC 1813. The main innovations of the third version were support for large files and support for the TCP protocol and large TCP packets, which significantly accelerated the technology. NFSv4 was finalized in 2000 and described in RFC 3010, then revised in 2003 and described in RFC 3530. The fourth version included performance improvements, support for various authentication mechanisms (in particular, Kerberos and LIPKEY via the RPCSEC GSS protocol) and access control lists (of both POSIX and Windows types). NFS version 4.1 was approved by the IESG in 2010 and received the number RFC 5661. An important innovation in version 4.1 is the specification of pNFS (Parallel NFS), a mechanism for parallel NFS client access to data on multiple distributed NFS servers. The presence of such a mechanism in the network file system standard will help build distributed "cloud" storage and information systems.

NFS server

Since NFS is a network file system, a working network configuration is necessary first. (You can also read the article.) Next, you need to install the appropriate package: on Debian these are the nfs-kernel-server and nfs-common packages, in RedHat it is the nfs-utils package. And also, you need to enable the daemon at the required OS runlevels (the command in RedHat is /sbin/chkconfig nfs on, in Debian /usr/sbin/update-rc.d nfs-kernel-server defaults).

Installed packages in Debian are launched in the following order:

ARCHIV ~ # ls -la /etc/rc2.d/ | grep nfs
lrwxrwxrwx 1 root root 20 Oct 18 15:02 S15nfs-common -> ../init.d/nfs-common
lrwxrwxrwx 1 root root 27 Oct 22 01:23 S16nfs-kernel-server -> ../init.d/nfs-kernel-server

That is, nfs-common starts first, then the server itself, nfs-kernel-server. In RedHat the situation is similar, with the only exception that the first script is called nfslock and the server simply nfs. About nfs-common, the Debian website tells us verbatim: shared files for the NFS client and server; this package must be installed on any machine that will operate as an NFS client or server. The package includes the programs lockd, statd, showmount, nfsstat, gssd and idmapd. Looking at the startup script /etc/init.d/nfs-common, you can trace the following sequence of work: the script checks for the executable binary /sbin/rpc.statd; checks the files /etc/default/nfs-common, /etc/fstab and /etc/exports for parameters that require running the idmapd and gssd daemons; starts the /sbin/rpc.statd daemon; then, before launching /usr/sbin/rpc.idmapd and /usr/sbin/rpc.gssd, checks for the presence of those executable binaries; then, for the /usr/sbin/rpc.idmapd daemon, checks for the sunrpc, nfs and nfsd kernel modules, as well as support for the rpc_pipefs file system in the kernel (that is, its presence in /proc/filesystems); if everything succeeds, it starts /usr/sbin/rpc.idmapd. Additionally, for the /usr/sbin/rpc.gssd daemon, it checks for the rpcsec_gss_krb5 kernel module and then starts the daemon.

If you look at the NFS server startup script on Debian (/etc/init.d/nfs-kernel-server), you can trace the following sequence: at startup, the script checks for the existence of /etc/exports, for the presence of nfsd, and for NFS file system support in the kernel (that is, in /proc/filesystems); if everything is in place, it starts the /usr/sbin/rpc.nfsd daemon; then it checks whether the NEED_SVCGSSD parameter is set (in the server settings file /etc/default/nfs-kernel-server) and, if it is, starts the /usr/sbin/rpc.svcgssd daemon; last of all, it launches /usr/sbin/rpc.mountd. From this script it is clear that NFS server operation consists of the daemons rpc.nfsd and rpc.mountd, plus the rpc.svcgssd daemon if Kerberos authentication is used. In RedHat, the rpc.rquotad and nfslogd daemons also run (for some reason I found no information in Debian about these daemons or the reasons for their absence; apparently they were removed...).

From this it becomes clear that the Network File System server consists of the processes (read: daemons) listed above - chiefly rpc.nfsd and rpc.mountd - located in the /sbin and /usr/sbin directories.

In NFSv4, when using Kerberos, additional daemons are started:

  • rpc.gssd - the NFSv4 daemon that provides authentication methods via GSS-API (Kerberos authentication). Runs on both client and server.
  • rpc.svcgssd - the NFSv4 server daemon that provides server-side client authentication.

portmap and RPC protocol (Sun RPC)

In addition to the above packages, NFSv2 and v3 require the additional package portmap (in newer distributions replaced by and renamed to rpcbind) to work correctly. This package is usually installed automatically with NFS as a dependency and implements the RPC server, that is, it is responsible for the dynamic assignment of ports for services registered with the RPC server. Literally, according to the documentation, it is a server that converts RPC (Remote Procedure Call) program numbers into TCP/UDP port numbers. portmap operates on several entities: RPC calls or requests, TCP/UDP ports, the protocol version (tcp or udp), program numbers, and program versions. The portmap daemon is launched by the /etc/init.d/portmap script before NFS services start.

In short, the job of an RPC (Remote Procedure Call) server is to process RPC calls (so-called RPC procedures) from local and remote processes. Using RPC calls, services register themselves with, or remove themselves from, the port mapper (aka portmap, aka portmapper, aka, in new versions, rpcbind), and clients use RPC calls to ask the port mapper for the information they need. User-friendly names of program services and their corresponding numbers are defined in the /etc/rpc file. As soon as a service has sent the corresponding request and registered itself with the port mapper on the RPC server, the RPC server records the TCP and UDP ports on which the service is listening and stores in the kernel the corresponding information: the running service's name, its unique number (in accordance with /etc/rpc), the protocol and port on which it runs, and its version; it then provides this information to clients on request. The port converter itself has program number 100000, version number 2, TCP port 111 and UDP port 111. Above, when listing the NFS server daemons, I indicated the main RPC program numbers. I've probably confused you a little with this paragraph, so here is the basic idea that should make things clear: the main function of the port mapper is to return, at the request of a client that has supplied an RPC program number and version, the port on which the requested program is running. Accordingly, if a client needs to reach an RPC service with a specific program number, it must first contact the portmap process on the server machine and determine the port number for communicating with that service.

The operation of an RPC server can be represented by the following steps:

  1. The port converter must start first, usually when the system boots. It creates a TCP endpoint and opens TCP port 111, and also creates a UDP endpoint that waits for datagrams to arrive on UDP port 111.
  2. At startup, a program running through an RPC server creates a TCP endpoint and a UDP endpoint for each supported version of the program. (An RPC server can support multiple versions; the client specifies the required version when making the RPC call.) A dynamically assigned port number is allocated to each version of the service. The server registers each program, version, protocol, and port number by making the appropriate RPC call.
  3. When an RPC client program needs to obtain this information, it calls the port resolver routine to obtain the dynamically assigned port number for the given program, version, and protocol.
  4. In response to this request, the server returns a port number.
  5. The client sends an RPC call message to the port number obtained in step 4. If UDP is used, the client simply sends a UDP datagram containing the RPC call message to the UDP port on which the requested service is running; the service replies with a UDP datagram containing an RPC reply message. If TCP is used, the client actively opens a connection to the TCP port of the desired service and then sends the RPC call message over the established connection; the server responds with an RPC reply message on the same connection.

To obtain information from the RPC server, use the rpcinfo utility. With the -p host option, the program displays a list of all registered RPC programs on the given host. Without a host argument, it displays the services on localhost. Example:

ARCHIV ~ # rpcinfo -p
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  59451  status
    100024    1   tcp  60872  status
    100021    1   udp  44310  nlockmgr
    100021    3   udp  44310  nlockmgr
    100021    4   udp  44310  nlockmgr
    100021    1   tcp  44851  nlockmgr
    100021    3   tcp  44851  nlockmgr
    100021    4   tcp  44851  nlockmgr
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100005    1   udp  51306  mountd
    100005    1   tcp  41405  mountd
    100005    2   udp  51306  mountd
    100005    2   tcp  41405  mountd
    100005    3   udp  51306  mountd
    100005    3   tcp  41405  mountd

As you can see, rpcinfo displays (in columns, left to right) the registered program number, version, protocol, port and name. Using rpcinfo you can also deregister a program or get information about a specific RPC service (more options in man rpcinfo). As you can see, the portmapper daemon version 2 is registered on udp and tcp ports, rpc.statd version 1 on udp and tcp ports, the NFS lock manager versions 1, 3 and 4, the nfs server daemon versions 2, 3 and 4, and the mount daemon versions 1, 2 and 3.
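The lookup a client performs against the portmapper (steps 3-4 above) can be sketched by parsing rpcinfo-style output: given a program number, version and protocol, find the registered port. The sample listing below is taken from the output shown in this article; on a live system you would pipe the real `rpcinfo -p` output instead:

```shell
#!/bin/sh
# Sketch: resolve the port of an RPC program the way a client would ask
# the port mapper, by filtering an rpcinfo -p style listing.

lookup_rpc_port() {   # usage: ... | lookup_rpc_port <program> <version> <proto>
    awk -v prog="$1" -v vers="$2" -v proto="$3" \
        '$1 == prog && $2 == vers && $3 == proto { print $4; exit }'
}

# A few lines of the listing above, as sample input:
rpcinfo_sample='100000 2 tcp 111 portmapper
100003 2 tcp 2049 nfs
100003 3 tcp 2049 nfs
100005 1 udp 51306 mountd'

# On which TCP port is NFS (program 100003) version 3 reachable?
printf '%s\n' "$rpcinfo_sample" | lookup_rpc_port 100003 3 tcp   # prints 2049
```

On a real host: `rpcinfo -p server | lookup_rpc_port 100005 1 udp` would give you the mountd port.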

The NFS server (more precisely, the rpc.nfsd daemon) receives requests from the client in the form of UDP datagrams on port 2049. Although NFS works with a port resolver, which would allow the server to use dynamically assigned ports, UDP port 2049 is hard-coded for NFS in most implementations.

Network File System Protocol Operation

Mounting remote NFS

The process of mounting a remote NFS file system can be represented by the following diagram:

Description of the NFS protocol when mounting a remote directory:

  1. An RPC server is launched on both the server and the client (usually at boot), serviced by the portmapper process and registered on ports tcp/111 and udp/111.
  2. Services are launched (rpc.nfsd, rpc.statd, etc.) that register with the RPC server and are registered on arbitrary network ports (unless a static port is specified in the service settings).
  3. The mount command on the client computer sends the kernel a request to mount a network directory, specifying the file system type, the host and the directory itself; the kernel forms and sends an RPC request to the portmap process on the NFS server on port udp/111 (unless the option to work via tcp is set on the client).
  4. The NFS server kernel queries RPC for the presence of the rpc.mountd daemon and returns to the client kernel the network port on which the daemon is running.
  5. mount sends an RPC request to the port on which rpc.mountd is running. The NFS server can now validate the client based on its IP address and port number to decide whether the client can mount the specified file system.
  6. The mount daemon returns a description of the requested file system.
  7. The client's mount command issues the mount system call to associate the file handle obtained in step 6 with the local mount point on the client host. The file handle is stored in the NFS client code, and from now on any access by user processes to files on the server's file system will use the file handle as a starting point.

Communication between client and NFS server

A typical access to a remote file system can be described as follows:

Description of the process of accessing a file located on an NFS server:

  1. The client (user process) does not care whether it is accessing a local file or an NFS file. The kernel handles interaction with the hardware through kernel modules or built-in system calls.
  2. Kernel module kernel/fs/nfs/nfs.ko, which performs the functions of an NFS client, sends RPC requests to the NFS server via the TCP/IP module. NFS typically uses UDP, however newer implementations may use TCP.
  3. The NFS server receives requests from the client as UDP datagrams on port 2049. Although NFS can work with a port resolver, which allows the server to use dynamically assigned ports, UDP port 2049 is hard-coded to NFS in most implementations.
  4. When the NFS server receives a request from a client, it is passed to a local file access routine, which provides access to the local disk on the server.
  5. The result of the disk access is returned to the client.

Setting up an NFS server

Setting up the server generally consists of specifying, in the /etc/exports file, the local directories that remote systems are allowed to mount. This action is called exporting a directory hierarchy. The main sources of information about exported directories are the following files:

  • /etc/exports- the main configuration file that stores the configuration of the exported directories. Used for starting NFS and the exportfs utility.
  • /var/lib/nfs/xtab- contains a list of directories mounted by remote clients. Used by the rpc.mountd daemon when a client attempts to mount a hierarchy (a mount entry is created).
  • /var/lib/nfs/etab- a list of directories that can be mounted by remote systems, indicating all the parameters of the exported directories.
  • /var/lib/nfs/rmtab- a list of directories currently mounted by remote clients (that is, not yet unexported).
  • /proc/fs/nfsd- a special file system (kernel 2.6) for managing the NFS server.
    • exports- a list of active exported hierarchies and clients to whom they were exported, as well as parameters. The kernel gets this information from /var/lib/nfs/xtab.
    • threads- contains the number of threads (can also be changed)
    • using filehandle you can get a pointer to a file
    • and etc...
  • /proc/net/rpc- contains “raw” statistics, which can be obtained using nfsstat, as well as various caches.
  • /var/run/portmap_mapping- information about services registered in RPC

Note: In general, on the Internet there are a lot of interpretations and formulations of the purpose of the xtab, etab, rmtab files, I don’t know who to believe. Even on http://nfs.sourceforge.net/ the interpretation is not clear.

Setting up the /etc/exports file

In the simplest case, the /etc/exports file is the only file that requires editing to configure the NFS server. This file controls the following aspects:

  • Which clients can access files on the server
  • Which directory hierarchies on the server each client can access
  • How client user names are mapped to local user names

Each line of the exports file has the following format:

export_point client1(options) [client2(options) ...]

where export_point is the absolute path of the exported directory hierarchy; client1..n are the names or IP addresses of one or more clients, separated by spaces, that are allowed to mount export_point; and options describe the mounting rules for the client specified immediately before them (note that there must be no space between the client name and the opening parenthesis of its options).

Here's a typical exports file configuration example:

ARCHIV ~ # cat /etc/exports
/archiv1  files(rw,sync) 10.0.0.1(ro,sync) 10.0.230.1/24(ro,sync)

In this example, the hosts files and 10.0.0.1, and the subnet 10.0.230.1/24, are allowed access to the export point /archiv1: host files has read/write access, while host 10.0.0.1 and the subnet 10.0.230.1/24 have read-only access.
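The line format above (one export point, then a whitespace-separated list of client(options) entries) can be illustrated with a tiny parser. This is just a sketch to make the format concrete, not the real exportfs parser; the sample file it builds mirrors the example above:

```shell
#!/bin/sh
# Sketch: print each export point from an exports-style file together with
# the clients allowed to mount it, skipping comments and blank lines.
parse_exports() {
    awk '!/^[[:space:]]*(#|$)/ {
        printf "%s allows:", $1
        for (i = 2; i <= NF; i++) printf " %s", $i
        print ""
    }' "$1"
}

sample=$(mktemp)
cat > "$sample" <<'EOF'
# export_point  client(options) ...
/archiv1  files(rw,sync) 10.0.0.1(ro,sync) 10.0.230.1/24(ro,sync)
EOF

parse_exports "$sample"
rm -f "$sample"
```

For the sample file this prints one line: `/archiv1 allows: files(rw,sync) 10.0.0.1(ro,sync) 10.0.230.1/24(ro,sync)`.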

Host descriptions in /etc/exports are allowed in the following format:

  • Individual nodes are described by name, e.g. files or files.DOMAIN.local.
  • A domain mask is described in the following format: *.DOMAIN.local includes all nodes of the DOMAIN.local domain.
  • Subnets are specified as IP address/mask pairs. For example, 10.0.0.0/255.255.255.0 includes all nodes whose addresses begin with 10.0.0.
  • The name of a netgroup with access to the resource can be given as @myclients (when using an NIS server).

General options for exporting directory hierarchies

The exports file supports the following general options (the defaults on most systems are listed first, with the non-default alternatives in brackets):

  • auth_nlm (no_auth_nlm) or secure_locks (insecure_locks)- specifies that the server should require authentication of lock requests (using the NFS Lock Manager protocol).
  • nohide (hide)- used if the server exports two directory hierarchies, one of which is nested (mounted) inside the other. The client must explicitly mount the second (child) hierarchy, otherwise its mount point will appear as an empty directory. The nohide option makes the second hierarchy visible without an explicit mount. (Note: I couldn't get this option to work...)
  • ro (rw)- allows only read requests (or, with rw, also write requests). (Ultimately, whether a file can be read or written is determined by file system permissions; the server cannot distinguish a request to read a file from a request to execute it, so it allows reading if the user has read or execute rights.)
  • secure (insecure)- requires NFS requests to come from secure ports (below 1024), so that a program without root privileges cannot mount the directory hierarchy.
  • subtree_check (no_subtree_check)- if a subdirectory of a file system is exported, rather than the whole file system, the server checks whether the requested file is located in the exported subdirectory. Disabling this check reduces security but increases data transfer speed.
  • sync (async)- specifies that the server should respond to requests only after the changes made by those requests have been written to disk. The async option tells the server not to wait for information to be written to disk, which improves performance but reduces reliability, because in the event of a connection break or equipment failure information may be lost.
  • wdelay (no_wdelay)- instructs the server to delay executing write requests if a subsequent write request is pending, writing data in larger blocks. This improves performance when sending large queues of write commands. no_wdelay specifies not to delay execution of write commands, which can be useful if the server receives a large number of unrelated commands.
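Putting several of these options together, a hypothetical /etc/exports entry that exports /archiv1 to a subnet read/write, synchronously, without subtree checking and with root squashing (see the mapping options below) might look like this:

```
/archiv1  10.0.0.0/255.255.255.0(rw,sync,no_subtree_check,root_squash)
```

Any defaults you leave out will appear explicitly in /var/lib/nfs/etab after the server is restarted.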

Exporting symbolic links and device files. When exporting a directory hierarchy containing symbolic links, the link target must be accessible to the client (remote) system: that is, the target must either lie within the exported hierarchy itself or exist at the same path on the client.

A device file refers to a kernel interface. When you export a device file, that interface is exported with it. If the client system does not have a device of the same type, the exported device will not work. On the client system, when mounting NFS objects, you can use the nodev option so that device files in the mounted directories are not used.

The default options may vary between systems and can be found in /var/lib/nfs/etab. After describing the exported directory in /etc/exports and restarting the NFS server, all missing options (read: default options) will be reflected in the /var/lib/nfs/etab file.

User ID mapping options

For a better understanding of what follows, I would advise you to read the article. Each Linux user has their own UID and primary GID, which are described in the files /etc/passwd and /etc/group. The NFS server assumes that the remote host's operating system has authenticated the users and assigned them the correct UID and GID. Exporting files gives users of the client system the same access to those files as if they were logged in directly on the server. Accordingly, when an NFS client sends a request to the server, the server uses the UID and GID to identify the user as a local user, which can lead to some problems:

  • a user may not have the same identifiers on both systems and may therefore be able to access another user's files;
  • because the root user's ID is always 0, this user is mapped to a local user depending on the specified options.

The following options set the rules for mapping remote users to local ones:

  • root_squash (no_root_squash)- with root_squash specified, requests from the root user are mapped to the anonymous uid/gid, or to the user specified in the anonuid/anongid parameters.
  • no_all_squash (all_squash)- does not change the UID/GID of the connecting user. The all_squash option maps ALL users (not just root) to the anonymous user, or to the one specified in the anonuid/anongid parameters.
  • anonuid=UID and anongid=GID - explicitly set the UID/GID for the anonymous user.
  • map_static=/etc/file_maps_users - specifies a file in which the mapping of remote UIDs/GIDs to local UIDs/GIDs can be set.

Example of using a user mapping file:

ARCHIV ~ # cat /etc/file_maps_users
# User mapping
# remote     local    comment
uid 0-50     1002     # mapping users with remote UID 0-50 to local UID 1002
gid 0-50     1002     # mapping users with remote GID 0-50 to local GID 1002

NFS Server Management

The NFS server is managed using the following utilities:

  • nfsstat
  • showmount
  • exportfs

nfsstat: NFS and RPC statistics

The nfsstat utility allows you to view statistics of RPC and NFS servers. The command options can be found in man nfsstat.

showmount: Display NFS status information

The showmount utility queries the rpc.mountd daemon on a remote host about mounted file systems. By default, a sorted list of clients is returned. Options:

  • --all- a list of clients and mount points is displayed indicating where the client mounted the directory. This information may not be reliable.
  • --directories- a list of mount points is displayed
  • --exports- a list of exported file systems is displayed from the point of view of nfsd

When you run showmount without arguments, the console shows the systems that are allowed to mount the local directories. For example, the ARCHIV host gives us a list of exported directories together with the IP addresses of hosts that are allowed to mount them:

FILES ~ # showmount --exports archiv
Export list for archiv:
/archiv-big   10.0.0.2
/archiv-small 10.0.0.2

If you specify the hostname/IP in the argument, information about this host will be displayed:

ARCHIV ~ # showmount files
clnt_create: RPC: Program not registered

(this message tells us that the NFS daemon is not running on the FILES host)
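A client can use showmount-style output to verify that a directory is actually exported to it before attempting a mount. This is a sketch that parses a sample "Export list" listing (the listing is the hypothetical one from this article); on a live system you would feed it the real `showmount --exports server` output:

```shell
#!/bin/sh
# Sketch: check whether a given directory appears in an export list.
is_exported() {   # usage: ... | is_exported <directory>
    awk -v d="$1" 'NR > 1 && $1 == d { found = 1 } END { exit !found }'
}

sample='Export list for archiv:
/archiv-big 10.0.0.2
/archiv-small 10.0.0.2'

if printf '%s\n' "$sample" | is_exported /archiv-small; then
    echo "/archiv-small is exported, safe to mount"
fi
```

The NR > 1 condition skips the "Export list for ..." header line.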

exportfs: manage exported directories

This command manages the exported directories listed in /etc/exports; more precisely, it does not serve them but synchronizes them with the /var/lib/nfs/xtab file, removing entries that no longer exist from xtab. exportfs is executed with the -r argument when the nfsd daemon starts. On 2.6 kernels the exportfs utility communicates with the rpc.mountd daemon through files in the /var/lib/nfs/ directory and does not talk to the kernel directly. Without parameters, it displays a list of currently exported file systems.

exportfs parameters:

  • [client:directory-name] - add or remove the specified file system for the specified client
  • -v - display more information
  • -r - re-export all directories (synchronize /etc/exports and /var/lib/nfs/xtab)
  • -u - remove from the list of exported
  • -a - add or remove all file systems
  • -o - options separated by commas (similar to the options used in /etc/exports; i.e. you can change the options of already mounted file systems)
  • -i - do not use /etc/exports when adding, only current command line options
  • -f - reset the list of exported systems in kernel 2.6;

NFS client

Before accessing a file on a remote file system, the client (the client OS) must mount it and receive a pointer to it from the server. An NFS mount can be done with the mount command or with one of the proliferating automatic mounters (amd, autofs, automount, supermount, superpupermount). The mounting process is well demonstrated in the illustration above.

On NFS clients there is no need to run any daemons; the client functions are performed by the kernel module kernel/fs/nfs/nfs.ko, which is used when mounting a remote file system. Exported directories from a server can be mounted on the client in the following ways:

  • manually using the mount command
  • automatically at boot, when mounting file systems described in /etc/fstab
  • automatically using the autofs daemon

I will not consider the third method, autofs, in this article, due to the volume of information involved. Perhaps there will be a separate description in a future article.

Mounting the Network File System with the mount command

An example of using the mount command is presented in the post. Here I will look at examples of the mount command for mounting an NFS file system:

FILES ~ # mount -t nfs archiv:/archiv-small /archivs/archiv-small
FILES ~ # mount -t nfs -o ro archiv:/archiv-big /archivs/archiv-big
FILES ~ # mount
.......
archiv:/archiv-small on /archivs/archiv-small type nfs (rw,addr=10.0.0.6)
archiv:/archiv-big on /archivs/archiv-big type nfs (ro,addr=10.0.0.6)

The first command mounts the exported directory /archiv-small on server archiv to the local mount point /archivs/archiv-small with default options (i.e. read/write). Although the mount command in the latest distributions can figure out the file system type even without it being specified, it is still desirable to give the -t nfs parameter. The second command mounts the exported directory /archiv-big on server archiv to the local directory /archivs/archiv-big read-only (ro). The mount command without parameters clearly shows us the mounting result. In addition to the read-only option (ro), other basic options can be specified when mounting NFS:

  • nosuid- This option prohibits executing programs from the mounted directory.
  • nodev(no device - not a device) - This option prohibits the use of character and block special files as devices.
  • lock (nolock)- Allows NFS locking (default). nolock disables NFS locking (does not start the lockd daemon) and is useful when working with older servers that do not support NFS locking.
  • mounthost=name- The name of the host on which the NFS mount daemon is running - mountd.
  • mountport=n - Port used by the mountd daemon.
  • port=n- port used to connect to the NFS server (default is 2049 if the rpc.nfsd daemon is not registered on the RPC server). If n=0 (default), then NFS queries the portmap on the server to determine the port.
  • rsize=n (read block size) - the number of bytes read at a time from the NFS server. Default: 4096.
  • wsize=n (write block size) - the number of bytes written at a time to the NFS server. Default: 4096.
  • tcp or udp- To mount NFS, use the TCP or UDP protocol, respectively.
  • bg- If you lose access to the server, try again in the background so as not to block the system boot process.
  • fg- If you lose access to the server, try again in priority mode. This option can block the system boot process by repeating mount attempts. For this reason, the fg parameter is used primarily for debugging.

Options affecting attribute caching on NFS mounts

File attributes, stored in inodes, such as modification time, size, hard link count, and owner, typically change infrequently for regular files and even less frequently for directories. Many programs, such as ls, access files read-only and do not change file attributes or content, but still waste system resources on expensive network operations. To avoid wasting resources, these attributes can be cached. The kernel uses a file's modification time to determine whether the cache is out of date, by comparing the modification time in the cache with the modification time of the file itself. The attribute cache is periodically refreshed according to the specified parameters:

  • ac (noac) (attribute cache- attribute caching) - Allows attribute caching (default). Although the noac option slows down the server, it avoids attribute staleness when multiple clients are actively writing information to a common hierarchy.
  • acdirmax=n (attribute cache directory file maximum- maximum attribute caching for a directory file) - The maximum number of seconds that NFS waits before updating directory attributes (default 60 sec.)
  • acdirmin=n (attribute cache directory file minimum- minimum attribute caching for a directory file) - Minimum number of seconds that NFS waits before updating directory attributes (default 30 sec.)
  • acregmax=n (attribute cache regular file maximum- attribute caching maximum for a regular file) - The maximum number of seconds that NFS waits before updating the attributes of a regular file (default 60 sec.)
  • acregmin=n (attribute cache regular file minimum- minimum attribute caching for a regular file) - Minimum number of seconds that NFS waits before updating the attributes of a regular file (default 3 seconds)
  • actimeo=n (attribute cache timeout- attribute caching timeout) - Replaces the values for all the above options. If actimeo is not specified, the above options take their default values.

NFS Error Handling Options

The following options control what NFS does when there is no response from the server or when I/O errors occur:

  • fg(bg) (foreground- foreground, background- background) - Attempts to mount a failed NFS in the foreground/background.
  • hard (soft)- with hard, when a large timeout is reached, a "server not responding" message is printed on the console and mount attempts continue. With the soft option, a large timeout instead reports an I/O error to the program that called the operation. (It is recommended not to use the soft option.)
  • nointr (intr) (no interrupt- do not interrupt) - Does not allow signals to interrupt file operations in a hard-mounted directory hierarchy when a large timeout is reached. intr- enables interruption.
  • retrans=n (retransmission value- retransmission value) - After n small timeouts, NFS generates a large timeout (default 3). A large timeout stops operations or prints a "server not responding" message to the console, depending on whether the hard/soft option is specified.
  • retry=n (retry value- retry value) - The number of minutes the NFS service will repeat mount operations before giving up (default 10000).
  • timeo=n (timeout value- timeout value) - The number of tenths of a second the NFS service waits before retransmitting in case of RPC or a small timeout (default 7). This value increases with each timeout up to a maximum of 60 seconds or until a large timeout occurs. If the network is busy, the server is slow, or the request is going through multiple routers or gateways, increasing this value may improve performance.

Automatic NFS mount at boot (description of file systems in /etc/fstab)
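As a sketch (using this article's example host and the error-handling options discussed above), an /etc/fstab entry for mounting the share at boot might look like this:

```
# /etc/fstab: mount archiv:/files at boot; bg retries the mount in the
# background so boot does not hang if the server is unavailable
archiv:/files   /mnt/archiv   nfs   rw,hard,intr,bg,timeo=20,retrans=5   0   0
```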

You can pick a suitable timeo value for a given payload size (the rsize/wsize values) using the ping command:

FILES ~ # ping -s 32768 archiv
PING archiv.DOMAIN.local (10.0.0.6) 32768(32796) bytes of data.
32776 bytes from archiv.domain.local (10.0.0.6): icmp_req=1 ttl=64 time=0.931 ms
32776 bytes from archiv.domain.local (10.0.0.6): icmp_req=2 ttl=64 time=0.958 ms
32776 bytes from archiv.domain.local (10.0.0.6): icmp_req=3 ttl=64 time=1.03 ms
32776 bytes from archiv.domain.local (10.0.0.6): icmp_req=4 ttl=64 time=1.00 ms
32776 bytes from archiv.domain.local (10.0.0.6): icmp_req=5 ttl=64 time=1.08 ms
^C
--- archiv.DOMAIN.local ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 0.931/1.002/1.083/0.061 ms

As you can see, when sending a packet of 32768 bytes (32 KB), its round trip from the client to the server and back hovers around 1 millisecond. If this time exceeds 200 ms, you should consider raising the timeo value so that it exceeds the measured round-trip time by three to four times. Accordingly, it is best to run this test during heavy network load.
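The rule of thumb above can be sketched as a small calculation. The 200 ms figure here is just a hypothetical worst-case round-trip time measured with ping; remember that timeo is given in tenths of a second:

```shell
# Derive a timeo value (in tenths of a second) as four times the
# measured round-trip time in milliseconds.
RTT_MS=200                      # worst case measured with: ping -s 32768 <server>
TIMEO=$(( RTT_MS * 4 / 100 ))   # 4 x RTT, converted from ms to tenths of a second
echo "suggested: mount -o timeo=$TIMEO ..."
```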

Launching NFS and setting up Firewall

The note was copied from the blog http://bog.pp.ru/work/NFS.html, for which many thanks!!!

Running the NFS server and the mount, lock, quota and status daemons on "correct" (fixed) ports, for firewall configuration

  • It is advisable to first unmount all resources on clients
  • stop and disable rpcidmapd from starting if you do not plan to use NFSv4: chkconfig --level 345 rpcidmapd off service rpcidmapd stop
  • if necessary, allow the portmap, nfs and nfslock services to start: chkconfig --levels 345 portmap/rpcbind on chkconfig --levels 345 nfs on chkconfig --levels 345 nfslock on
  • if necessary, stop the nfslock and nfs services, start portmap/rpcbind, and unload the modules: service nfslock stop; service nfs stop; service portmap start (or: service rpcbind start); umount /proc/fs/nfsd; service rpcidmapd stop; rmmod nfsd; service autofs stop (it must be started again later); rmmod nfs; rmmod nfs_acl; rmmod lockd
  • open ports in the firewall:
    • for RPC: UDP/111, TCP/111
    • for NFS: UDP/2049, TCP/2049
    • for rpc.statd: UDP/4000, TCP/4000
    • for lockd: UDP/4001, TCP/4001
    • for mountd: UDP/4002, TCP/4002
    • for rpc.rquota: UDP/4003, TCP/4003
  • for the rpc.nfsd server, add the line RPCNFSDARGS="--port 2049" to /etc/sysconfig/nfs
  • for the mount server, add the line MOUNTD_PORT=4002 to /etc/sysconfig/nfs
  • to configure rpc.rquota for new versions, you need to add the line RQUOTAD_PORT=4003 to /etc/sysconfig/nfs
  • to configure rpc.rquota on older versions (you must have the quota package version 3.08 or newer), add to /etc/services: rquotad 4003/tcp and rquotad 4003/udp
  • check that /etc/exports is correct
  • start the rpc.nfsd, mountd and rpc.rquota services (rpcsvcgssd and rpc.idmapd also start at this point, unless you removed them earlier): service nfsd start, or on newer versions service nfs start
  • for the blocking server for new systems, add the lines LOCKD_TCPPORT=4001 LOCKD_UDPPORT=4001 to /etc/sysconfig/nfs
  • for the lock server for older systems, add directly to /etc/modprobe[.conf]: options lockd nlm_udpport=4001 nlm_tcpport=4001
  • bind the rpc.statd status server to port 4000: add STATD_PORT=4000 to /etc/sysconfig/nfs (for older systems, run rpc.statd with the -p 4000 switch in /etc/init.d/nfslock)
  • start the lockd and rpc.statd services: service nfslock start
  • make sure that all ports are bound normally using "lsof -i -n -P" and "netstat -a -n" (some of the ports are used by kernel modules that lsof does not see)
  • if before the “rebuilding” the server was used by clients and they could not be unmounted, then you will have to restart the automatic mounting services on the clients (am-utils, autofs)
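As a sketch, the fixed ports chosen above could be opened with iptables rules like the following. The 10.0.0.0/24 client subnet is an assumption; substitute your own network. The loop only prints the rules so you can review them before applying them as root:

```shell
# Print iptables commands that open the fixed NFS-related ports
# (111 portmapper, 2049 nfsd, 4000 statd, 4001 lockd, 4002 mountd,
# 4003 rquotad) for both TCP and UDP.
RULES=$(for PORT in 111 2049 4000 4001 4002 4003; do
    for PROTO in tcp udp; do
        echo "iptables -A INPUT -s 10.0.0.0/24 -p $PROTO --dport $PORT -j ACCEPT"
    done
done)
printf '%s\n' "$RULES"
```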

Example NFS server and client configuration

Server configuration

If you want to make your shared NFS directory open and writable, you can use the all_squash option in combination with the anonuid and anongid options. For example, to map all requests to the user "nobody" in the group "nobody", you could do the following:

ARCHIV ~ # cat /etc/exports
# Read and write access for the client at 192.168.0.100, with all access mapped to uid 99, gid 99
/files 192.168.0.100(rw,sync,all_squash,anonuid=99,anongid=99)

This also means that, for the shared directory to be accessible, the user nobody (group nobody) must own it.
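A minimal sketch of setting that ownership and re-reading the export table follows (on some distributions the anonymous group is nogroup rather than nobody). The commands are only printed here, since running them requires root:

```shell
# Commands that give "nobody" ownership of the export and make the
# server re-read /etc/exports; printed rather than executed because
# they need root privileges.
SETUP="chown -R nobody:nobody /files
exportfs -ra"
printf '%s\n' "$SETUP"
```

exportfs -ra re-exports all directories from /etc/exports without restarting the NFS server.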

man mount
man exports
http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.prftungd/doc/prftungd/nfs_perf.htm - NFS performance from IBM.

Best regards, McSim!

In this material we will help you launch Need For Speed Rivals and avoid game crashes. With the new NFS installment, the developers fell somewhat short on technical stability: for many players NFS Rivals does not start or crashes, which does not bode well for gamers.

So let's understand these problems.

NFS Rivals not working

If you get a black screen when launching Need For Speed Rivals, you are in the right place. The material at the link covers the black-screen problem in NFS Rivals, as well as endless loading, and the tips listed on that page will also help you get NFS Rivals running.

1. Check whether your computer meets the game's minimum system requirements:

OS: Windows Vista/7
Processor: Intel Core 2 Duo @ 2.4 GHz / AMD Athlon 64 X2 5400+
RAM: 2 GB
Hard drive: 20 GB
Video card: nVidia GeForce 9800 (512 MB) / ATI Radeon HD 4870 (512 MB)
DirectX: 10

2. The latest video card drivers are required to play, especially for AMD cards, with which Rivals has a lot of trouble.

3. Install additional gaming software:

4. Make sure that there is still free space on the disk where the game is installed.

5. If you are using a pirated copy of the game, you may need a working Origin installation (depending on the crack).

6. Install the patch, change the crack:

7. If you are using a pirated copy of the game, the problem may be in the repack itself - download another one, after reading the comments to it.

If you are using a license, update Origin (delete the old one and download the new one from the off-site).

If Need for Speed Rivals crashed mid-gameplay with a DirectX error (i.e. you managed to launch it), the problem can be solved by changing the crack or changing the in-game resolution. A patch should also fix this.

If Need for Speed Rivals crashes after the intro video or before/after the tutorial, a saved game will help. Unzip it and copy it to C:\Users\USERNAME\Documents\Ghost Games\Need for Speed(TM) Rivals\settings\, replacing the existing files.

The game does not start with an error

1. MSVCR110.dll is missing

MSVCR100.dll is missing

An error of this type occurs primarily because the Microsoft Visual C++ component is installed incorrectly. Reinstall it and the problem should go away. If the problem persists, check that the files are present in the folder containing the game's exe files.

2. An error whose text contains one of the words dx, directX, dx_diag_d3_d11 or dx11

An error of this kind indicates a problem with the DirectX component - install it again. Also, an error with this content may appear if you run NFS Rivals on Windows XP or on an old video card.

Advice:

If you are running Need For Speed Rivals on a laptop with two video cards, make sure the game uses the discrete video card and not the integrated one.

Check that the system time on your computer is correct. Incorrect system time very often causes problems with games.

Run the game as administrator.

If you have a 64-bit system, try running the x86 (32-bit) shortcut.

If you play on a TV, change the method of connecting the TV to the computer, for example to DVI/VGA.

Some more useful materials.

In this note we list the main solutions to the troubles associated with the popular video game Need for Speed: Payback. Many gamers have encountered problems with it, including crashes, low FPS, a variety of errors, freezes, and so on. Do not worry: you may well find answers to your questions in this publication.

This list of difficulties and their solutions will be updated regularly, so be sure to ask about topics of interest in the comments. We would also be quite interested to hear exactly which solution helped you in a given situation.

NFS Payback: Freezes on loading

The most common cause of the game freezing during loading is a graphics card that does not meet the game's requirements. If that is not your case, make sure your drivers are up to date.

Need for Speed ​​Payback: black screen

The first thing to try here is switching the game to windowed mode and back by pressing Alt + Enter. If that did not help, check that you have the latest video adapter drivers. Most often it turns out that your video card does not meet the game's minimum requirements, and the problem occurs precisely for this reason.

NFS Payback: slows down, freezes, lags, low FPS

Here, first of all, make sure that the required video card drivers are installed and up to date. If everything is fine there, turn off anti-aliasing in the game settings and lower the post-processing options. Changing them will not noticeably degrade graphics quality, but it will have a good effect on performance.

Need for Speed ​​Payback: crashes

In this case, first try lowering the display resolution; in some cases that alone is enough to fix the error. If the difficulty does not disappear, check for updates to software such as the Microsoft Visual C++ Redistributable (vcredist), DirectX, and Microsoft .NET Framework.

NFS Payback: controls do not work

Make sure that only one input device is active (disable unnecessary ones if needed). In addition, if you use a gamepad and the game does not respond to its buttons at all, try an Xbox 360 controller emulator (for example, x360ce).

Need for Speed ​​Payback: error 0xc000007b

This problem is often resolved by simply reinstalling DirectX, the Microsoft Visual C++ Redistributable (vcredist), and Microsoft .NET Framework.

Need for Speed Payback: saves do not work

First, make sure that the path to the folder with the game's saves contains no Cyrillic characters. Next, check whether the Read-only attribute is set on this folder, and remove it if necessary.

This error may also be caused by incompatibility. Try running the game in compatibility mode for a different OS; the Compatibility Troubleshooter tool can help with this.

NFS Payback: no Russian language

Localization difficulties are quite easy to resolve. All you need to do is open the Engine.ini file with a plain text editor at C:\Users\User_name\AppData\Local\NeedForSpeedPayback or OriginsGame\Saved\Config\WindowsNoEditor\ (depending on whether your copy is pirated or licensed), find the line Culture=en_US and replace the English locale en_US with the Russian ru_RU.

Need for Speed ​​Payback: won't launch

First, check whether there are any Cyrillic characters in the path to the game: if there are, the game will not start.

The second step is to try running the video game as an administrator.

In addition, such an error often occurs when something went wrong during installation. Provided your computer meets the system requirements, reinstalling the game will probably resolve the issue. For the best chance of success, turn off your antivirus program during installation.

NFS Payback: computer turns off after startup or during gameplay

Most often this problem occurs due to insufficient cooling of the system. Check your PC's cooling system for dust and clean it if necessary.

Need for Speed ​​Payback: no sound

First you need to make sure that you have the latest sound card drivers. To do this, go to "My Computer" – "System Properties" – "Device Manager", find the sound device, right-click it, and update the driver.

If updating the sound device drivers did not help, try disabling the Realtek application while the game is running if you have built-in Realtek audio.

Lastly, check whether you have selected the audio playback device in the game options. If everything is in order with the settings, you can minimize the game, open the mixer and check the sound settings here.

NFS Payback: initialization error 4

The problem can be resolved by reinstalling the applications listed above, and by installing the latest graphics card drivers.
