git-reflog
This command manages the information recorded in the reflogs. Reference logs, or "reflogs", record when the tips of branches and other references were updated in the local repository. Reflogs are useful in various Git commands, to specify the old value of a reference. For example, HEAD@{2} means "where HEAD used to be two moves ago", master@{one.week.ago} means "where master used to point to one week ago in this local repository", and so on. See gitrevisions(7) for more details.

The command takes various subcommands, and different options depending on the subcommand:

The "show" subcommand (which is also the default, in the absence of any subcommands) shows the log of the reference provided in the command-line (or HEAD, by default). The reflog covers all recent actions, and in addition the HEAD reflog records branch switching. git reflog show is an alias for git log -g --abbrev-commit --pretty=oneline; see git-log(1) for more information.

The "expire" subcommand prunes older reflog entries. Entries older than expire time, or entries older than expire-unreachable time and not reachable from the current tip, are removed from the reflog. This is typically not used directly by end users; instead, see git-gc(1).

The "delete" subcommand deletes single entries from the reflog. Its argument must be an exact entry (e.g. "git reflog delete master@{2}"). This subcommand is also typically not used directly by end users.

The "exists" subcommand checks whether a ref has a reflog. It exits with zero status if the reflog exists, and non-zero status if it does not.

Options for show

git reflog show accepts any of the options accepted by git log.

Options for expire

--all
       Process the reflogs of all references.

--single-worktree
       By default when --all is specified, reflogs from all working trees are processed. This option limits the processing to reflogs from the current working tree only.

--expire=<time>
       Prune entries older than the specified time. If this option is not specified, the expiration time is taken from the configuration setting gc.reflogExpire, which in turn defaults to 90 days. --expire=all prunes entries regardless of their age; --expire=never turns off pruning of reachable entries (but see --expire-unreachable).

--expire-unreachable=<time>
       Prune entries older than <time> that are not reachable from the current tip of the branch. If this option is not specified, the expiration time is taken from the configuration setting gc.reflogExpireUnreachable, which in turn defaults to 30 days. --expire-unreachable=all prunes unreachable entries regardless of their age; --expire-unreachable=never turns off early pruning of unreachable entries (but see --expire).

--updateref
       Update the reference to the value of the top reflog entry (i.e. <ref>@{0}) if the previous top entry was pruned. (This option is ignored for symbolic references.)

--rewrite
       If a reflog entry's predecessor is pruned, adjust its "old" SHA-1 to be equal to the "new" SHA-1 field of the entry that now precedes it.

--stale-fix
       Prune any reflog entries that point to "broken commits". A broken commit is a commit that is not reachable from any of the reference tips and that refers, directly or indirectly, to a missing commit, tree, or blob object. This computation involves traversing all the reachable objects, i.e. it has the same cost as git prune. It is primarily intended to fix corruption caused by garbage collecting using older versions of Git, which didn't protect objects referred to by reflogs.

-n, --dry-run
       Do not actually prune any entries; just show what would have been pruned.

--verbose
       Print extra information on screen.

Options for delete

git reflog delete accepts options --updateref, --rewrite, -n, --dry-run, and --verbose, with the same meanings as when they are used with expire.
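A short sketch of how these subcommands fit together; the branch name and the 30-day cutoff are illustrative:

```sh
# Preview what a 30-day expiry would prune from master's reflog, deleting nothing
git reflog expire --dry-run --expire=30.days refs/heads/master

# Remove one exact entry
git reflog delete master@{2}

# Check whether a ref has a reflog at all (exit status 0 means yes)
git reflog exists refs/heads/master && echo "reflog present"
```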
# git reflog > Show a log of changes to local references like HEAD, branches or tags. More > information: https://git-scm.com/docs/git-reflog. * Show the reflog for HEAD: `git reflog` * Show the reflog for a given branch: `git reflog {{branch_name}}` * Show only the 5 latest entries in the reflog: `git reflog -n {{5}}`
git-cat-file
In its first form, the command provides the content or the type of an object in the repository. The type is required unless -t or -p is used to find the object type, or -s is used to find the object size, or --textconv or --filters is used (which imply type "blob"). In the second form, a list of objects (separated by linefeeds) is provided on stdin, and the SHA-1, type, and size of each object is printed on stdout. The output format can be overridden using the optional <format> argument. If either --textconv or --filters was specified, the input is expected to list the object names followed by the path name, separated by a single whitespace, so that the appropriate drivers can be determined.

<object>
       The name of the object to show. For a more complete list of ways to spell object names, see the "SPECIFYING REVISIONS" section in gitrevisions(7).

-t
       Instead of the content, show the object type identified by <object>.

-s
       Instead of the content, show the object size identified by <object>. If used with the --use-mailmap option, will show the size of the updated object after replacing idents using the mailmap mechanism.

-e
       Exit with zero status if <object> exists and is a valid object. If <object> is of an invalid format, exit with non-zero status and emit an error on stderr.

-p
       Pretty-print the contents of <object> based on its type.

<type>
       Typically this matches the real type of <object>, but asking for a type that can trivially be dereferenced from the given <object> is also permitted. An example is to ask for a "tree" with <object> being a commit object that contains it, or to ask for a "blob" with <object> being a tag object that points at it.

--[no-]mailmap, --[no-]use-mailmap
       Use mailmap file to map author, committer and tagger names and email addresses to canonical real names and email addresses. See git-shortlog(1).

--textconv
       Show the content as transformed by a textconv filter. In this case, <object> has to be of the form <tree-ish>:<path>, or :<path> in order to apply the filter to the content recorded in the index at <path>.

--filters
       Show the content as converted by the filters configured in the current working tree for the given <path> (i.e. smudge filters, end-of-line conversion, etc). In this case, <object> has to be of the form <tree-ish>:<path>, or :<path>.

--path=<path>
       For use with --textconv or --filters, to allow specifying an object name and a path separately, e.g. when it is difficult to figure out the revision from which the blob came.

--batch, --batch=<format>
       Print object information and contents for each object provided on stdin. May not be combined with any other options or arguments except --textconv, --filters, or --use-mailmap.
       • When used with --textconv or --filters, the input lines must specify the path, separated by whitespace. See the section BATCH OUTPUT below for details.
       • When used with --use-mailmap, for commit and tag objects, the contents part of the output shows the identities replaced using the mailmap mechanism, while the information part of the output shows the size of the object as if it actually recorded the replacement identities.

--batch-check, --batch-check=<format>
       Print object information for each object provided on stdin. May not be combined with any other options or arguments except --textconv, --filters or --use-mailmap.
       • When used with --textconv or --filters, the input lines must specify the path, separated by whitespace. See the section BATCH OUTPUT below for details.
       • When used with --use-mailmap, for commit and tag objects, the printed object information shows the size of the object as if the identities recorded in it were replaced by the mailmap mechanism.

--batch-command, --batch-command=<format>
       Enter a command mode that reads commands and arguments from stdin. May only be combined with --buffer, --textconv, --use-mailmap or --filters.
       • When used with --textconv or --filters, the input lines must specify the path, separated by whitespace. See the section BATCH OUTPUT below for details.
       • When used with --use-mailmap, for commit and tag objects, the contents command shows the identities replaced using the mailmap mechanism, while the info command shows the size of the object as if it actually recorded the replacement identities.
       --batch-command recognizes the following commands:
       contents <object>
              Print object contents for object reference <object>. This corresponds to the output of --batch.
       info <object>
              Print object info for object reference <object>. This corresponds to the output of --batch-check.
       flush
              Used with --buffer to execute all preceding commands that were issued since the beginning or since the last flush was issued. When --buffer is used, no output will come until a flush is issued. When --buffer is not used, commands are flushed each time without issuing flush.

--batch-all-objects
       Instead of reading a list of objects on stdin, perform the requested batch operation on all objects in the repository and any alternate object stores (not just reachable objects). Requires --batch or --batch-check be specified. By default, the objects are visited in order sorted by their hashes; see also --unordered below. Objects are presented as-is, without respecting the "replace" mechanism of git-replace(1).

--buffer
       Normally batch output is flushed after each object is output, so that a process can interactively read and write from cat-file. With this option, the output uses normal stdio buffering; this is much more efficient when invoking --batch-check or --batch-command on a large number of objects.

--unordered
       When --batch-all-objects is in use, visit objects in an order which may be more efficient for accessing the object contents than hash order. The exact details of the order are unspecified, but if you do not require a specific order, this should generally result in faster output, especially with --batch. Note that cat-file will still show each object only once, even if it is stored multiple times in the repository.

--allow-unknown-type
       Allow -s or -t to query broken/corrupt objects of unknown type.

--follow-symlinks
       With --batch or --batch-check, follow symlinks inside the repository when requesting objects with extended SHA-1 expressions of the form tree-ish:path-in-tree. Instead of providing output about the link itself, provide output about the linked-to object. If a symlink points outside the tree-ish (e.g. a link to /foo or a root-level link to ../foo), the portion of the link which is outside the tree will be printed. This option does not (currently) work correctly when an object in the index is specified (e.g. :link instead of HEAD:link) rather than one in the tree. This option cannot (currently) be used unless --batch or --batch-check is used.
For example, consider a git repository containing:

       f: a file containing "hello\n"
       link: a symlink to f
       dir/link: a symlink to ../f
       plink: a symlink to ../f
       alink: a symlink to /etc/passwd

For a regular file f, echo HEAD:f | git cat-file --batch would print

       ce013625030ba8dba906f756967f9e9ca394464a blob 6

And echo HEAD:link | git cat-file --batch --follow-symlinks would print the same thing, as would HEAD:dir/link, as they both point at HEAD:f. Without --follow-symlinks, these would print data about the symlink itself. In the case of HEAD:link, you would see

       4d1ae35ba2c8ec712fa2a379db44ad639ca277bd blob 1

Both plink and alink point outside the tree, so they would respectively print:

       symlink 4
       ../f

       symlink 11
       /etc/passwd

-Z
       Only meaningful with --batch, --batch-check, or --batch-command; input and output is NUL-delimited instead of newline-delimited.

-z
       Only meaningful with --batch, --batch-check, or --batch-command; input is NUL-delimited instead of newline-delimited. This option is deprecated in favor of -Z as the output can otherwise be ambiguous.
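A minimal sketch of the batch interface; the object names (including the README.md path) are illustrative, and any revision syntax from gitrevisions(7) works:

```sh
# Ask for type/size information about several objects in a single process
printf 'HEAD\nHEAD^{tree}\nHEAD:README.md\n' | git cat-file --batch-check
# Each output line has the default format: <sha1> <type> <size>
```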
# git cat-file > Provide content or type and size information for Git repository objects. > More information: https://git-scm.com/docs/git-cat-file. * Get the [s]ize of the HEAD commit in bytes: `git cat-file -s HEAD` * Get the [t]ype (blob, tree, commit, tag) of a given Git object: `git cat-file -t {{8c442dc3}}` * Pretty-[p]rint the contents of a given Git object based on its type: `git cat-file -p {{HEAD~2}}`
clear
@CLEAR@ clears your terminal's screen if this is possible, including the terminal's scrollback buffer (if the extended “E3” capability is defined). @CLEAR@ looks in the environment for the terminal type given by the environment variable TERM, and then in the terminfo database to determine how to clear the screen. @CLEAR@ writes to the standard output. You can redirect the standard output to a file (which prevents @CLEAR@ from actually clearing the screen), and later cat the file to the screen, clearing it at that point.

The options are as follows:

-T type
       indicates the type of terminal. Normally this option is unnecessary, because the default is taken from the environment variable TERM. If -T is specified, then the shell variables LINES and COLUMNS will also be ignored.

-V
       reports the version of ncurses which was used in this program, and exits.

-x
       do not attempt to clear the terminal's scrollback buffer using the extended “E3” capability.
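A small sketch of the redirection trick described above; the file name is illustrative:

```sh
clear > clearscreen.txt   # captures the clear escape sequence instead of clearing now
cat clearscreen.txt       # replaying the file clears the screen at this point
```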
# clear > Clears the screen of the terminal. More information: > https://manned.org/clear. * Clear the screen (equivalent to pressing Control-L in Bash shell): `clear` * Clear the screen but keep the terminal's scrollback buffer: `clear -x` * Indicate the type of terminal to clear (defaults to the value of the environment variable `TERM`): `clear -T {{type_of_terminal}}` * Show the version of `ncurses` used by `clear`: `clear -V`
tput
The @TPUT@ utility uses the terminfo database to make the values of terminal-dependent capabilities and information available to the shell (see sh(1)), to initialize or reset the terminal, or to return the long name of the requested terminal type. The result depends upon the capability's type:

string
       @TPUT@ writes the string to the standard output. No trailing newline is supplied.

integer
       @TPUT@ writes the decimal value to the standard output, with a trailing newline.

boolean
       @TPUT@ simply sets the exit code (0 for TRUE if the terminal has the capability, 1 for FALSE if it does not), and writes nothing to the standard output.

Before using a value returned on the standard output, the application should test the exit code (e.g., $?, see sh(1)) to be sure it is 0. (See the EXIT CODES and DIAGNOSTICS sections.) For a complete list of capabilities and the capname associated with each, see terminfo(5).

Options

-S
       allows more than one capability per invocation of @TPUT@. The capabilities must be passed to @TPUT@ from the standard input instead of from the command line (see example). Only one capname is allowed per line. The -S option changes the meaning of the 0 and 1 boolean and string exit codes (see the EXIT CODES section). Because some capabilities may use string parameters rather than numbers, @TPUT@ uses a table and the presence of parameters in its input to decide whether to use tparm(3X), and how to interpret the parameters.

-Ttype
       indicates the type of terminal. Normally this option is unnecessary, because the default is taken from the environment variable TERM. If -T is specified, then the shell variables LINES and COLUMNS will also be ignored.

-V
       reports the version of ncurses which was used in this program, and exits.

-x
       do not attempt to clear the terminal's scrollback buffer using the extended “E3” capability.

Commands

A few commands (init, reset and longname) are special; they are defined by the @TPUT@ program. The others are the names of capabilities from the terminal database (see terminfo(5) for a list). Although init and reset resemble capability names, @TPUT@ uses several capabilities to perform these special functions.

capname
       indicates the capability from the terminal database. If the capability is a string that takes parameters, the arguments following the capability will be used as parameters for the string. Most parameters are numbers. Only a few terminal capabilities require string parameters; @TPUT@ uses a table to decide which to pass as strings. Normally @TPUT@ uses tparm(3X) to perform the substitution. If no parameters are given for the capability, @TPUT@ writes the string without performing the substitution.

init
       If the terminal database is present and an entry for the user's terminal exists (see -Ttype, above), the following will occur:
       (1) first, @TPUT@ retrieves the current terminal mode settings for your terminal. It does this by successively testing
              • the standard error,
              • standard output,
              • standard input and
              • ultimately “/dev/tty”
       to obtain terminal settings. Having retrieved these settings, @TPUT@ remembers which file descriptor to use when updating settings.
       (2) if the window size cannot be obtained from the operating system, but the terminal description (or environment, e.g., the LINES and COLUMNS variables) specifies this, update the operating system's notion of the window size.
       (3) the terminal modes will be updated:
              • any delays (e.g., newline) specified in the entry will be set in the tty driver,
              • tabs expansion will be turned on or off according to the specification in the entry, and
              • if tabs are not expanded, standard tabs will be set (every 8 spaces).
       (4) if present, the terminal's initialization strings will be output as detailed in the terminfo(5) section on Tabs and Initialization,
       (5) output is flushed.
       If an entry does not contain the information needed for any of these activities, that activity will silently be skipped.

reset
       This is similar to init, with two differences:
       (1) before any other initialization, the terminal modes will be reset to a “sane” state:
              • set cooked and echo modes,
              • turn off cbreak and raw modes,
              • turn on newline translation and
              • reset any unset special characters to their default values
       (2) Instead of putting out initialization strings, the terminal's reset strings will be output if present (rs1, rs2, rs3, rf). If the reset strings are not present, but initialization strings are, the initialization strings will be output.
       Otherwise, reset acts identically to init.

longname
       If the terminal database is present and an entry for the user's terminal exists (see -Ttype above), then the long name of the terminal will be put out. The long name is the last name in the first line of the terminal's description in the terminfo database [see term(5)].

Aliases

@TPUT@ handles the clear, init and reset commands specially: it allows for the possibility that it is invoked by a link with those names.

If @TPUT@ is invoked by a link named reset, this has the same effect as @TPUT@ reset. The @TSET@(1) utility also treats a link named reset specially. Before ncurses 6.1, the two utilities were different from each other:

• The @TSET@ utility reset the terminal modes and special characters (not done with @TPUT@).
• On the other hand, @TSET@'s repertoire of terminal capabilities for resetting the terminal was more limited, i.e., only reset_1string, reset_2string and reset_file, in contrast to the tab-stops and margins which are set by this utility.
• The reset program is usually an alias for @TSET@, because of this difference with resetting terminal modes and special characters.

With the changes made for ncurses 6.1, the reset feature of the two programs is (mostly) the same. A few differences remain:

• The @TSET@ program waits one second when resetting, in case it happens to be a hardware terminal.
• The two programs write the terminal initialization strings to different streams (i.e., the standard error for @TSET@ and the standard output for @TPUT@).

Note: although these programs write to different streams, redirecting their output to a file will capture only part of their actions. The changes to the terminal modes are not affected by redirecting the output.

If @TPUT@ is invoked by a link named init, this has the same effect as @TPUT@ init. Again, you are less likely to use that link because another program named init has a better-established use.

Terminal Size

Besides the special commands (e.g., clear), @TPUT@ treats certain terminfo capabilities specially: lines and cols.
@TPUT@ calls setupterm(3X) to obtain the terminal size: • first, it gets the size from the terminal database (which generally is not provided for terminal emulators which do not have a fixed window size) • then it asks the operating system for the terminal's size (which generally works, unless connecting via a serial line which does not support NAWS: negotiations about window size). • finally, it inspects the environment variables LINES and COLUMNS which may override the terminal size. If the -T option is given @TPUT@ ignores the environment variables by calling use_tioctl(TRUE), relying upon the operating system (or finally, the terminal database).
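The -S form described above reads one capability name per line from standard input; a minimal sketch:

```sh
# Clear the screen, move the cursor to row 10 / column 10, and turn on bold,
# all in a single tput invocation
tput -S <<'EOF'
clear
cup 10 10
bold
EOF
```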
# tput > View and modify terminal settings and capabilities. More information: > https://manned.org/tput. * Move the cursor to a screen location: `tput cup {{row}} {{column}}` * Set foreground (af) or background (ab) color: `tput {{setaf|setab}} {{ansi_color_code}}` * Show number of columns, lines, or colors: `tput {{cols|lines|colors}}` * Ring the terminal bell: `tput bel` * Reset all terminal attributes: `tput sgr0` * Enable or disable word wrap: `tput {{smam|rmam}}`
nice
Run COMMAND with an adjusted niceness, which affects process scheduling. With no COMMAND, print the current niceness. Niceness values range from -20 (most favorable to the process) to 19 (least favorable to the process).

Mandatory arguments to long options are mandatory for short options too.

-n, --adjustment=N
       add integer N to the niceness (default 10)

--help
       display this help and exit

--version
       output version information and exit

NOTE: your shell may have its own version of nice, which usually supersedes the version described here. Please refer to your shell's documentation for details about the options it supports.

Exit status:
       125    if the nice command itself fails
       126    if COMMAND is found but cannot be invoked
       127    if COMMAND cannot be found
       -      the exit status of COMMAND otherwise
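A quick sketch of the default adjustment: with no -n, nice adds 10 to the current niceness, and with no COMMAND it prints the current value, so nesting the two shows the effect directly.

```sh
nice             # prints the current niceness, typically 0
nice nice        # runs `nice` itself at +10, so it prints 10
nice -n 5 nice   # prints 5
```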
# nice > Execute a program with a custom scheduling priority (niceness). Niceness > values range from -20 (the highest priority) to 19 (the lowest). More > information: https://www.gnu.org/software/coreutils/nice. * Launch a program with altered priority: `nice -n {{niceness_value}} {{command}}`
echo
Echo the STRING(s) to standard output.

-n     do not output the trailing newline
-e     enable interpretation of backslash escapes
-E     disable interpretation of backslash escapes (default)
--help display this help and exit
--version
       output version information and exit

If -e is in effect, the following sequences are recognized:

\\     backslash
\a     alert (BEL)
\b     backspace
\c     produce no further output
\e     escape
\f     form feed
\n     new line
\r     carriage return
\t     horizontal tab
\v     vertical tab
\0NNN  byte with octal value NNN (1 to 3 digits)
\xHH   byte with hexadecimal value HH (1 to 2 digits)

NOTE: your shell may have its own version of echo, which usually supersedes the version described here. Please refer to your shell's documentation for details about the options it supports.

NOTE: printf(1) is a preferred alternative, which does not have issues outputting option-like strings.
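A short sketch of the escape handling and of the printf(1) caveat mentioned above; the strings are illustrative:

```sh
echo -e "col1\tcol2\nrow2\tdata"   # -e makes \t and \n take effect
echo "-n"                          # may print nothing: echo treats it as an option
printf '%s\n' "-n"                 # printf outputs option-like strings safely
```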
# echo > Print given arguments. More information: > https://www.gnu.org/software/coreutils/echo. * Print a text message. Note: quotes are optional: `echo "{{Hello World}}"` * Print a message with environment variables: `echo "{{My path is $PATH}}"` * Print a message without the trailing newline: `echo -n "{{Hello World}}"` * Append a message to the file: `echo "{{Hello World}}" >> {{file.txt}}` * Enable interpretation of backslash escapes (special characters): `echo -e "{{Column 1\tColumn 2}}"` * Print the exit status of the last executed command (Note: In Windows Command Prompt and PowerShell the equivalent commands are `echo %errorlevel%` and `$lastexitcode` respectively): `echo $?`
expand
The expand utility shall write files or the standard input to the standard output with <tab> characters replaced with one or more <space> characters needed to pad to the next tab stop. Any <backspace> characters shall be copied to the output and cause the column position count for tab stop calculations to be decremented; the column position count shall not be decremented below zero.

The expand utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines.

The following option shall be supported:

-t tablist
       Specify the tab stops. The application shall ensure that the argument tablist consists of either a single positive decimal integer or a list of tabstops. If a single number is given, tabs shall be set that number of column positions apart instead of the default 8. If a list of tabstops is given, the application shall ensure that it consists of a list of two or more positive decimal integers, separated by <blank> or <comma> characters, in strictly ascending order; the <tab> characters shall be set at those specific column positions. This is taken to mean that, from the start of a line of output, tabbing to position N shall cause the next character output to be in the (N+1)th column position on that line. In the event of expand having to process a <tab> at a position beyond the last of those specified in a multiple tab-stop list, the <tab> shall be replaced by a single <space> in the output.
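A small sketch of the tab-stop semantics; the input text is illustrative:

```sh
printf 'a\tb\tc\n' | expand -t 4      # tab stops every 4 columns
printf 'a\tb\tc\n' | expand -t 4,12   # stops at 4 and 12; tabs past the last stop become one space
```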
# expand > Convert tabs to spaces. More information: > https://www.gnu.org/software/coreutils/expand. * Convert tabs in each file to spaces, writing to `stdout`: `expand {{path/to/file}}` * Convert tabs to spaces, reading from `stdin`: `expand` * Do not convert tabs after non-blanks: `expand -i {{path/to/file}}` * Set tab stops a given number of characters apart instead of the default 8: `expand -t {{number}} {{path/to/file}}` * Use a comma-separated list of explicit tab positions: `expand -t {{1,4,6}}`
systemd-firstboot
systemd-firstboot initializes basic system settings interactively during the first boot, or non-interactively on an offline system image. The service is started during boot if ConditionFirstBoot=yes is met, which essentially means that /etc/ is empty; see systemd.unit(5) for details.

The following settings may be configured:

• The machine ID of the system
• The system locale, more specifically the two locale variables LANG= and LC_MESSAGES
• The system keyboard map
• The system time zone
• The system hostname
• The kernel command line used when installing kernel images
• The root user's password and shell

Each of the fields may either be queried interactively by users, set non-interactively on the tool's command line, or be copied from a host system that is used to set up the system image. If a setting is already initialized, it will not be overwritten and the user will not be prompted for the setting.

Note that this tool operates directly on the file system and does not involve any running system services, unlike localectl(1), timedatectl(1) or hostnamectl(1). This allows systemd-firstboot to operate on mounted but not booted disk images and in early boot. It is not recommended to use systemd-firstboot on the running system after it has been set up. (See the sketch after the option list for the offline-image workflow.)

The following options are understood:

--root=root
       Takes a directory path as an argument. All paths will be prefixed with the given alternate root path, including config search paths. This is useful to operate on a system image mounted to the specified directory instead of the host system itself.

--image=path
       Takes a path to a disk image file or block device node. If specified, all operations are applied to the file system in the indicated disk image. This is similar to --root= but operates on file systems stored in disk images or block devices. The disk image should either contain just a file system or a set of file systems within a GPT partition table, following the Discoverable Partitions Specification[1]. For further information on supported disk images, see systemd-nspawn(1)'s switch of the same name.

--locale=LOCALE, --locale-messages=LOCALE
       Sets the system locale, more specifically the LANG= and LC_MESSAGES settings. The argument should be a valid locale identifier, such as "de_DE.UTF-8". This controls the locale.conf(5) configuration file.

--keymap=KEYMAP
       Sets the system keyboard layout. The argument should be a valid keyboard map, such as "de-latin1". This controls the "KEYMAP" entry in the vconsole.conf(5) configuration file.

--timezone=TIMEZONE
       Sets the system time zone. The argument should be a valid time zone identifier, such as "Europe/Berlin". This controls the localtime(5) symlink.

--hostname=HOSTNAME
       Sets the system hostname. The argument should be a hostname, compatible with DNS. This controls the hostname(5) configuration file.

--setup-machine-id
       Initialize the system's machine ID to a random ID. This controls the machine-id(5) file. This option only works in combination with --root= or --image=. On a running system, machine-id is written by the manager with help from systemd-machine-id-commit.service(8).

--machine-id=ID
       Set the system's machine ID to the specified value. The same restrictions apply as to --setup-machine-id.

--root-password=PASSWORD, --root-password-file=PATH, --root-password-hashed=HASHED_PASSWORD
       Sets the password of the system's root user. This creates/modifies the passwd(5) and shadow(5) files. This setting exists in three forms: --root-password= accepts the password to set directly on the command line, --root-password-file= reads it from a file and --root-password-hashed= accepts an already hashed password on the command line. See shadow(5) for more information on the format of the hashed password. Note that it is not recommended to specify plaintext passwords on the command line, as other users might be able to see them simply by invoking ps(1).

--root-shell=SHELL
       Sets the shell of the system's root user. This creates/modifies the passwd(5) file.

--kernel-command-line=CMDLINE
       Sets the system's kernel command line. This controls the /etc/kernel/cmdline file which is used by kernel-install(8).

--prompt-locale, --prompt-keymap, --prompt-timezone, --prompt-hostname, --prompt-root-password, --prompt-root-shell
       Prompt the user interactively for a specific basic setting. Note that any explicit configuration settings specified on the command line take precedence, and the user is not prompted for it.

--prompt
       Query the user for locale, keymap, timezone, hostname, root's password, and root's shell. This is equivalent to specifying --prompt-locale, --prompt-keymap, --prompt-timezone, --prompt-hostname, --prompt-root-password, --prompt-root-shell in combination.

--copy-locale, --copy-keymap, --copy-timezone, --copy-root-password, --copy-root-shell
       Copy a specific basic setting from the host. This only works in combination with --root= or --image=.

--copy
       Copy locale, keymap, time zone, root password and shell from the host. This is equivalent to specifying --copy-locale, --copy-keymap, --copy-timezone, --copy-root-password, --copy-root-shell in combination.

--force
       Write configuration even if the relevant files already exist. Without this option, systemd-firstboot doesn't modify or replace existing files. Note that when configuring the root account, even with this option, systemd-firstboot only modifies the entry of the "root" user, leaving other entries in /etc/passwd and /etc/shadow intact.

--reset
       If specified, all existing files that are configured by systemd-firstboot are removed. Note that the files are removed regardless of whether they'll be configured with a new value or not. This operation ensures that the next boot of the image will be considered a first boot, and systemd-firstboot will prompt again to configure each of the removed files.

--delete-root-password
       Removes the password of the system's root user, enabling login as root without a password unless the root account is locked. Note that this is extremely insecure and hence this option should not be used lightly.

--welcome=
       Takes a boolean argument. By default when prompting the user for configuration options, a brief welcome text is shown before the first question is asked. Pass false to this option to turn off the welcome text.

-h, --help
       Print a short help text and exit.

--version
       Print a short version string and exit.
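A sketch of the offline-image workflow described above; the mount point /mnt/image and the setting values are illustrative:

```sh
# Configure a mounted (not booted) system image non-interactively
sudo systemd-firstboot --root=/mnt/image \
    --locale=de_DE.UTF-8 \
    --timezone=Europe/Berlin \
    --hostname=demo-host \
    --setup-machine-id
```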
# systemd-firstboot > Initialize basic system settings on or before the first boot-up of a system. > More information: > https://www.freedesktop.org/software/systemd/man/systemd-firstboot.html. * Operate on the specified directory instead of the root directory of the host system: `sudo systemd-firstboot --root={{path/to/root_directory}}` * Set the system keyboard layout: `sudo systemd-firstboot --keymap={{keymap}}` * Set the system hostname: `sudo systemd-firstboot --hostname={{hostname}}` * Set the root user's password: `sudo systemd-firstboot --root-password={{password}}` * Prompt the user interactively for a specific basic setting: `sudo systemd-firstboot --prompt-{{setting}}` * Force writing configuration even if the relevant files already exist: `sudo systemd-firstboot --force` * Remove all existing files that are configured by `systemd-firstboot`: `sudo systemd-firstboot --reset` * Remove the password of the system's root user: `sudo systemd-firstboot --delete-root-password`
last
last looks through the file wtmp (which records all logins/logouts) and prints information about connect times of users. Records are printed from most recent to least recent. Records can be specified by tty and username. tty names can be abbreviated: last 0 is equivalent to last tty0.

Multiple arguments can be specified: last root console will print all of the entries for the user root and all entries logged in on the console tty.

The special users reboot and shutdown log in when the system reboots or (surprise) shuts down. last reboot will produce a record of reboot times.

If last is interrupted by a quit signal, it prints out how far its search in the wtmp file had reached and then quits.

-n num, --lines num
       Limit the number of lines that last outputs. This is different from u*x last, which lets you specify the number right after a dash.

-f filename, --file filename
       Read from the file filename instead of the system's wtmp file.

--complain
       When the wtmp file has a problem (a time-warp, missing record, or whatever), print out an appropriate error.

--tw-leniency num
       Set the time warp leniency to num seconds. Records in wtmp files might be slightly out of order (most notably when two logins occur within a one-second period - the second one gets written first). By default, this value is set to 60. If the program notices this problem, time is not assigned to users unless the --timewarps flag is used.

--tw-suspicious num
       Set the time warp suspicious value to num seconds. If two records in the wtmp file are farther than this number of seconds apart, there is a problem with the wtmp file (or your machine hasn't been used in a year). If the program notices this problem, time is not assigned to users unless the --timewarps flag is used.

--no-truncate-ftp-entries
       When printing out the information, don't chop the number part off of `ftp'XXXX entries.

-x, --more-records
       Print out run level changes, shutdowns, and time changes in addition to the normal records.

-a, --all-records
       Print out all records in the wtmp file.

-i, --ip-address
       Some machines store the IP address of a connection in a utmp record. Enabling this option makes last print the IP address instead of the hostname.

-w, --wide
       By default, last tries to print each entry within 80 columns. Use this option to instruct last to print out the fields in the wtmp file with full field widths.

--debug
       Print verbose internal information.

-s, --print-seconds
       Print seconds when displaying dates.

-y, --print-year
       Print year when displaying dates.

-V, --version
       Print last's version number.

-h, --help
       Prints the usage string and default locations of system files to standard output and exits.
# last > View the last logged in users. More information: https://manned.org/last. * View last logins, their duration and other information as read from `/var/log/wtmp`: `last` * Specify how many of the last logins to show: `last -n {{login_count}}` * Print the full date and time for entries and then display the hostname column last to prevent truncation: `last -F -a` * View all logins by a specific user and show the IP address instead of the hostname: `last {{username}} -i` * View all recorded reboots (i.e., the last logins of the pseudo user "reboot"): `last reboot` * View all recorded shutdowns (i.e., the last logins of the pseudo user "shutdown"): `last shutdown`
flatpak
Flatpak is a tool for managing applications and the runtimes they use. In the Flatpak model, applications can be built and distributed independently from the host system they are used on, and they are isolated from the host system ('sandboxed') to some degree, at runtime.

Flatpak can operate in system-wide or per-user mode. The system-wide data (runtimes, applications and configuration) is located in $prefix/var/lib/flatpak/, and the per-user data is in $HOME/.local/share/flatpak/. Below these locations, there is a local repository in the repo/ subdirectory and installed runtimes and applications are in the corresponding runtime/ and app/ subdirectories.

System-wide remotes can be statically preconfigured by dropping flatpakrepo files into /etc/flatpak/remotes.d/.

In addition to the system-wide installation in $prefix/var/lib/flatpak/, which is always considered the default one unless overridden, more system-wide installations can be defined via configuration files in /etc/flatpak/installations.d/, which must define at least the id of the installation and the absolute path to it. Other optional parameters like DisplayName, Priority or StorageType are also supported.

Flatpak uses OSTree to distribute and deploy data. The repositories it uses are OSTree repositories and can be manipulated with the ostree utility. Installed runtimes and applications are OSTree checkouts.

Basic commands for building flatpaks such as build-init, build and build-finish are included in the flatpak utility. For higher-level build support, see the separate flatpak-builder(1) tool.

Flatpak supports installing from sideload repos. These are partial copies of a repository (generated by flatpak create-usb) that are used as an installation source when offline (and online as a performance improvement). Such repositories are configured by creating symlinks to the sideload sources in the sideload-repos subdirectory of the installation directory (i.e. typically /var/lib/flatpak/sideload-repos or ~/.local/share/flatpak/sideload-repos). Additionally, symlinks can be created in /run/flatpak/sideload-repos, which is a better location for non-persistent sources (as it is cleared on reboot). These symlinks can point to either the directory given to flatpak create-usb (which by default writes to the subpath .ostree/repo) or directly to an ostree repo.

The following global options are understood. Individual commands have their own options.

-h, --help
       Show help options and exit.

-v, --verbose
       Show debug information during command processing. Use -vv for more detail.

--ostree-verbose
       Show OSTree debug information during command processing.

--version
       Print version information and exit.

--default-arch
       Print the default arch and exit.

--supported-arches
       Print the supported arches in priority order and exit.

--gl-drivers
       Print the list of active gl drivers and exit.

--installations
       Print paths of system installations and exit.

--print-system-only
       When the flatpak --print-updated-env command is run, only print the environment for system flatpak installations, not including the user's home installation.

--print-updated-env
       Print the set of environment variables needed to use flatpaks, amending the current set of environment variables. This is intended to be used in a systemd environment generator, and should not need to be run manually.
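A sketch of the sideload setup described above; the USB mount point and app ID are illustrative:

```sh
# On a connected machine, copy an app and its dependencies onto a USB drive
flatpak create-usb /run/media/alice/usb org.example.App

# On the offline machine, register the drive as a non-persistent sideload source
sudo mkdir -p /run/flatpak/sideload-repos
sudo ln -s /run/media/alice/usb /run/flatpak/sideload-repos/usb
```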
# flatpak > Build, install and run flatpak applications and runtimes. More information: > https://docs.flatpak.org/en/latest/flatpak-command-reference.html#flatpak. * Run an installed application: `flatpak run {{name}}` * Install an application from a remote source: `flatpak install {{remote}} {{name}}` * List all installed applications and runtimes: `flatpak list` * Update all installed applications and runtimes: `flatpak update` * Add a remote source: `flatpak remote-add --if-not-exists {{remote_name}} {{remote_url}}` * Remove an installed application: `flatpak remove {{name}}` * Remove all unused applications: `flatpak remove --unused` * Show information about an installed application: `flatpak info {{name}}`
cksum
Print or verify checksums. By default use the 32 bit CRC algorithm.

With no FILE, or when FILE is -, read standard input.

Mandatory arguments to long options are mandatory for short options too.

-a, --algorithm=TYPE
       select the digest type to use. See DIGEST below.

-b, --base64
       emit base64-encoded digests, not hexadecimal

-c, --check
       read checksums from the FILEs and check them

-l, --length=BITS
       digest length in bits; must not exceed the max for the blake2 algorithm and must be a multiple of 8

--raw
       emit a raw binary digest, not hexadecimal

--tag
       create a BSD-style checksum (the default)

--untagged
       create a reversed style checksum, without digest type

-z, --zero
       end each output line with NUL, not newline, and disable file name escaping

The following five options are useful only when verifying checksums:

--ignore-missing
       don't fail or report status for missing files

--quiet
       don't print OK for each successfully verified file

--status
       don't output anything, status code shows success

--strict
       exit non-zero for improperly formatted checksum lines

-w, --warn
       warn about improperly formatted checksum lines

--debug
       indicate which implementation used

--help display this help and exit

--version
       output version information and exit

DIGEST determines the digest algorithm and default output format:

sysv     (equivalent to sum -s)
bsd      (equivalent to sum -r)
crc      (equivalent to cksum)
md5      (equivalent to md5sum)
sha1     (equivalent to sha1sum)
sha224   (equivalent to sha224sum)
sha256   (equivalent to sha256sum)
sha384   (equivalent to sha384sum)
sha512   (equivalent to sha512sum)
blake2b  (equivalent to b2sum)
sm3      (only available through cksum)

When checking, the input should be a former output of this program, or equivalent standalone program.
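A minimal sketch of the generate-then-verify round trip using -a and -c; the file names are illustrative:

```sh
cksum -a sha256 data1.bin data2.bin > SUMS   # write BSD-style sha256 digests
cksum -c SUMS                                # re-verify; prints OK per file
```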
# cksum > Calculates CRC checksums and byte counts of a file. Note, on old UNIX > systems the CRC implementation may differ. More information: > https://www.gnu.org/software/coreutils/cksum. * Display a 32-bit checksum, size in bytes and filename: `cksum {{path/to/file}}`
git-for-each-repo
Run a Git command on a list of repositories. The arguments after the known options or -- indicator are used as the arguments for the Git subprocess. THIS COMMAND IS EXPERIMENTAL. THE BEHAVIOR MAY CHANGE. For example, we could run maintenance on each of a list of repositories stored in a maintenance.repo config variable using git for-each-repo --config=maintenance.repo maintenance run This will run git -C <repo> maintenance run for each value <repo> in the multi-valued config variable maintenance.repo. --config=<config> Use the given config variable as a multi-valued list storing absolute path names. Iterate on that list of paths to run the given arguments. These config values are loaded from system, global, and local Git config, as available. If git for-each-repo is run in a directory that is not a Git repository, then only the system and global config is used.
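A sketch of populating the multi-valued config variable and then iterating over it; the repository paths are illustrative:

```sh
# Register two repositories (absolute paths) under maintenance.repo
git config --global --add maintenance.repo /home/alice/src/project-a
git config --global --add maintenance.repo /home/alice/src/project-b

# Run `git maintenance run` in each of them
git for-each-repo --config=maintenance.repo maintenance run
```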
# git for-each-repo > Run a Git command on a list of repositories. Note: this command is > experimental and may change. More information: > https://git-scm.com/docs/git-for-each-repo. * Run maintenance on each of a list of repositories stored in the `maintenance.repo` user configuration variable: `git for-each-repo --config={{maintenance.repo}} {{maintenance run}}` * Run `git pull` on each repository listed in a global configuration variable: `git for-each-repo --config={{global_configuration_variable}} {{pull}}`
more
more is a filter for paging through text one screenful at a time. This version is especially primitive. Users should realize that less(1) provides more(1) emulation plus extensive enhancements.

Options are also taken from the environment variable MORE (make sure to precede them with a dash (-)), but command-line options will override those.

-d, --silent
       Prompt with "[Press space to continue, 'q' to quit.]", and display "[Press 'h' for instructions.]" instead of ringing the bell when an illegal key is pressed.

-l, --logical
       Do not pause after any line containing a ^L (form feed).

-e, --exit-on-eof
       Exit on End-Of-File; enabled by default if the POSIXLY_CORRECT environment variable is not set or if not executed on a terminal.

-f, --no-pause
       Count logical lines, rather than screen lines (i.e., long lines are not folded).

-p, --print-over
       Do not scroll. Instead, clear the whole screen and then display the text. Notice that this option is switched on automatically if the executable is named page.

-c, --clean-print
       Do not scroll. Instead, paint each screen from the top, clearing the remainder of each line as it is displayed.

-s, --squeeze
       Squeeze multiple blank lines into one.

-u, --plain
       Suppress underlining. This option is silently ignored and retained for backwards compatibility.

-n, --lines number
       Specify the number of lines per screenful. The number argument is a positive decimal integer. The --lines option overrides any values obtained from any other source, such as the number of lines reported by the terminal.

-number
       A numeric option means the same as the --lines option argument.

+number
       Start displaying each file at line number.

+/string
       The string to be searched in each file before starting to display it.

-h, --help
       Display help text and exit.

-V, --version
       Print version and exit.
# more > Open a file for interactive reading, allowing scrolling and search. More > information: https://manned.org/more. * Open a file: `more {{path/to/file}}` * Open a file displaying from a specific line: `more +{{line_number}} {{path/to/file}}` * Display help: `more --help` * Go to the next page: `<Space>` * Search for a string (press `n` to go to the next match): `/{{something}}` * Exit: `q` * Display help about interactive commands: `h`
apropos
Each manual page has a short description available within it. apropos searches the descriptions for instances of keyword.

keyword is usually a regular expression, as if (-r) was used, or may contain wildcards (-w), or match the exact keyword (-e). Using these options, it may be necessary to quote the keyword or escape (\) the special characters to stop the shell from interpreting them.

The standard matching rules allow matches to be made against the page name and word boundaries in the description.

The database searched by apropos is updated by the mandb program. Depending on your installation, this may be run by a periodic cron job, or may need to be run manually after new manual pages have been installed.

-d, --debug
       Print debugging information.

-v, --verbose
       Print verbose warning messages.

-r, --regex
       Interpret each keyword as a regular expression. This is the default behaviour. Each keyword will be matched against the page names and the descriptions independently. It can match any part of either. The match is not limited to word boundaries.

-w, --wildcard
       Interpret each keyword as a pattern containing shell style wildcards. Each keyword will be matched against the page names and the descriptions independently. If --exact is also used, a match will only be found if an expanded keyword matches an entire description or page name. Otherwise the keyword is also allowed to match on word boundaries in the description.

-e, --exact
       Each keyword will be exactly matched against the page names and the descriptions.

-a, --and
       Only display items that match all the supplied keywords. The default is to display items that match any keyword.

-l, --long
       Do not trim output to the terminal width. Normally, output will be truncated to the terminal width to avoid ugly results from poorly-written NAME sections.

-s list, --sections=list, --section=list
       Search only the given manual sections. list is a colon- or comma-separated list of sections. If an entry in list is a simple section, for example "3", then the displayed list of descriptions will include pages in sections "3", "3perl", "3x", and so on; while if an entry in list has an extension, for example "3perl", then the list will only include pages in that exact part of the manual section.

-m system[,...], --systems=system[,...]
       If this system has access to other operating systems' manual page descriptions, they can be searched using this option. To search NewOS's manual page descriptions, use the option -m NewOS. The system specified can be a combination of comma-delimited operating system names. To include a search of the native operating system's whatis descriptions, include the system name man in the argument string. This option will override the $SYSTEM environment variable.

-M path, --manpath=path
       Specify an alternate set of colon-delimited manual page hierarchies to search. By default, apropos uses the $MANPATH environment variable, unless it is empty or unset, in which case it will determine an appropriate manpath based on your $PATH environment variable. This option overrides the contents of $MANPATH.

-L locale, --locale=locale
       apropos will normally determine your current locale by a call to the C function setlocale(3) which interrogates various environment variables, possibly including $LC_MESSAGES and $LANG. To temporarily override the determined value, use this option to supply a locale string directly to apropos. Note that it will not take effect until the search for pages actually begins. Output such as the help message will always be displayed in the initially determined locale.

-C file, --config-file=file
       Use this user configuration file rather than the default of ~/.manpath.

-?, --help
       Print a help message and exit.

--usage
       Print a short usage message and exit.

-V, --version
       Display version information.
# apropos > Search the manual pages for names and descriptions. More information: > https://manned.org/apropos. * Search for a keyword using a regular expression: `apropos {{regular_expression}}` * Search without restricting the output to the terminal width: `apropos -l {{regular_expression}}` * Search for pages that contain all the expressions given: `apropos {{regular_expression_1}} -a {{regular_expression_2}} -a {{regular_expression_3}}`
cat
The cat utility shall read files in sequence and shall write their contents to the standard output in the same sequence. The cat utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following option shall be supported: -u Write bytes from the input file to the standard output without delay as each is read.
# cat > Print and concatenate files. More information: > https://keith.github.io/xcode-man-pages/cat.1.html. * Print the contents of a file to `stdout`: `cat {{path/to/file}}` * Concatenate several files into an output file: `cat {{path/to/file1 path/to/file2 ...}} > {{path/to/output_file}}` * Append several files to an output file: `cat {{path/to/file1 path/to/file2 ...}} >> {{path/to/output_file}}` * Copy the contents of a file into an output file without buffering: `cat -u {{/dev/tty12}} > {{/dev/tty13}}` * Write `stdin` to a file: `cat - > {{path/to/file}}` * Number all output lines: `cat -n {{path/to/file}}` * Display non-printable and whitespace characters (with `M-` prefix if non-ASCII): `cat -v -t -e {{path/to/file}}`
arch
Print machine architecture. --help display this help and exit --version output version information and exit
# arch > Display the name of the system architecture, or run a command under a > different architecture. See also `uname`. More information: > https://www.unix.com/man-page/osx/1/arch/. * Display the system's architecture: `arch` * Run a command using x86_64: `arch -x86_64 "{{command}}"` * Run a command using arm: `arch -arm64 "{{command}}"`
update-alternatives
update-alternatives creates, removes, maintains and displays information about the symbolic links comprising the Debian alternatives system.

It is possible for several programs fulfilling the same or similar functions to be installed on a single system at the same time. For example, many systems have several text editors installed at once. This gives choice to the users of a system, allowing each to use a different editor, if desired, but makes it difficult for a program to make a good choice for an editor to invoke if the user has not specified a particular preference.

Debian's alternatives system aims to solve this problem. A generic name in the filesystem is shared by all files providing interchangeable functionality. The alternatives system and the system administrator together determine which actual file is referenced by this generic name. For example, if the text editors ed(1) and nvi(1) are both installed on the system, the alternatives system will cause the generic name /usr/bin/editor to refer to /usr/bin/nvi by default. The system administrator can override this and cause it to refer to /usr/bin/ed instead, and the alternatives system will not alter this setting until explicitly requested to do so.

The generic name is not a direct symbolic link to the selected alternative. Instead, it is a symbolic link to a name in the alternatives directory, which in turn is a symbolic link to the actual file referenced. This is done so that the system administrator's changes can be confined within the /usr/local/etc directory: the FHS (q.v.) gives reasons why this is a Good Thing.

When each package providing a file with a particular functionality is installed, changed or removed, update-alternatives is called to update information about that file in the alternatives system. update-alternatives is usually called from the following Debian package maintainer scripts: postinst (configure) to install the alternative and from prerm and postrm (remove) to remove the alternative. Note: in most (if not all) cases no other maintainer script actions should call update-alternatives, in particular neither of upgrade nor disappear, as any other such action can lose the manual state of an alternative, or make the alternative temporarily flip-flop, or completely switch when several of them have the same priority.

It is often useful for a number of alternatives to be synchronized, so that they are changed as a group; for example, when several versions of the vi(1) editor are installed, the manual page referenced by /usr/share/man/man1/vi.1 should correspond to the executable referenced by /usr/bin/vi. update-alternatives handles this by means of master and slave links; when the master is changed, any associated slaves are changed too. A master link and its associated slaves make up a link group.

Each link group is, at any given time, in one of two modes: automatic or manual. When a group is in automatic mode, the alternatives system will automatically decide, as packages are installed and removed, whether and how to update the links. In manual mode, the alternatives system will retain the choice of the administrator and avoid changing the links (except when something is broken).

Link groups are in automatic mode when they are first introduced to the system. If the system administrator makes changes to the system's automatic settings, this will be noticed the next time update-alternatives is run on the changed link's group, and the group will automatically be switched to manual mode.

Each alternative has a priority associated with it. When a link group is in automatic mode, the alternatives pointed to by members of the group will be those which have the highest priority.

When using the --config option, update-alternatives will list all of the choices for the link group of which the given name is the master alternative name. The current choice is marked with a '*'. You will then be prompted for your choice regarding this link group. Depending on the choice made, the link group might no longer be in auto mode. You will need to use the --auto option in order to return to the automatic mode (or you can rerun --config and select the entry marked as automatic).

If you want to configure non-interactively you can use the --set option instead (see below).

Different packages providing the same file need to do so cooperatively. In other words, the usage of update-alternatives is mandatory for all involved packages in such case. It is not possible to override some file in a package that does not employ the update-alternatives mechanism.

--altdir directory
       Specifies the alternatives directory, when this is to be different from the default. Defaults to «/usr/local/etc/alternatives».

--admindir directory
       Specifies the administrative directory, when this is to be different from the default. Defaults to «/usr/local/var/lib/dpkg/alternatives» if DPKG_ADMINDIR has not been set.

--instdir directory
       Specifies the installation directory where alternatives links will be created (since version 1.20.1). Defaults to «/» if DPKG_ROOT has not been set.

--root directory
       Specifies the root directory (since version 1.20.1). This also sets the alternatives, installation and administrative directories to match. Defaults to «/» if DPKG_ROOT has not been set.

--log file
       Specifies the log file (since version 1.15.0), when this is to be different from the default (/usr/local/var/log/alternatives.log).

--force
       Allow replacing or dropping any real file that is installed where an alternative link has to be installed or removed.

--skip-auto
       Skip configuration prompt for alternatives which are properly configured in automatic mode. This option is only relevant with --config or --all.

--quiet
       Do not generate any comments unless errors occur.

--verbose
       Generate more comments about what is being done.

--debug
       Generate even more comments, helpful for debugging, about what is being done (since version 1.19.3).
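A sketch of registering a master link together with a slave man page, matching the editor/vi discussion above; the paths and the priority value are illustrative, and --install/--slave are the registration actions documented in the full man page:

```sh
# Register /usr/bin/nvi as an "editor" alternative with priority 30,
# keeping its man page in sync via a slave link
sudo update-alternatives --install /usr/bin/editor editor /usr/bin/nvi 30 \
    --slave /usr/share/man/man1/editor.1.gz editor.1.gz /usr/share/man/man1/nvi.1.gz
```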
# update-alternatives

> A convenient tool for maintaining symbolic links to determine default commands.
> More information: https://manned.org/update-alternatives.

* Add a symbolic link:
`sudo update-alternatives --install {{path/to/symlink}} {{command_name}} {{path/to/command_binary}} {{priority}}`

* Configure a symbolic link for `java`:
`sudo update-alternatives --config {{java}}`

* Remove a symbolic link:
`sudo update-alternatives --remove {{java}} {{/opt/java/jdk1.8.0_102/bin/java}}`

* Display information about a specified command:
`update-alternatives --display {{java}}`

* Display all commands and their current selection:
`update-alternatives --get-selections`
mailx
The mailx utility provides a message sending and receiving facility. It has two major modes, selected by the options used: Send Mode and Receive Mode.

On systems that do not support the User Portability Utilities option, an application using mailx shall have the ability to send messages in an unspecified manner (Send Mode). Unless the first character of one or more lines is <tilde> ('~'), all characters in the input message shall appear in the delivered message, but additional characters may be inserted in the message before it is retrieved. On systems supporting the User Portability Utilities option, mail-receiving capabilities and other interactive features, Receive Mode, described below, shall also be enabled.

Send Mode
    Send Mode can be used by applications or users to send messages from the text in standard input.

Receive Mode
    Receive Mode is more oriented towards interactive users. Mail can be read and sent in this interactive mode. When reading mail, mailx provides commands to facilitate saving, deleting, and responding to messages. When sending mail, mailx allows editing, reviewing, and other modification of the message as it is entered.

Incoming mail shall be stored in one or more unspecified locations for each user, collectively called the system mailbox for that user. When mailx is invoked in Receive Mode, the system mailbox shall be the default place to find new mail. As messages are read, they shall be marked to be moved to a secondary file for storage, unless specific action is taken. This secondary file is called the mbox and is normally located in the directory referred to by the HOME environment variable (see MBOX in the ENVIRONMENT VARIABLES section for a description of this file). Messages shall remain in this file until explicitly removed. When the -f option is used to read mail messages from secondary files, messages shall be retained in those files unless specifically removed. All three of these locations (system mailbox, mbox, and secondary file) are referred to in this section simply as "mailboxes", unless more specific identification is required.

The mailx utility shall conform to the Base Definitions volume of POSIX.1-2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported. (Only the -s subject option shall be required on all systems. The other options are required only on systems supporting the User Portability Utilities option.)

-e
    Test for the presence of mail in the system mailbox. The mailx utility shall write nothing and exit with a successful return code if there is mail to read.

-f
    Read messages from the file named by the file operand instead of the system mailbox. (See also folder.) If no file operand is specified, read messages from mbox instead of the system mailbox.

-F
    Record the message in a file named after the first recipient. The name is the login-name portion of the address found first on the To: line in the mail header. Overrides the record variable, if set (see Internal Variables in mailx).

-H
    Write a header summary only.

-i
    Ignore interrupts. (See also ignore.)

-n
    Do not initialize from the system default start-up file. See the EXTENDED DESCRIPTION section.

-N
    Do not write an initial header summary.

-s subject
    Set the Subject header field to subject. All characters in the subject string shall appear in the delivered message. The results are unspecified if subject is longer than {LINE_MAX} - 10 bytes or contains a <newline>.

-u user
    Read the system mailbox of the login name user. This shall only be successful if the invoking user has appropriate privileges to read the system mailbox of that user.
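Because -e communicates purely through its exit status, it composes naturally with shell conditionals; a small sketch (the recipient address is a placeholder):

    # check for mail, then send a note from standard input (Send Mode)
    if mailx -e; then
        echo 'You have mail.'
    fi
    printf 'meeting moved to 3pm\n' | mailx -s 'schedule change' user@example.com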
# mailx

> Send and receive mail.
> More information: https://manned.org/mailx.

* Send mail (the content should be typed after the command, and ended with `Ctrl+D`):
`mailx -s "{{subject}}" {{to_addr}}`

* Send mail with content passed from another command:
`echo "{{content}}" | mailx -s "{{subject}}" {{to_addr}}`

* Send mail with content read from a file:
`mailx -s "{{subject}}" {{to_addr}} < {{content.txt}}`

* Send mail to a recipient and CC to another address:
`mailx -s "{{subject}}" -c {{cc_addr}} {{to_addr}}`

* Send mail specifying the sender address:
`mailx -s "{{subject}}" -r {{from_addr}} {{to_addr}}`

* Send mail with an attachment:
`mailx -a {{path/to/file}} -s "{{subject}}" {{to_addr}}`
dot
The shell shall execute commands from the file in the current environment. If file does not contain a <slash>, the shell shall use the search path specified by PATH to find the directory containing file. Unlike normal command search, however, the file searched for by the dot utility need not be executable. If no readable file is found, a non-interactive shell shall abort; an interactive shell shall write a diagnostic message to standard error, but this condition shall not be considered a syntax error.

Options: None.
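A brief sketch of the behavior described above, assuming a scratch file env.sh; because the file is executed in the current environment, its variable assignments survive after the command returns (unlike running it as a separate script):

    printf 'GREETING=hello\n' > env.sh
    . ./env.sh
    echo "$GREETING"    # prints "hello" in the current shell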
# dot

> Render an image of a `linear directed` network graph from a `graphviz` file.
> Layouts: `dot`, `neato`, `twopi`, `circo`, `fdp`, `sfdp`, `osage` & `patchwork`.
> More information: https://graphviz.org/doc/info/command.html.

* Render a `png` image with a filename based on the input filename and output format (uppercase -O):
`dot -T {{png}} -O {{path/to/input.gv}}`

* Render a `svg` image with the specified output filename (lowercase -o):
`dot -T {{svg}} -o {{path/to/image.svg}} {{path/to/input.gv}}`

* Render the output in `ps`, `pdf`, `svg`, `fig`, `png`, `gif`, `jpg`, `json`, or `dot` format:
`dot -T {{format}} -O {{path/to/input.gv}}`

* Render a `gif` image using `stdin` and `stdout`:
`echo "{{digraph {this -> that} }}" | dot -T {{gif}} > {{path/to/image.gif}}`

* Display help:
`dot -?`
gcc
When you invoke GCC, it normally does preprocessing, compilation, assembly and linking. The "overall options" allow you to stop this process at an intermediate stage. For example, the -c option says not to run the linker. Then the output consists of object files output by the assembler. Other options are passed on to one or more stages of processing. Some options control the preprocessor and others the compiler itself. Yet other options control the assembler and linker; most of these are not documented here, since you rarely need to use any of them.

Most of the command-line options that you can use with GCC are useful for C programs; when an option is only useful with another language (usually C++), the explanation says so explicitly. If the description for a particular option does not mention a source language, you can use that option with all supported languages.

The usual way to run GCC is to run the executable called gcc, or machine-gcc when cross-compiling, or machine-gcc-version to run a specific version of GCC. When you compile C++ programs, you should invoke GCC as g++ instead.

The gcc program accepts options and file names as operands. Many options have multi-letter names; therefore multiple single-letter options may not be grouped: -dv is very different from -d -v.

You can mix options and other arguments. For the most part, the order you use doesn't matter. Order does matter when you use several options of the same kind; for example, if you specify -L more than once, the directories are searched in the order specified. Also, the placement of the -l option is significant.

Many options have long names starting with -f or with -W (for example, -fmove-loop-invariants, -Wformat, and so on). Most of these have both positive and negative forms; the negative form of -ffoo is -fno-foo. This manual documents only one of these two forms, whichever one is not the default.

Some options take one or more arguments typically separated either by a space or by the equals sign (=) from the option name. Unless documented otherwise, an argument can be either numeric or a string. Numeric arguments must typically be small unsigned decimal or hexadecimal integers. Hexadecimal arguments must begin with the 0x prefix. Arguments to options that specify a size threshold of some sort may be arbitrarily large decimal or hexadecimal integers followed by a byte size suffix designating a multiple of bytes such as "kB" and "KiB" for kilobyte and kibibyte, respectively, "MB" and "MiB" for megabyte and mebibyte, "GB" and "GiB" for gigabyte and gibibyte, and so on. Such arguments are designated by byte-size in the following text. Refer to the NIST, IEC, and other relevant national and international standards for the full listing and explanation of the binary and decimal byte size prefixes.

Option Summary

Here is a summary of all the options, grouped by type. Explanations are in the following sections.

Overall Options
    -c -S -E -o file -x language -v -
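As a sketch of stopping the pipeline at each intermediate stage using the overall options above, assuming a source file hello.c:

    gcc -E hello.c -o hello.i   # stop after preprocessing
    gcc -S hello.i              # stop after compilation proper (writes hello.s)
    gcc -c hello.s              # stop after assembly (writes hello.o)
    gcc hello.o -o hello        # link the object file into an executable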
# gcc

> Preprocess and compile C and C++ source files, then assemble and link them together.
> More information: https://gcc.gnu.org.

* Compile multiple source files into an executable:
`gcc {{path/to/source1.c path/to/source2.c ...}} -o {{path/to/output_executable}}`

* Show common warnings, debug symbols in output, and optimize without affecting debugging:
`gcc {{path/to/source.c}} -Wall -g -Og -o {{path/to/output_executable}}`

* Include libraries from a different path:
`gcc {{path/to/source.c}} -o {{path/to/output_executable}} -I{{path/to/header}} -L{{path/to/library}} -l{{library_name}}`

* Compile source code into Assembler instructions:
`gcc -S {{path/to/source.c}}`

* Compile source code into an object file without linking:
`gcc -c {{path/to/source.c}}`
whoami
Print the user name associated with the current effective user ID. Same as id -un.

--help
    display this help and exit

--version
    output version information and exit
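Since the name printed follows the effective user ID, a common scripting use is a privilege check; a minimal sketch:

    if [ "$(whoami)" != "root" ]; then
        echo "this script must be run as root" >&2
        exit 1
    fi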
# whoami

> Print the username associated with the current effective user ID.
> More information: https://www.gnu.org/software/coreutils/whoami.

* Display the username of the currently logged-in user:
`whoami`

* Display the username after a change in the user ID:
`sudo whoami`
gitk
Displays changes in a repository or a selected set of commits. This includes visualizing the commit graph, showing information related to each commit, and the files in the trees of each revision.

To control which revisions to show, gitk supports most options applicable to the git rev-list command. It also supports a few options applicable to the git diff-* commands to control how the changes each commit introduces are shown. Finally, it supports some gitk-specific options.

gitk generally only understands options with arguments in the stuck form (see gitcli(7)) due to limitations in the command-line parser.

rev-list options and arguments

This manual page describes only the most frequently used options. See git-rev-list(1) for a complete list.

--all
    Show all refs (branches, tags, etc.).

--branches[=<pattern>], --tags[=<pattern>], --remotes[=<pattern>]
    Pretend as if all the branches (tags, remote branches, resp.) are listed on the command line as <commit>. If <pattern> is given, limit refs to ones matching given shell glob. If pattern lacks ?, *, or [, /* at the end is implied.

--since=<date>
    Show commits more recent than a specific date.

--until=<date>
    Show commits older than a specific date.

--date-order
    Sort commits by date when possible.

--merge
    After an attempt to merge stops with conflicts, show the commits on the history between two branches (i.e. the HEAD and the MERGE_HEAD) that modify the conflicted files and do not exist on all the heads being merged.

--left-right
    Mark which side of a symmetric difference a commit is reachable from. Commits from the left side are prefixed with a < symbol and those from the right with a > symbol.

--full-history
    When filtering history with <path>..., does not prune some history. (See "History simplification" in git-log(1) for a more detailed explanation.)

--simplify-merges
    Additional option to --full-history to remove some needless merges from the resulting history, as there are no selected commits contributing to this merge. (See "History simplification" in git-log(1) for a more detailed explanation.)

--ancestry-path
    When given a range of commits to display (e.g. commit1..commit2 or commit2 ^commit1), only display commits that exist directly on the ancestry chain between the commit1 and commit2, i.e. commits that are both descendants of commit1, and ancestors of commit2. (See "History simplification" in git-log(1) for a more detailed explanation.)

-L<start>,<end>:<file>, -L:<funcname>:<file>
    Trace the evolution of the line range given by <start>,<end>, or by the function name regex <funcname>, within the <file>. You may not give any pathspec limiters. This is currently limited to a walk starting from a single revision, i.e., you may only give zero or one positive revision arguments, and <start> and <end> (or <funcname>) must exist in the starting revision. You can specify this option more than once. Implies --patch. Patch output can be suppressed using --no-patch, but other diff formats (namely --raw, --numstat, --shortstat, --dirstat, --summary, --name-only, --name-status, --check) are not currently implemented.

    <start> and <end> can take one of these forms:

    • number
      If <start> or <end> is a number, it specifies an absolute line number (lines count from 1).

    • /regex/
      This form will use the first line matching the given POSIX regex. If <start> is a regex, it will search from the end of the previous -L range, if any, otherwise from the start of file. If <start> is ^/regex/, it will search from the start of file.
      If <end> is a regex, it will search starting at the line given by <start>.

    • +offset or -offset
      This is only valid for <end> and will specify a number of lines before or after the line given by <start>.

    If :<funcname> is given in place of <start> and <end>, it is a regular expression that denotes the range from the first funcname line that matches <funcname>, up to the next funcname line. :<funcname> searches from the end of the previous -L range, if any, otherwise from the start of file. ^:<funcname> searches from the start of file. The function names are determined in the same way as git diff works out patch hunk headers (see Defining a custom hunk-header in gitattributes(5)).

<revision range>
    Limit the revisions to show. This can be either a single revision meaning show from the given revision and back, or it can be a range in the form "<from>..<to>" to show all revisions between <from> and back to <to>. Note, more advanced revision selection can be applied. For a more complete list of ways to spell object names, see gitrevisions(7).

<path>...
    Limit commits to the ones touching files in the given paths. Note, to avoid ambiguity with respect to revision names use "--" to separate the paths from any preceding options.

gitk-specific options

--argscmd=<command>
    Command to be run each time gitk has to determine the revision range to show. The command is expected to print on its standard output a list of additional revisions to be shown, one per line. Use this instead of explicitly specifying a <revision range> if the set of commits to show may vary between refreshes.

--select-commit=<ref>
    Select the specified commit after loading the graph. Default behavior is equivalent to specifying --select-commit=HEAD.
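A short sketch of the -L forms in their stuck form, tracing a hypothetical main.c (the line numbers and function name are assumptions):

    gitk -L10,40:main.c            # absolute line range
    gitk -L:main:main.c            # range covering the function "main"
    gitk -L/^struct/,/^}/:main.c   # regex-delimited range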
# gitk

> A graphical Git repository browser.
> More information: https://git-scm.com/docs/gitk.

* Show the repository browser for the current Git repository:
`gitk`

* Show repository browser for a specific file or directory:
`gitk {{path/to/file_or_directory}}`

* Show commits made since 1 week ago:
`gitk --since="{{1 week ago}}"`

* Show commits older than 1/1/2016:
`gitk --until="{{1/1/2016}}"`

* Show at most 100 changes in all branches:
`gitk --max-count={{100}} --all`
realpath
Print the resolved absolute file name; all but the last component must exist.

-e, --canonicalize-existing
    all components of the path must exist

-m, --canonicalize-missing
    no path components need exist or be a directory

-L, --logical
    resolve '..' components before symlinks

-P, --physical
    resolve symlinks as encountered (default)

-q, --quiet
    suppress most error messages

--relative-to=DIR
    print the resolved path relative to DIR

--relative-base=DIR
    print absolute paths unless paths below DIR

-s, --strip, --no-symlinks
    don't expand symlinks

-z, --zero
    end each output line with NUL, not newline

--help
    display this help and exit

--version
    output version information and exit
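A few worked invocations of the options above (paths assume a typical layout; results shown in comments):

    realpath --relative-to=/usr /usr/bin/env   # prints: bin/env
    realpath -m /no/such/dir/file              # resolves even though nothing exists
    realpath -e /no/such/dir/file              # fails: every component must exist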
# realpath

> Display the resolved absolute path for a file or directory.
> More information: https://www.gnu.org/software/coreutils/realpath.

* Display the absolute path for a file or directory:
`realpath {{path/to/file_or_directory}}`

* Require all path components to exist:
`realpath --canonicalize-existing {{path/to/file_or_directory}}`

* Resolve ".." components before symlinks:
`realpath --logical {{path/to/file_or_directory}}`

* Disable symlink expansion:
`realpath --no-symlinks {{path/to/file_or_directory}}`

* Suppress error messages:
`realpath --quiet {{path/to/file_or_directory}}`
csplit
Output pieces of FILE separated by PATTERN(s) to files 'xx00', 'xx01', ..., and output byte counts of each piece to standard output. Read standard input if FILE is -.

Mandatory arguments to long options are mandatory for short options too.

-b, --suffix-format=FORMAT
    use sprintf FORMAT instead of %02d

-f, --prefix=PREFIX
    use PREFIX instead of 'xx'

-k, --keep-files
    do not remove output files on errors

--suppress-matched
    suppress the lines matching PATTERN

-n, --digits=DIGITS
    use specified number of digits instead of 2

-s, --quiet, --silent
    do not print counts of output file sizes

-z, --elide-empty-files
    suppress empty output files

--help
    display this help and exit

--version
    output version information and exit

Each PATTERN may be:

INTEGER
    copy up to but not including specified line number

/REGEXP/[OFFSET]
    copy up to but not including a matching line

%REGEXP%[OFFSET]
    skip to, but not including a matching line

{INTEGER}
    repeat the previous pattern specified number of times

{*}
    repeat the previous pattern as many times as possible

A line OFFSET is an integer optionally preceded by '+' or '-'.
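A sketch combining the regexp pattern and {*} repetition above: split a hypothetical book.txt at every line beginning with "Chapter", keeping whatever was produced if the final repetition fails:

    csplit -k -f chap_ book.txt '/^Chapter/' '{*}'
    # writes chap_00, chap_01, ... and prints each piece's byte count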
# csplit

> Split a file into pieces.
> This generates files named "xx00", "xx01", and so on.
> More information: https://www.gnu.org/software/coreutils/csplit.

* Split a file at lines 5 and 23:
`csplit {{path/to/file}} {{5}} {{23}}`

* Split a file every 5 lines (this will fail if the total number of lines is not divisible by 5):
`csplit {{path/to/file}} {{5}} {*}`

* Split a file every 5 lines, ignoring exact-division error:
`csplit -k {{path/to/file}} {{5}} {*}`

* Split a file at line 5 and use a custom prefix for the output files:
`csplit {{path/to/file}} {{5}} -f {{prefix}}`

* Split a file at a line matching a regular expression:
`csplit {{path/to/file}} /{{regular_expression}}/`
ps
The ps utility shall write information about processes, subject to having appropriate privileges to obtain information about those processes. By default, ps shall select all processes with the same effective user ID as the current user and the same controlling terminal as the invoker.

The ps utility shall conform to the Base Definitions volume of POSIX.1-2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported:

-a
    Write information for all processes associated with terminals. Implementations may omit session leaders from this list.

-A
    Write information for all processes.

-d
    Write information for all processes, except session leaders.

-e
    Write information for all processes. (Equivalent to -A.)

-f
    Generate a full listing. (See the STDOUT section for the contents of a full listing.)

-g grouplist
    Write information for processes whose session leaders are given in grouplist. The application shall ensure that the grouplist is a single argument in the form of a <blank> or <comma>-separated list.

-G grouplist
    Write information for processes whose real group ID numbers are given in grouplist. The application shall ensure that the grouplist is a single argument in the form of a <blank> or <comma>-separated list.

-l
    Generate a long listing. (See STDOUT for the contents of a long listing.)

-n namelist
    Specify the name of an alternative system namelist file in place of the default. The name of the default file and the format of a namelist file are unspecified.

-o format
    Write information according to the format specification given in format. This is fully described in the STDOUT section. Multiple -o options can be specified; the format specification shall be interpreted as the <space>-separated concatenation of all the format option-arguments.

-p proclist
    Write information for processes whose process ID numbers are given in proclist. The application shall ensure that the proclist is a single argument in the form of a <blank> or <comma>-separated list.

-t termlist
    Write information for processes associated with terminals given in termlist. The application shall ensure that the termlist is a single argument in the form of a <blank> or <comma>-separated list. Terminal identifiers shall be given in an implementation-defined format. On XSI-conformant systems, they shall be given in one of two forms: the device's filename (for example, tty04) or, if the device's filename starts with tty, just the identifier following the characters tty (for example, "04").

-u userlist
    Write information for processes whose user ID numbers or login names are given in userlist. The application shall ensure that the userlist is a single argument in the form of a <blank> or <comma>-separated list. In the listing, the numerical user ID shall be written unless the -f option is used, in which case the login name shall be written.

-U userlist
    Write information for processes whose real user ID numbers or login names are given in userlist. The application shall ensure that the userlist is a single argument in the form of a <blank> or <comma>-separated list.

With the exception of -f, -l, -n namelist, and -o format, all of the options shown are used to select processes. If any are specified, the default list shall be ignored and ps shall select the processes represented by the inclusive OR of all the selection-criteria options.
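A small sketch combining a selection option with -o user-defined output (here the shell's own process ID via the $$ parameter):

    ps -o pid,ppid,comm -p "$$"   # chosen columns for a single process
    ps -ef                        # full listing of every process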
# ps

> Information about running processes.
> More information: https://www.unix.com/man-page/osx/1/ps/.

* List all running processes:
`ps aux`

* List all running processes including the full command string:
`ps auxww`

* Search for a process that matches a string:
`ps aux | grep {{string}}`

* Get the parent PID of a process:
`ps -o ppid= -p {{pid}}`

* Sort processes by memory usage:
`ps -m`

* Sort processes by CPU usage:
`ps -r`
journalctl
journalctl is used to print the log entries stored in the journal by systemd-journald.service(8) and systemd-journal-remote.service(8).

If called without parameters, it will show the contents of the journal accessible to the calling user, starting with the oldest entry collected.

If one or more match arguments are passed, the output is filtered accordingly. A match is in the format "FIELD=VALUE", e.g. "_SYSTEMD_UNIT=httpd.service", referring to the components of a structured journal entry. See systemd.journal-fields(7) for a list of well-known fields. If multiple matches are specified matching different fields, the log entries are filtered by both, i.e. the resulting output will show only entries matching all the specified matches of this kind. If two matches apply to the same field, then they are automatically matched as alternatives, i.e. the resulting output will show entries matching any of the specified matches for the same field. Finally, the character "+" may appear as a separate word between other terms on the command line. This causes all matches before and after to be combined in a disjunction (i.e. logical OR).

It is also possible to filter the entries by specifying an absolute file path as an argument. The file path may be a file or a symbolic link and the file must exist at the time of the query. If a file path refers to an executable binary, an "_EXE=" match for the canonicalized binary path is added to the query. If a file path refers to an executable script, a "_COMM=" match for the script name is added to the query. If a file path refers to a device node, "_KERNEL_DEVICE=" matches for the kernel name of the device and for each of its ancestor devices is added to the query. Symbolic links are dereferenced, kernel names are synthesized, and parent devices are identified from the environment at the time of the query. In general, a device node is the best proxy for an actual device, as log entries do not usually contain fields that identify an actual device. For the resulting log entries to be correct for the actual device, the relevant parts of the environment at the time the entry was logged, in particular the actual device corresponding to the device node, must have been the same as those at the time of the query. Because device nodes generally change their corresponding devices across reboots, specifying a device node path causes the resulting entries to be restricted to those from the current boot.

Additional constraints may be added using options --boot, --unit=, etc., to further limit what entries will be shown (logical AND).

Output is interleaved from all accessible journal files, whether they are rotated or currently being written, and regardless of whether they belong to the system itself or are accessible user journals. The --header option can be used to identify which files are being shown. The set of journal files which will be used can be modified using the --user, --system, --directory, and --file options, see below.

All users are granted access to their private per-user journals. However, by default, only root and users who are members of a few special groups are granted access to the system journal and the journals of other users. Members of the groups "systemd-journal", "adm", and "wheel" can read all journal files. Note that the two latter groups traditionally have additional privileges specified by the distribution. Members of the "wheel" group can often perform administrative tasks.
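A sketch of the match logic described above (the unit names are placeholders): repeated values for one field OR together, different fields AND together, and "+" forms an explicit disjunction:

    journalctl _SYSTEMD_UNIT=a.service _SYSTEMD_UNIT=b.service   # unit a OR unit b
    journalctl _SYSTEMD_UNIT=a.service _PID=42                   # unit a AND PID 42
    journalctl _SYSTEMD_UNIT=a.service + _PID=1                  # unit a OR PID 1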
The output is paged through less by default, and long lines are "truncated" to screen width. The hidden part can be viewed by using the left-arrow and right-arrow keys. Paging can be disabled; see the --no-pager option and the "Environment" section below.

When outputting to a tty, lines are colored according to priority: lines of level ERROR and higher are colored red; lines of level NOTICE and higher are highlighted; lines of level DEBUG are colored lighter grey; other lines are displayed normally.

To write entries to the journal, a few methods may be used. In general, output from systemd units is automatically connected to the journal, see systemd-journald.service(8). In addition, systemd-cat(1) may be used to send messages to the journal directly.
# journalctl

> Query the systemd journal.
> More information: https://manned.org/journalctl.

* Show all messages with priority level 3 (errors) from this [b]oot:
`journalctl -b --priority={{3}}`

* Show all messages from last [b]oot:
`journalctl -b -1`

* Delete journal logs which are older than 2 days:
`journalctl --vacuum-time={{2d}}`

* [f]ollow new messages (like `tail -f` for traditional syslog):
`journalctl -f`

* Show all messages by a specific [u]nit:
`journalctl -u {{unit}}`

* Filter messages within a time range (either timestamp or placeholders like "yesterday"):
`journalctl --since {{now|today|yesterday|tomorrow}} --until "{{YYYY-MM-DD HH:MM:SS}}"`

* Show all messages by a specific process:
`journalctl _PID={{pid}}`

* Show all messages by a specific executable:
`journalctl {{path/to/executable}}`
head
The head utility shall copy its input files to the standard output, ending the output for each file at a designated point. Copying shall end at the point in each input file indicated by the -n number option. The option-argument number shall be counted in units of lines.

The head utility shall conform to the Base Definitions volume of POSIX.1-2017, Section 12.2, Utility Syntax Guidelines. The following option shall be supported:

-n number
    The first number lines of each input file shall be copied to standard output. The application shall ensure that the number option-argument is a positive decimal integer. When a file contains less than number lines, it shall be copied to standard output in its entirety. This shall not be an error.

If no options are specified, head shall act as if -n 10 had been specified.
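A minimal sketch of the one required option (a file with fewer lines is simply copied in full, without error):

    head -n 5 /etc/passwd   # first five lines
    head /etc/passwd        # no option: behaves as if -n 10 were given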
# head

> Output the first part of files.
> More information: https://keith.github.io/xcode-man-pages/head.1.html.

* Output the first few lines of a file:
`head --lines {{8}} {{path/to/file}}`

* Output the first few bytes of a file:
`head --bytes {{8}} {{path/to/file}}`

* Output everything but the last few lines of a file:
`head --lines -{{8}} {{path/to/file}}`

* Output everything but the last few bytes of a file:
`head --bytes -{{8}} {{path/to/file}}`
basename
Print NAME with any leading directory components removed. If specified, also remove a trailing SUFFIX.

Mandatory arguments to long options are mandatory for short options too.

-a, --multiple
    support multiple arguments and treat each as a NAME

-s, --suffix=SUFFIX
    remove a trailing SUFFIX; implies -a

-z, --zero
    end each output line with NUL, not newline

--help
    display this help and exit

--version
    output version information and exit
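Two worked invocations of the stripping behavior (paths are illustrative; results shown in comments):

    basename /usr/bin/sort            # prints: sort
    basename -s .h include/stdio.h    # prints: stdio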
# basename

> Remove leading directory portions from a path.
> More information: https://www.gnu.org/software/coreutils/basename.

* Show only the file name from a path:
`basename {{path/to/file}}`

* Show only the rightmost directory name from a path:
`basename {{path/to/directory/}}`

* Show only the file name from a path, with a suffix removed:
`basename {{path/to/file}} {{suffix}}`
git-maintenance
Run tasks to optimize Git repository data, speeding up other Git commands and reducing storage requirements for the repository. Git commands that add repository data, such as git add or git fetch, are optimized for a responsive user experience. These commands do not take time to optimize the Git data, since such optimizations scale with the full size of the repository while these user commands each perform a relatively small action. The git maintenance command provides flexibility for how to optimize the Git repository.

--auto
    When combined with the run subcommand, run maintenance tasks only if certain thresholds are met. For example, the gc task runs when the number of loose objects exceeds the number stored in the gc.auto config setting, or when the number of pack-files exceeds the gc.autoPackLimit config setting. Not compatible with the --schedule option.

--schedule
    When combined with the run subcommand, run maintenance tasks only if certain time conditions are met, as specified by the maintenance.<task>.schedule config value for each <task>. This config value specifies a number of seconds since the last time that task ran, according to the maintenance.<task>.lastRun config value. The tasks that are tested are those provided by the --task=<task> option(s) or those with maintenance.<task>.enabled set to true.

--quiet
    Do not report progress or other information over stderr.

--task=<task>
    If this option is specified one or more times, then only run the specified tasks in the specified order. If no --task=<task> arguments are specified, then only the tasks with maintenance.<task>.enabled configured as true are considered. See the TASKS section for the list of accepted <task> values.

--scheduler=auto|crontab|systemd-timer|launchctl|schtasks
    When combined with the start subcommand, specify the scheduler for running the hourly, daily and weekly executions of git maintenance run. Possible values for <scheduler> are auto, crontab (POSIX), systemd-timer (Linux), launchctl (macOS), and schtasks (Windows). When auto is specified, the appropriate platform-specific scheduler is used; on Linux, systemd-timer is used if available, otherwise crontab. Default is auto.
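A sketch of a one-off run limited to particular tasks, in the given order (the task names are among those accepted by --task):

    git maintenance run --task=commit-graph --task=loose-objects
    git maintenance run --auto   # only runs tasks whose thresholds are currently met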
# git-maintenance

> Run tasks to optimize Git repository data.
> More information: https://git-scm.com/docs/git-maintenance.

* Register the current repository in the user's list of repositories so that maintenance is run on it daily:
`git maintenance register`

* Start running maintenance on the current repository:
`git maintenance start`

* Halt the background maintenance schedule for the current repository:
`git maintenance stop`

* Remove the current repository from the user's maintenance repository list:
`git maintenance unregister`

* Run a specific maintenance task on the current repository:
`git maintenance run --task={{commit-graph|gc|incremental-repack|loose-objects|pack-refs|prefetch}}`
git-diff-files
Compares the files in the working tree and the index. When paths are specified, compares only those named paths. Otherwise all entries in the index are compared. The output format is the same as for git diff-index and git diff-tree.

-p, -u, --patch
    Generate patch (see section titled "Generating patch text with -p").

-s, --no-patch
    Suppress all output from the diff machinery. Useful for commands like git show that show the patch by default to squelch their output, or to cancel the effect of options like --patch, --stat earlier on the command line in an alias.

-U<n>, --unified=<n>
    Generate diffs with <n> lines of context instead of the usual three. Implies --patch.

--output=<file>
    Output to a specific file instead of stdout.

--output-indicator-new=<char>, --output-indicator-old=<char>, --output-indicator-context=<char>
    Specify the character used to indicate new, old or context lines in the generated patch. Normally they are +, - and ' ' respectively.

--raw
    Generate the diff in raw format. This is the default.

--patch-with-raw
    Synonym for -p --raw.

--indent-heuristic
    Enable the heuristic that shifts diff hunk boundaries to make patches easier to read. This is the default.

--no-indent-heuristic
    Disable the indent heuristic.

--minimal
    Spend extra time to make sure the smallest possible diff is produced.

--patience
    Generate a diff using the "patience diff" algorithm.

--histogram
    Generate a diff using the "histogram diff" algorithm.

--anchored=<text>
    Generate a diff using the "anchored diff" algorithm. This option may be specified more than once. If a line exists in both the source and destination, exists only once, and starts with this text, this algorithm attempts to prevent it from appearing as a deletion or addition in the output. It uses the "patience diff" algorithm internally.

--diff-algorithm={patience|minimal|histogram|myers}
    Choose a diff algorithm. The variants are as follows:

    default, myers
        The basic greedy diff algorithm. Currently, this is the default.

    minimal
        Spend extra time to make sure the smallest possible diff is produced.

    patience
        Use "patience diff" algorithm when generating patches.

    histogram
        This algorithm extends the patience algorithm to "support low-occurrence common elements".

    For instance, if you configured the diff.algorithm variable to a non-default value and want to use the default one, then you have to use the --diff-algorithm=default option.

--stat[=<width>[,<name-width>[,<count>]]]
    Generate a diffstat. By default, as much space as necessary will be used for the filename part, and the rest for the graph part. Maximum width defaults to terminal width, or 80 columns if not connected to a terminal, and can be overridden by <width>. The width of the filename part can be limited by giving another width <name-width> after a comma. The width of the graph part can be limited by using --stat-graph-width=<width> (affects all commands generating a stat graph) or by setting diff.statGraphWidth=<width> (does not affect git format-patch). By giving a third parameter <count>, you can limit the output to the first <count> lines, followed by ... if there are more. These parameters can also be set individually with --stat-width=<width>, --stat-name-width=<name-width> and --stat-count=<count>.

--compact-summary
    Output a condensed summary of extended header information such as file creations or deletions ("new" or "gone", optionally "+l" if it's a symlink) and mode changes ("+x" or "-x" for adding or removing executable bit respectively) in diffstat.
    The information is put between the filename part and the graph part. Implies --stat.

--numstat
    Similar to --stat, but shows number of added and deleted lines in decimal notation and pathname without abbreviation, to make it more machine friendly. For binary files, outputs two - instead of saying 0 0.

--shortstat
    Output only the last line of the --stat format containing total number of modified files, as well as number of added and deleted lines.

-X[<param1,param2,...>], --dirstat[=<param1,param2,...>]
    Output the distribution of relative amount of changes for each sub-directory. The behavior of --dirstat can be customized by passing it a comma separated list of parameters. The defaults are controlled by the diff.dirstat configuration variable (see git-config(1)). The following parameters are available:

    changes
        Compute the dirstat numbers by counting the lines that have been removed from the source, or added to the destination. This ignores the amount of pure code movements within a file. In other words, rearranging lines in a file is not counted as much as other changes. This is the default behavior when no parameter is given.

    lines
        Compute the dirstat numbers by doing the regular line-based diff analysis, and summing the removed/added line counts. (For binary files, count 64-byte chunks instead, since binary files have no natural concept of lines). This is a more expensive --dirstat behavior than the changes behavior, but it does count rearranged lines within a file as much as other changes. The resulting output is consistent with what you get from the other --*stat options.

    files
        Compute the dirstat numbers by counting the number of files changed. Each changed file counts equally in the dirstat analysis. This is the computationally cheapest --dirstat behavior, since it does not have to look at the file contents at all.

    cumulative
        Count changes in a child directory for the parent directory as well. Note that when using cumulative, the sum of the percentages reported may exceed 100%. The default (non-cumulative) behavior can be specified with the noncumulative parameter.

    <limit>
        An integer parameter specifies a cut-off percent (3% by default). Directories contributing less than this percentage of the changes are not shown in the output.

    Example: The following will count changed files, while ignoring directories with less than 10% of the total amount of changed files, and accumulating child directory counts in the parent directories: --dirstat=files,10,cumulative.

--cumulative
    Synonym for --dirstat=cumulative

--dirstat-by-file[=<param1,param2>...]
    Synonym for --dirstat=files,param1,param2...

--summary
    Output a condensed summary of extended header information such as creations, renames and mode changes.

--patch-with-stat
    Synonym for -p --stat.

-z
    When --raw, --numstat, --name-only or --name-status has been given, do not munge pathnames and use NULs as output field terminators. Without this option, pathnames with "unusual" characters are quoted as explained for the configuration variable core.quotePath (see git-config(1)).

--name-only
    Show only names of changed files. The file names are often encoded in UTF-8. For more information see the discussion about encoding in the git-log(1) manual page.

--name-status
    Show only names and status of changed files. See the description of the --diff-filter option on what the status letters mean. Just like --name-only the file names are often encoded in UTF-8.

--submodule[=<format>]
    Specify how differences in submodules are shown.
    When specifying --submodule=short the short format is used. This format just shows the names of the commits at the beginning and end of the range. When --submodule or --submodule=log is specified, the log format is used. This format lists the commits in the range like git-submodule(1) summary does. When --submodule=diff is specified, the diff format is used. This format shows an inline diff of the changes in the submodule contents between the commit range. Defaults to diff.submodule or the short format if the config option is unset.

--color[=<when>]
    Show colored diff. --color (i.e. without =<when>) is the same as --color=always. <when> can be one of always, never, or auto.

--no-color
    Turn off colored diff. It is the same as --color=never.

--color-moved[=<mode>]
    Moved lines of code are colored differently. The <mode> defaults to no if the option is not given and to zebra if the option with no mode is given. The mode must be one of:

    no
        Moved lines are not highlighted.

    default
        Is a synonym for zebra. This may change to a more sensible mode in the future.

    plain
        Any line that is added in one location and was removed in another location will be colored with color.diff.newMoved. Similarly color.diff.oldMoved will be used for removed lines that are added somewhere else in the diff. This mode picks up any moved line, but it is not very useful in a review to determine if a block of code was moved without permutation.

    blocks
        Blocks of moved text of at least 20 alphanumeric characters are detected greedily. The detected blocks are painted using either the color.diff.{old,new}Moved color. Adjacent blocks cannot be told apart.

    zebra
        Blocks of moved text are detected as in blocks mode. The blocks are painted using either the color.diff.{old,new}Moved color or color.diff.{old,new}MovedAlternative. The change between the two colors indicates that a new block was detected.

    dimmed-zebra
        Similar to zebra, but additional dimming of uninteresting parts of moved code is performed. The bordering lines of two adjacent blocks are considered interesting, the rest is uninteresting. dimmed_zebra is a deprecated synonym.

--no-color-moved
    Turn off move detection. This can be used to override configuration settings. It is the same as --color-moved=no.

--color-moved-ws=<modes>
    This configures how whitespace is ignored when performing the move detection for --color-moved. These modes can be given as a comma separated list:

    no
        Do not ignore whitespace when performing move detection.

    ignore-space-at-eol
        Ignore changes in whitespace at EOL.

    ignore-space-change
        Ignore changes in amount of whitespace. This ignores whitespace at line end, and considers all other sequences of one or more whitespace characters to be equivalent.

    ignore-all-space
        Ignore whitespace when comparing lines. This ignores differences even if one line has whitespace where the other line has none.

    allow-indentation-change
        Initially ignore any whitespace in the move detection, then group the moved code blocks only into a block if the change in whitespace is the same per line. This is incompatible with the other modes.

--no-color-moved-ws
    Do not ignore whitespace when performing move detection. This can be used to override configuration settings. It is the same as --color-moved-ws=no.

--word-diff[=<mode>]
    Show a word diff, using the <mode> to delimit changed words. By default, words are delimited by whitespace; see --word-diff-regex below. The <mode> defaults to plain, and must be one of:

    color
        Highlight changed words using only colors. Implies --color.
    plain
        Show words as [-removed-] and {+added+}. Makes no attempts to escape the delimiters if they appear in the input, so the output may be ambiguous.

    porcelain
        Use a special line-based format intended for script consumption. Added/removed/unchanged runs are printed in the usual unified diff format, starting with a +/-/` ` character at the beginning of the line and extending to the end of the line. Newlines in the input are represented by a tilde ~ on a line of its own.

    none
        Disable word diff again.

    Note that despite the name of the first mode, color is used to highlight the changed parts in all modes if enabled.

--word-diff-regex=<regex>
    Use <regex> to decide what a word is, instead of considering runs of non-whitespace to be a word. Also implies --word-diff unless it was already enabled. Every non-overlapping match of the <regex> is considered a word. Anything between these matches is considered whitespace and ignored(!) for the purposes of finding differences. You may want to append |[^[:space:]] to your regular expression to make sure that it matches all non-whitespace characters. A match that contains a newline is silently truncated(!) at the newline. For example, --word-diff-regex=. will treat each character as a word and, correspondingly, show differences character by character. The regex can also be set via a diff driver or configuration option, see gitattributes(5) or git-config(1). Giving it explicitly overrides any diff driver or configuration setting. Diff drivers override configuration settings.

--color-words[=<regex>]
    Equivalent to --word-diff=color plus (if a regex was specified) --word-diff-regex=<regex>.

--no-renames
    Turn off rename detection, even when the configuration file gives the default to do so.

--[no-]rename-empty
    Whether to use empty blobs as rename source.

--check
    Warn if changes introduce conflict markers or whitespace errors. What are considered whitespace errors is controlled by core.whitespace configuration. By default, trailing whitespaces (including lines that consist solely of whitespaces) and a space character that is immediately followed by a tab character inside the initial indent of the line are considered whitespace errors. Exits with non-zero status if problems are found. Not compatible with --exit-code.

--ws-error-highlight=<kind>
    Highlight whitespace errors in the context, old or new lines of the diff. Multiple values are separated by comma, none resets previous values, default reset the list to new and all is a shorthand for old,new,context. When this option is not given, and the configuration variable diff.wsErrorHighlight is not set, only whitespace errors in new lines are highlighted. The whitespace errors are colored with color.diff.whitespace.

--full-index
    Instead of the first handful of characters, show the full pre- and post-image blob object names on the "index" line when generating patch format output.

--binary
    In addition to --full-index, output a binary diff that can be applied with git-apply. Implies --patch.

--abbrev[=<n>]
    Instead of showing the full 40-byte hexadecimal object name in diff-raw format output and diff-tree header lines, show the shortest prefix that is at least <n> hexdigits long that uniquely refers the object. In diff-patch output format, --full-index takes higher precedence, i.e. if --full-index is specified, full blob names will be shown regardless of --abbrev. Non default number of digits can be specified with --abbrev=<n>.
-B[<n>][/<m>], --break-rewrites[=[<n>][/<m>]]
    Break complete rewrite changes into pairs of delete and create. This serves two purposes:

    It affects the way a change that amounts to a total rewrite of a file is shown: not as a series of deletion and insertion mixed together with a very few lines that happen to match textually as the context, but as a single deletion of everything old followed by a single insertion of everything new. The number m controls this aspect of the -B option (defaults to 60%). -B/70% specifies that less than 30% of the original should remain in the result for Git to consider it a total rewrite (i.e. otherwise the resulting patch will be a series of deletion and insertion mixed together with context lines).

    When used with -M, a totally-rewritten file is also considered as the source of a rename (usually -M only considers a file that disappeared as the source of a rename), and the number n controls this aspect of the -B option (defaults to 50%). -B20% specifies that a change with addition and deletion compared to 20% or more of the file's size are eligible for being picked up as a possible source of a rename to another file.

-M[<n>], --find-renames[=<n>]
    Detect renames. If n is specified, it is a threshold on the similarity index (i.e. amount of addition/deletions compared to the file's size). For example, -M90% means Git should consider a delete/add pair to be a rename if more than 90% of the file hasn't changed. Without a % sign, the number is to be read as a fraction, with a decimal point before it. I.e., -M5 becomes 0.5, and is thus the same as -M50%. Similarly, -M05 is the same as -M5%. To limit detection to exact renames, use -M100%. The default similarity index is 50%.

-C[<n>], --find-copies[=<n>]
    Detect copies as well as renames. See also --find-copies-harder. If n is specified, it has the same meaning as for -M<n>.

--find-copies-harder
    For performance reasons, by default, -C option finds copies only if the original file of the copy was modified in the same changeset. This flag makes the command inspect unmodified files as candidates for the source of copy. This is a very expensive operation for large projects, so use it with caution. Giving more than one -C option has the same effect.

-D, --irreversible-delete
    Omit the preimage for deletes, i.e. print only the header but not the diff between the preimage and /dev/null. The resulting patch is not meant to be applied with patch or git apply; this is solely for people who want to just concentrate on reviewing the text after the change. In addition, the output obviously lacks enough information to apply such a patch in reverse, even manually, hence the name of the option. When used together with -B, omit also the preimage in the deletion part of a delete/create pair.

-l<num>
    The -M and -C options involve some preliminary steps that can detect subsets of renames/copies cheaply, followed by an exhaustive fallback portion that compares all remaining unpaired destinations to all relevant sources. (For renames, only remaining unpaired sources are relevant; for copies, all original sources are relevant.) For N sources and destinations, this exhaustive check is O(N^2). This option prevents the exhaustive portion of rename/copy detection from running if the number of source/destination files involved exceeds the specified number. Defaults to diff.renameLimit. Note that a value of 0 is treated as unlimited.
--diff-filter=[(A|C|D|M|R|T|U|X|B)...[*]]
    Select only files that are Added (A), Copied (C), Deleted (D), Modified (M), Renamed (R), have their type (i.e. regular file, symlink, submodule, ...) changed (T), are Unmerged (U), are Unknown (X), or have had their pairing Broken (B). Any combination of the filter characters (including none) can be used. When * (All-or-none) is added to the combination, all paths are selected if there is any file that matches other criteria in the comparison; if there is no file that matches other criteria, nothing is selected. Also, these upper-case letters can be downcased to exclude. E.g. --diff-filter=ad excludes added and deleted paths. Note that not all diffs can feature all types. For instance, copied and renamed entries cannot appear if detection for those types is disabled.

-S<string>
    Look for differences that change the number of occurrences of the specified string (i.e. addition/deletion) in a file. Intended for the scripter's use. It is useful when you're looking for an exact block of code (like a struct), and want to know the history of that block since it first came into being: use the feature iteratively to feed the interesting block in the preimage back into -S, and keep going until you get the very first version of the block. Binary files are searched as well.

-G<regex>
    Look for differences whose patch text contains added/removed lines that match <regex>. To illustrate the difference between -S<regex> --pickaxe-regex and -G<regex>, consider a commit with the following diff in the same file:

        +    return frotz(nitfol, two->ptr, 1, 0);
        ...
        -    hit = frotz(nitfol, mf2.ptr, 1, 0);

    While git log -G"frotz\(nitfol" will show this commit, git log -S"frotz\(nitfol" --pickaxe-regex will not (because the number of occurrences of that string did not change). Unless --text is supplied patches of binary files without a textconv filter will be ignored. See the pickaxe entry in gitdiffcore(7) for more information.

--find-object=<object-id>
    Look for differences that change the number of occurrences of the specified object. Similar to -S, just the argument is different in that it doesn't search for a specific string but for a specific object id. The object can be a blob or a submodule commit. It implies the -t option in git-log to also find trees.

--pickaxe-all
    When -S or -G finds a change, show all the changes in that changeset, not just the files that contain the change in <string>.

--pickaxe-regex
    Treat the <string> given to -S as an extended POSIX regular expression to match.

-O<orderfile>
    Control the order in which files appear in the output. This overrides the diff.orderFile configuration variable (see git-config(1)). To cancel diff.orderFile, use -O/dev/null. The output order is determined by the order of glob patterns in <orderfile>. All files with pathnames that match the first pattern are output first, all files with pathnames that match the second pattern (but not the first) are output next, and so on. All files with pathnames that do not match any pattern are output last, as if there was an implicit match-all pattern at the end of the file. If multiple pathnames have the same rank (they match the same pattern but no earlier patterns), their output order relative to each other is the normal order.

    <orderfile> is parsed as follows:

    • Blank lines are ignored, so they can be used as separators for readability.

    • Lines starting with a hash ("#") are ignored, so they can be used for comments.
Add a backslash ("\") to the beginning of the pattern if it starts with a hash. • Each other line contains a single pattern. Patterns have the same syntax and semantics as patterns used for fnmatch(3) without the FNM_PATHNAME flag, except a pathname also matches a pattern if removing any number of the final pathname components matches the pattern. For example, the pattern "foo*bar" matches "fooasdfbar" and "foo/bar/baz/asdf" but not "foobarx". --skip-to=<file>, --rotate-to=<file> Discard the files before the named <file> from the output (i.e. skip to), or move them to the end of the output (i.e. rotate to). These were invented primarily for use of the git difftool command, and may not be very useful otherwise. -R Swap two inputs; that is, show differences from index or on-disk file to tree contents. --relative[=<path>], --no-relative When run from a subdirectory of the project, it can be told to exclude changes outside the directory and show pathnames relative to it with this option. When you are not in a subdirectory (e.g. in a bare repository), you can name which subdirectory to make the output relative to by giving a <path> as an argument. --no-relative can be used to countermand both diff.relative config option and previous --relative. -a, --text Treat all files as text. --ignore-cr-at-eol Ignore carriage-return at the end of line when doing a comparison. --ignore-space-at-eol Ignore changes in whitespace at EOL. -b, --ignore-space-change Ignore changes in amount of whitespace. This ignores whitespace at line end, and considers all other sequences of one or more whitespace characters to be equivalent. -w, --ignore-all-space Ignore whitespace when comparing lines. This ignores differences even if one line has whitespace where the other line has none. --ignore-blank-lines Ignore changes whose lines are all blank. -I<regex>, --ignore-matching-lines=<regex> Ignore changes whose all lines match <regex>. This option may be specified more than once. --inter-hunk-context=<lines> Show the context between diff hunks, up to the specified number of lines, thereby fusing hunks that are close to each other. Defaults to diff.interHunkContext or 0 if the config option is unset. -W, --function-context Show whole function as context lines for each change. The function names are determined in the same way as git diff works out patch hunk headers (see Defining a custom hunk-header in gitattributes(5)). --exit-code Make the program exit with codes similar to diff(1). That is, it exits with 1 if there were differences and 0 means no differences. --quiet Disable all output of the program. Implies --exit-code. --ext-diff Allow an external diff helper to be executed. If you set an external diff driver with gitattributes(5), you need to use this option with git-log(1) and friends. --no-ext-diff Disallow external diff drivers. --textconv, --no-textconv Allow (or disallow) external text conversion filters to be run when comparing binary files. See gitattributes(5) for details. Because textconv filters are typically a one-way conversion, the resulting diff is suitable for human consumption, but cannot be applied. For this reason, textconv filters are enabled by default only for git-diff(1) and git-log(1), but not for git-format-patch(1) or diff plumbing commands. --ignore-submodules[=<when>] Ignore changes to submodules in the diff generation. <when> can be either "none", "untracked", "dirty" or "all", which is the default. 
Using "none" will consider the submodule modified when it either contains untracked or modified files or its HEAD differs from the commit recorded in the superproject and can be used to override any settings of the ignore option in git-config(1) or gitmodules(5). When "untracked" is used submodules are not considered dirty when they only contain untracked content (but they are still scanned for modified content). Using "dirty" ignores all changes to the work tree of submodules, only changes to the commits stored in the superproject are shown (this was the behavior until 1.7.0). Using "all" hides all changes to submodules. --src-prefix=<prefix> Show the given source prefix instead of "a/". --dst-prefix=<prefix> Show the given destination prefix instead of "b/". --no-prefix Do not show any source or destination prefix. --default-prefix Use the default source and destination prefixes ("a/" and "b/"). This is usually the default already, but may be used to override config such as diff.noprefix. --line-prefix=<prefix> Prepend an additional prefix to every line of output. --ita-invisible-in-index By default entries added by "git add -N" appear as an existing empty file in "git diff" and a new file in "git diff --cached". This option makes the entry appear as a new file in "git diff" and non-existent in "git diff --cached". This option could be reverted with --ita-visible-in-index. Both options are experimental and could be removed in future. For more detailed explanation on these common options, see also gitdiffcore(7). -1 --base, -2 --ours, -3 --theirs, -0 Diff against the "base" version, "our branch" or "their branch" respectively. With these options, diffs for merged entries are not shown. The default is to diff against our branch (-2) and the cleanly resolved paths. The option -0 can be given to omit diff output for unmerged entries and just show "Unmerged". -c, --cc This compares stage 2 (our branch), stage 3 (their branch) and the working tree file and outputs a combined diff, similar to the way diff-tree shows a merge commit with these flags. -q Remain silent even on nonexistent files
# git diff-files > Compare files using their sha1 hashes and modes. More information: > https://git-scm.com/docs/git-diff-files. * Compare all changed files: `git diff-files` * Compare only specified files: `git diff-files {{path/to/file}}` * Show only the names of changed files: `git diff-files --name-only` * Output a summary of extended header information: `git diff-files --summary`
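As a quick sketch of the pickaxe and filter options described above (repository contents hypothetical, reusing the frotz example from the text):

$ git log --oneline -S'frotz(nitfol'           # commits that changed the number of occurrences of the literal string
$ git log --oneline -G'frotz\(nitfol'          # commits whose patch text contains a matching added/removed line
$ git diff-files --name-only --diff-filter=dm  # lowercase letters exclude: hide deleted and modified paths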
expr
--help display this help and exit --version output version information and exit Print the value of EXPRESSION to standard output. A blank line below separates increasing precedence groups. EXPRESSION may be: ARG1 | ARG2 ARG1 if it is neither null nor 0, otherwise ARG2 ARG1 & ARG2 ARG1 if neither argument is null or 0, otherwise 0 ARG1 < ARG2 ARG1 is less than ARG2 ARG1 <= ARG2 ARG1 is less than or equal to ARG2 ARG1 = ARG2 ARG1 is equal to ARG2 ARG1 != ARG2 ARG1 is unequal to ARG2 ARG1 >= ARG2 ARG1 is greater than or equal to ARG2 ARG1 > ARG2 ARG1 is greater than ARG2 ARG1 + ARG2 arithmetic sum of ARG1 and ARG2 ARG1 - ARG2 arithmetic difference of ARG1 and ARG2 ARG1 * ARG2 arithmetic product of ARG1 and ARG2 ARG1 / ARG2 arithmetic quotient of ARG1 divided by ARG2 ARG1 % ARG2 arithmetic remainder of ARG1 divided by ARG2 STRING : REGEXP anchored pattern match of REGEXP in STRING match STRING REGEXP same as STRING : REGEXP substr STRING POS LENGTH substring of STRING, POS counted from 1 index STRING CHARS index in STRING where any CHARS is found, or 0 length STRING length of STRING + TOKEN interpret TOKEN as a string, even if it is a keyword like 'match' or an operator like '/' ( EXPRESSION ) value of EXPRESSION Beware that many operators need to be escaped or quoted for shells. Comparisons are arithmetic if both ARGs are numbers, else lexicographical. Pattern matches return the string matched between \( and \) or null; if \( and \) are not used, they return the number of characters matched or 0. Exit status is 0 if EXPRESSION is neither null nor 0, 1 if EXPRESSION is null or 0, 2 if EXPRESSION is syntactically invalid, and 3 if an error occurred.
# expr
> Evaluate expressions and manipulate strings. More information:
> https://www.gnu.org/software/coreutils/expr.

* Get the length of a specific string:
`expr length "{{string}}"`

* Get the substring of a string with a specific length:
`expr substr "{{string}}" {{from}} {{length}}`

* Match a specific substring against an anchored pattern:
`expr match "{{string}}" '{{pattern}}'`

* Get the first char position from a specific set in a string:
`expr index "{{string}}" "{{chars}}"`

* Calculate a specific mathematical expression (escape `*` as `\*` for the shell):
`expr {{expression1}} {{+|-|\*|/|%}} {{expression2}}`

* Get the first expression if its value is non-zero and not null, otherwise get the second one:
`expr {{expression1}} \| {{expression2}}`

* Get the first expression if both expressions are non-zero and not null, otherwise get zero:
`expr {{expression1}} \& {{expression2}}`
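Because comparisons are lexicographic unless both arguments are numeric, and several operators double as shell metacharacters, quoting matters. A few sketches:

$ expr 3 \* 4               # 12; the * must be escaped from the shell
$ expr length "hello"       # 5
$ expr abcdef : 'ab\(.*\)f' # cde; the \(...\) group is returned
$ expr 10 \> 9              # 1 (true); an unescaped > would redirect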
mv
Rename SOURCE to DEST, or move SOURCE(s) to DIRECTORY. Mandatory arguments to long options are mandatory for short options too. --backup[=CONTROL] make a backup of each existing destination file -b like --backup but does not accept an argument --debug explain how a file is copied. Implies -v -f, --force do not prompt before overwriting -i, --interactive prompt before overwrite -n, --no-clobber do not overwrite an existing file If you specify more than one of -i, -f, -n, only the final one takes effect. --no-copy do not copy if renaming fails --strip-trailing-slashes remove any trailing slashes from each SOURCE argument -S, --suffix=SUFFIX override the usual backup suffix -t, --target-directory=DIRECTORY move all SOURCE arguments into DIRECTORY -T, --no-target-directory treat DEST as a normal file --update[=UPDATE] control which existing files are updated; UPDATE={all,none,older(default)}. See below -u equivalent to --update[=older] -v, --verbose explain what is being done -Z, --context set SELinux security context of destination file to default type --help display this help and exit --version output version information and exit UPDATE controls which existing files in the destination are replaced. 'all' is the default operation when an --update option is not specified, and results in all existing files in the destination being replaced. 'none' is similar to the --no-clobber option, in that no files in the destination are replaced, but also skipped files do not induce a failure. 'older' is the default operation when --update is specified, and results in files being replaced if they're older than the corresponding source file. The backup suffix is '~', unless set with --suffix or SIMPLE_BACKUP_SUFFIX. The version control method may be selected via the --backup option or through the VERSION_CONTROL environment variable. Here are the values: none, off never make backups (even if --backup is given) numbered, t make numbered backups existing, nil numbered if numbered backups exist, simple otherwise simple, never always make simple backups
# mv > Move or rename files and directories. More information: > https://www.gnu.org/software/coreutils/mv. * Rename a file or directory when the target is not an existing directory: `mv {{path/to/source}} {{path/to/target}}` * Move a file or directory into an existing directory: `mv {{path/to/source}} {{path/to/existing_directory}}` * Move multiple files into an existing directory, keeping the filenames unchanged: `mv {{path/to/source1 path/to/source2 ...}} {{path/to/existing_directory}}` * Do not prompt for confirmation before overwriting existing files: `mv -f {{path/to/source}} {{path/to/target}}` * Prompt for confirmation before overwriting existing files, regardless of file permissions: `mv -i {{path/to/source}} {{path/to/target}}` * Do not overwrite existing files at the target: `mv -n {{path/to/source}} {{path/to/target}}` * Move files in verbose mode, showing files after they are moved: `mv -v {{path/to/source}} {{path/to/target}}`
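A small sketch of the backup and target-directory options (paths hypothetical):

$ mv --backup=numbered notes.txt archive/  # an existing archive/notes.txt is kept as notes.txt.~1~
$ mv -t backup/ a.txt b.txt                # name the directory first; pairs well with xargs
$ mv -n draft.txt final.txt                # leave an existing final.txt untouched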
loginctl
loginctl may be used to introspect and control the state of the systemd(1) login manager systemd-logind.service(8). The following options are understood: --no-ask-password Do not query the user for authentication for privileged operations. -p, --property= When showing session/user/seat properties, limit display to certain properties as specified as argument. If not specified, all set properties are shown. The argument should be a property name, such as "Sessions". If specified more than once, all properties with the specified names are shown. --value When showing session/user/seat properties, only print the value, and skip the property name and "=". -a, --all When showing session/user/seat properties, show all properties regardless of whether they are set or not. -l, --full Do not ellipsize process tree entries. --kill-whom= When used with kill-session, choose which processes to kill. Must be one of leader, or all to select whether to kill only the leader process of the session or all processes of the session. If omitted, defaults to all. -s, --signal= When used with kill-session or kill-user, choose which signal to send to selected processes. Must be one of the well known signal specifiers, such as SIGTERM, SIGINT or SIGSTOP. If omitted, defaults to SIGTERM. The special value "help" will list the known values and the program will exit immediately, and the special value "list" will list known values along with the numerical signal numbers and the program will exit immediately. -n, --lines= When used with user-status and session-status, controls the number of journal lines to show, counting from the most recent ones. Takes a positive integer argument. Defaults to 10. -o, --output= When used with user-status and session-status, controls the formatting of the journal entries that are shown. For the available choices, see journalctl(1). Defaults to "short". -H, --host= Execute the operation remotely. Specify a hostname, or a username and hostname separated by "@", to connect to. The hostname may optionally be suffixed by a port ssh is listening on, separated by ":", and then a container name, separated by "/", which connects directly to a specific container on the specified host. This will use SSH to talk to the remote machine manager instance. Container names may be enumerated with machinectl -H HOST. Put IPv6 addresses in brackets. -M, --machine= Execute operation on a local container. Specify a container name to connect to, optionally prefixed by a user name to connect as and a separating "@" character. If the special string ".host" is used in place of the container name, a connection to the local system is made (which is useful to connect to a specific user's user bus: "--user --machine=lennart@.host"). If the "@" syntax is not used, the connection is made as root user. If the "@" syntax is used either the left hand side or the right hand side may be omitted (but not both) in which case the local user name and ".host" are implied. --no-pager Do not pipe output into a pager. --no-legend Do not print the legend, i.e. column headers and the footer with hints. -h, --help Print a short help text and exit. --version Print a short version string and exit.
# loginctl > Manage the systemd login manager. More information: > https://www.freedesktop.org/software/systemd/man/loginctl.html. * Print all current sessions: `loginctl list-sessions` * Print all properties of a specific session: `loginctl show-session {{session_id}} --all` * Print all properties of a specific user: `loginctl show-user {{username}}` * Print a specific property of a user: `loginctl show-user {{username}} --property={{property_name}}` * Execute a `loginctl` operation on a remote host: `loginctl list-users -H {{hostname}}`
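A sketch of a typical inspect-then-kill sequence (the session id is hypothetical):

$ loginctl list-sessions
$ loginctl session-status 3 -n 20 -o short-iso          # last 20 journal lines for session 3
$ loginctl kill-session 3 --kill-whom=leader -s SIGHUP  # signal only the session leader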
cut
The cut utility shall cut out bytes (-b option), characters (-c option), or character-delimited fields (-f option) from each line in one or more files, concatenate them, and write them to standard output. The cut utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The application shall ensure that the option-argument list (see options -b, -c, and -f below) is a <comma>-separated list or <blank>-separated list of positive numbers and ranges. Ranges can be in three forms. The first is two positive numbers separated by a <hyphen-minus> (low-high), which represents all fields from the first number to the second number. The second is a positive number preceded by a <hyphen-minus> (-high), which represents all fields from field number 1 to that number. The third is a positive number followed by a <hyphen-minus> (low-), which represents that number to the last field, inclusive. The elements in list can be repeated, can overlap, and can be specified in any order, but the bytes, characters, or fields selected shall be written in the order of the input data. If an element appears in the selection list more than once, it shall be written exactly once. The following options shall be supported: -b list Cut based on a list of bytes. Each selected byte shall be output unless the -n option is also specified. It shall not be an error to select bytes not present in the input line. -c list Cut based on a list of characters. Each selected character shall be output. It shall not be an error to select characters not present in the input line. -d delim Set the field delimiter to the character delim. The default is the <tab>. -f list Cut based on a list of fields, assumed to be separated in the file by a delimiter character (see -d). Each selected field shall be output. Output fields shall be separated by a single occurrence of the field delimiter character. Lines with no field delimiters shall be passed through intact, unless -s is specified. It shall not be an error to select fields not present in the input line. -n Do not split characters. When specified with the -b option, each element in list of the form low-high (<hyphen-minus>-separated numbers) shall be modified as follows: * If the byte selected by low is not the first byte of a character, low shall be decremented to select the first byte of the character originally selected by low. If the byte selected by high is not the last byte of a character, high shall be decremented to select the last byte of the character prior to the character originally selected by high, or zero if there is no prior character. If the resulting range element has high equal to zero or low greater than high, the list element shall be dropped from list for that input line without causing an error. Each element in list of the form low- shall be treated as above with high set to the number of bytes in the current line, not including the terminating <newline>. Each element in list of the form -high shall be treated as above with low set to 1. Each element in list of the form num (a single number) shall be treated as above with low set to num and high set to num. -s Suppress lines with no delimiter characters, when used with the -f option. Unless specified, lines with no delimiters shall be passed through untouched.
# cut
> Cut out fields from `stdin` or files. More information:
> https://manned.org/man/freebsd-13.0/cut.1.

* Print a specific character/field range of each line:
`{{command}} | cut -{{c|f}} {{1|1,10|1-10|1-|-10}}`

* Print a field range of each line with a specific delimiter (`-d` only applies to fields):
`{{command}} | cut -d "{{,}}" -f {{1}}`

* Print a character range of each line of a specific file:
`cut -c {{1}} {{path/to/file}}`
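A few sketches of field and character selection, assuming a POSIX cut:

$ printf 'a,b,c\n' | cut -d, -f2                 # b
$ printf 'a,b,c\n' | cut -d, -f2-                # b,c
$ printf 'hello\n' | cut -c2-4                   # ell
$ printf 'no delimiter here\n' | cut -s -d, -f1  # suppressed: the line has no comma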
kill
The default signal for kill is TERM. Use -l or -L to list available signals. Particularly useful signals include HUP, INT, KILL, STOP, CONT, and 0. Alternate signals may be specified in three ways: -9, -SIGKILL or -KILL. Negative PID values may be used to choose whole process groups; see the PGID column in ps command output. A PID of -1 is special; it indicates all processes except the kill process itself and init. <pid> [...] Send signal to every <pid> listed. -<signal> -s <signal> --signal <signal> Specify the signal to be sent. The signal can be specified by using name or number. The behavior of signals is explained in signal(7) manual page. -q, --queue value Use sigqueue(3) rather than kill(2) and the value argument is used to specify an integer to be sent with the signal. If the receiving process has installed a handler for this signal using the SA_SIGINFO flag to sigaction(2), then it can obtain this data via the si_value field of the siginfo_t structure. -l, --list [signal] List signal names. This option has optional argument, which will convert signal number to signal name, or other way round. -L, --table List signal names in a nice table.
# kill
> Sends a signal to a process, usually related to stopping the process. All
> signals except for SIGKILL and SIGSTOP can be intercepted by the process to
> perform a clean exit. More information: https://manned.org/kill.

* Terminate a program using the default SIGTERM (terminate) signal:
`kill {{process_id}}`

* List available signal names (to be used without the `SIG` prefix):
`kill -l`

* Terminate a background job:
`kill %{{job_id}}`

* Terminate a program using the SIGHUP (hang up) signal. Many daemons will reload instead of terminating:
`kill -{{1|HUP}} {{process_id}}`

* Terminate a program using the SIGINT (interrupt) signal. This is typically initiated by the user pressing `Ctrl + C`:
`kill -{{2|INT}} {{process_id}}`

* Signal the operating system to immediately terminate a program (which gets no chance to capture the signal):
`kill -{{9|KILL}} {{process_id}}`

* Signal the operating system to pause a program until a SIGCONT ("continue") signal is received (signal numbers vary by platform; SIGSTOP is 19 on Linux, 17 on macOS):
`kill -{{STOP}} {{process_id}}`

* Send a `SIGUSR1` signal to all processes with the given GID (group id):
`kill -{{SIGUSR1}} -{{group_id}}`
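A few sketches (PIDs hypothetical):

$ kill -l 15         # convert a signal number to its name: TERM
$ kill -s TERM 1234  # same as kill -15 1234
$ kill -- -1234      # negative PID: signal every process in process group 1234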
sleep
Pause for NUMBER seconds. SUFFIX may be 's' for seconds (the default), 'm' for minutes, 'h' for hours or 'd' for days. NUMBER need not be an integer. Given two or more arguments, pause for the amount of time specified by the sum of their values. --help display this help and exit --version output version information and exit
# sleep
> Delay for a specified amount of time. More information:
> https://pubs.opengroup.org/onlinepubs/9699919799/utilities/sleep.html.

* Delay in seconds:
`sleep {{seconds}}`

* Execute a specific command after a 20-second delay:
`sleep 20 && {{command}}`
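A sketch of the suffix and summing behavior, assuming GNU sleep:

$ sleep 0.5              # NUMBER need not be an integer
$ sleep 1m 30s           # arguments are summed: 90 seconds total
$ sleep 10 && echo done  # delay, then run a command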
printf
The printf utility shall write formatted operands to the standard output. The argument operands shall be formatted under control of the format operand. Options: none.
# printf
> Format and print text. More information:
> https://www.gnu.org/software/coreutils/printf.

* Print a text message:
`printf "{{%s\n}}" "{{Hello world}}"`

* Print an integer in bold blue:
`printf "{{\e[1;34m%.3d\e[0m\n}}" {{42}}`

* Print a floating-point number with the Unicode Euro sign:
`printf "{{\u20AC %.2f\n}}" {{123.4}}`

* Print a text message composed with environment variables:
`printf "{{var1: %s\tvar2: %s\n}}" "{{$VAR1}}" "{{$VAR2}}"`

* Store a formatted message in a variable (does not work on zsh):
`printf -v {{myvar}} {{"This is %s = %d\n" "a year" 2016}}`
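A few sketches of numeric conversions and format reuse:

$ printf '%s=%d\n' count 42    # count=42
$ printf '%05.2f\n' 3.14159    # 03.14 (width 5, zero padded, 2 decimals)
$ printf '%s\n' one two three  # the format is reapplied until operands run out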
c99
The c99 utility is an interface to the standard C compilation system; it shall accept source code conforming to the ISO C standard. The system conceptually consists of a compiler and link editor. The input files referenced by pathname operands and -l option-arguments shall be compiled and linked to produce an executable file. (It is unspecified whether the linking occurs entirely within the operation of c99; some implementations may produce objects that are not fully resolved until the file is executed.) If the -c option is specified, for all pathname operands of the form file.c, the files: $(basename pathname .c).o shall be created as the result of successful compilation. If the -c option is not specified, it is unspecified whether such .o files are created or deleted for the file.c operands. If there are no options that prevent link editing (such as -c or -E), and all input files compile and link without error, the resulting executable file shall be written according to the -o outfile option (if present) or to the file a.out. The executable file shall be created as specified in Section 1.1.1.4, File Read, Write, and Creation, except that the file permission bits shall be set to: S_IRWXO | S_IRWXG | S_IRWXU and the bits specified by the umask of the process shall be cleared. The c99 utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines, except that: * Options can be interspersed with operands. * The order of specifying the -L and -l options, and the order of specifying -l options with respect to pathname operands is significant. * Conforming applications shall specify each option separately; that is, grouping option letters (for example, -cO) need not be recognized by all implementations. The following options shall be supported: -c Suppress the link-edit phase of the compilation, and do not remove any object files that are produced. -D name[=value] Define name as if by a C-language #define directive. If no =value is given, a value of 1 shall be used. The -D option has lower precedence than the -U option. That is, if name is used in both a -U and a -D option, name shall be undefined regardless of the order of the options. Additional implementation-defined names may be provided by the compiler. Implementations shall support at least 2048 bytes of -D definitions and 256 names. -E Copy C-language source files to standard output, executing all preprocessor directives; no compilation shall be performed. If any operand is not a text file, the effects are unspecified. -g Produce symbolic information in the object or executable files; the nature of this information is unspecified, and may be modified by implementation- defined interactions with other options. -I directory Change the algorithm for searching for headers whose names are not absolute pathnames to look in the directory named by the directory pathname before looking in the usual places. Thus, headers whose names are enclosed in double-quotes ("") shall be searched for first in the directory of the file with the #include line, then in directories named in -I options, and last in the usual places. For headers whose names are enclosed in angle brackets ("<>"), the header shall be searched for only in directories named in -I options and then in the usual places. Directories named in -I options shall be searched in the order specified. If the -I option is used to specify a directory that is one of the usual places searched by default, the results are unspecified. 
Implementations shall support at least ten instances of this option in a single c99 command invocation. -L directory Change the algorithm of searching for the libraries named in the -l objects to look in the directory named by the directory pathname before looking in the usual places. Directories named in -L options shall be searched in the order specified. If the -L option is used to specify a directory that is one of the usual places searched by default, the results are unspecified. Implementations shall support at least ten instances of this option in a single c99 command invocation. If a directory specified by a -L option contains files with names starting with any of the strings "libc.", "libl.", "libpthread.", "libm.", "librt.", "libtrace.", "libxnet.", or "liby.", the results are unspecified. -l library Search the library named liblibrary.a. A library shall be searched when its name is encountered, so the placement of a -l option is significant. Several standard libraries can be specified in this manner, as described in the EXTENDED DESCRIPTION section. Implementations may recognize implementation-defined suffixes other than .a as denoting libraries. -O optlevel Specify the level of code optimization. If the optlevel option-argument is the digit '0', all special code optimizations shall be disabled. If it is the digit '1', the nature of the optimization is unspecified. If the -O option is omitted, the nature of the system's default optimization is unspecified. It is unspecified whether code generated in the presence of the -O 0 option is the same as that generated when -O is omitted. Other optlevel values may be supported. -o outfile Use the pathname outfile, instead of the default a.out, for the executable file produced. If the -o option is present with -c or -E, the result is unspecified. -s Produce object or executable files, or both, from which symbolic and other information not required for proper execution using the exec family defined in the System Interfaces volume of POSIX.1‐2017 has been removed (stripped). If both -g and -s options are present, the action taken is unspecified. -U name Remove any initial definition of name. Multiple instances of the -D, -I, -L, -l, and -U options can be specified.
# c99 > Compiles C programs according to the ISO C standard. More information: > https://manned.org/c99. * Compile source file(s) and create an executable: `c99 {{file.c}}` * Compile source file(s) and create an executable with a custom name: `c99 -o {{executable_name}} {{file.c}}` * Compile source file(s) and create object file(s): `c99 -c {{file.c}}` * Compile source file(s), link with object file(s), and create an executable: `c99 {{file.c}} {{file.o}}`
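A minimal compile-and-link sketch (file and directory names hypothetical):

$ c99 -c util.c                        # produce util.o, skip link editing
$ c99 -I include -c main.c             # search ./include for headers first
$ c99 -o app main.o util.o -L lib -lm  # link; the order of -L/-l and operands is significant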
runuser
runuser can be used to run commands with a substitute user and group ID. If the option -u is not given, runuser falls back to su-compatible semantics and a shell is executed. The difference between the commands runuser and su is that runuser does not ask for a password (because it may be executed by the root user only) and it uses a different PAM configuration. The command runuser does not have to be installed with set-user-ID permissions. If the PAM session is not required, then the recommended solution is to use the setpriv(1) command. When called without arguments, runuser defaults to running an interactive shell as root. For backward compatibility, runuser defaults to not changing the current directory and to setting only the environment variables HOME and SHELL (plus USER and LOGNAME if the target user is not root). This version of runuser uses PAM for session management. Note that runuser in all cases uses PAM (pam_getenvlist()) to do the final environment modification. Command-line options such as --login and --preserve-environment affect the environment before it is modified by PAM. Since version 2.38 runuser resets process resource limits RLIMIT_NICE, RLIMIT_RTPRIO, RLIMIT_FSIZE, RLIMIT_AS and RLIMIT_NOFILE. -c, --command=command Pass command to the shell with the -c option. -f, --fast Pass -f to the shell, which may or may not be useful, depending on the shell. -g, --group=group The primary group to be used. This option is allowed for the root user only. -G, --supp-group=group Specify a supplementary group. This option is available to the root user only. The first specified supplementary group is also used as a primary group if the option --group is not specified. -, -l, --login Start the shell as a login shell with an environment similar to a real login: • clears all the environment variables except for TERM and variables specified by --whitelist-environment • initializes the environment variables HOME, SHELL, USER, LOGNAME, and PATH • changes to the target user’s home directory • sets argv[0] of the shell to '-' in order to make the shell a login shell -P, --pty Create a pseudo-terminal for the session. The independent terminal provides better security as the user does not share a terminal with the original session. This can be used to avoid TIOCSTI ioctl terminal injection and other security attacks against terminal file descriptors. The entire session can also be moved to the background (e.g., runuser --pty -u username -- command &). If the pseudo-terminal is enabled, then runuser works as a proxy between the sessions (sync stdin and stdout). This feature is mostly designed for interactive sessions. If the standard input is not a terminal, but for example a pipe (e.g., echo "date" | runuser --pty -u user), then the ECHO flag for the pseudo-terminal is disabled to avoid messy output. -m, -p, --preserve-environment Preserve the entire environment, i.e., do not set HOME, SHELL, USER or LOGNAME. The option is ignored if the option --login is specified. -s, --shell=shell Run the specified shell instead of the default. The shell to run is selected according to the following rules, in order: • the shell specified with --shell • the shell specified in the environment variable SHELL if the --preserve-environment option is used • the shell listed in the passwd entry of the target user • /bin/sh If the target user has a restricted shell (i.e., not listed in /etc/shells), then the --shell option and the SHELL environment variables are ignored unless the calling user is root.
--session-command=command Same as -c, but do not create a new session. (Discouraged.) -w, --whitelist-environment=list Don’t reset the environment variables specified in the comma-separated list when clearing the environment for --login. The whitelist is ignored for the environment variables HOME, SHELL, USER, LOGNAME, and PATH. -h, --help Display help text and exit. -V, --version Print version and exit.
# runuser
> Run commands as a specific user and group without asking for a password
> (requires root privileges). More information: https://manned.org/runuser.

* Run a command as a different user:
`runuser {{user}} -c '{{command}}'`

* Run a command as a different user and group:
`runuser {{user}} -g {{group}} -c '{{command}}'`

* Start a login shell as a specific user:
`runuser {{user}} -l`

* Specify a shell to run instead of the default (also works for a login shell):
`runuser {{user}} -s {{/bin/sh}}`

* Preserve the entire environment of root (only if `--login` is not specified):
`runuser {{user}} --preserve-environment -c '{{command}}'`
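A sketch of common invocations (user names hypothetical):

$ runuser -u www-data -- id          # run one command, with no intervening shell
$ runuser -l postgres -c 'psql -l'   # login-shell environment, then run the command
$ runuser --pty -u builder -- make & # isolated pseudo-terminal, moved to the background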
man
man is the system's manual pager. Each page argument given to man is normally the name of a program, utility or function. The manual page associated with each of these arguments is then found and displayed. A section, if provided, will direct man to look only in that section of the manual. The default action is to search in all of the available sections following a pre-defined order (see DEFAULTS), and to show only the first page found, even if page exists in several sections. The table below shows the section numbers of the manual followed by the types of pages they contain. 1 Executable programs or shell commands 2 System calls (functions provided by the kernel) 3 Library calls (functions within program libraries) 4 Special files (usually found in /dev) 5 File formats and conventions, e.g. /etc/passwd 6 Games 7 Miscellaneous (including macro packages and conventions), e.g. man(7), groff(7), man-pages(7) 8 System administration commands (usually only for root) 9 Kernel routines [Non standard] A manual page consists of several sections. Conventional section names include NAME, SYNOPSIS, CONFIGURATION, DESCRIPTION, OPTIONS, EXIT STATUS, RETURN VALUE, ERRORS, ENVIRONMENT, FILES, VERSIONS, CONFORMING TO, NOTES, BUGS, EXAMPLE, AUTHORS, and SEE ALSO. The following conventions apply to the SYNOPSIS section and can be used as a guide in other sections. bold text type exactly as shown. italic text replace with appropriate argument. [-abc] any or all arguments within [ ] are optional. -a|-b options delimited by | cannot be used together. argument ... argument is repeatable. [expression] ... entire expression within [ ] is repeatable. Exact rendering may vary depending on the output device. For instance, man will usually not be able to render italics when running in a terminal, and will typically use underlined or coloured text instead. The command or function illustration is a pattern that should match all possible invocations. In some cases it is advisable to illustrate several exclusive invocations as is shown in the SYNOPSIS section of this manual page. Non-argument options that are duplicated either on the command line, in $MANOPT, or both, are not harmful. For options that require an argument, each duplication will override the previous argument value. General options -C file, --config-file=file Use this user configuration file rather than the default of ~/.manpath. -d, --debug Print debugging information. -D, --default This option is normally issued as the very first option and resets man's behaviour to its default. Its use is to reset those options that may have been set in $MANOPT. Any options that follow -D will have their usual effect. --warnings[=warnings] Enable warnings from groff. This may be used to perform sanity checks on the source text of manual pages. warnings is a comma-separated list of warning names; if it is not supplied, the default is "mac". See the “Warnings” node in info groff for a list of available warning names. Main modes of operation -f, --whatis Equivalent to whatis. Display a short description from the manual page, if available. See whatis(1) for details. -k, --apropos Equivalent to apropos. Search the short manual page descriptions for keywords and display any matches. See apropos(1) for details. -K, --global-apropos Search for text in all manual pages. This is a brute- force search, and is likely to take some time; if you can, you should specify a section to reduce the number of pages that need to be searched. 
Search terms may be simple strings (the default), or regular expressions if the --regex option is used. Note that this searches the sources of the manual pages, not the rendered text, and so may include false positives due to things like comments in source files. Searching the rendered text would be much slower. -l, --local-file Activate "local" mode. Format and display local manual files instead of searching through the system's manual collection. Each manual page argument will be interpreted as an nroff source file in the correct format. No cat file is produced. If '-' is listed as one of the arguments, input will be taken from stdin. When this option is not used, and man fails to find the page required, before displaying the error message, it attempts to act as if this option was supplied, using the name as a filename and looking for an exact match. -w, --where, --path, --location Don't actually display the manual page, but do print the location of the source nroff file that would be formatted. If the -a option is also used, then print the locations of all source files that match the search criteria. -W, --where-cat, --location-cat Don't actually display the manual page, but do print the location of the preformatted cat file that would be displayed. If the -a option is also used, then print the locations of all preformatted cat files that match the search criteria. If -w and -W are both used, then print both source file and cat file separated by a space. If all of -w, -W, and -a are used, then do this for each possible match. -c, --catman This option is not for general use and should only be used by the catman program. -R encoding, --recode=encoding Instead of formatting the manual page in the usual way, output its source converted to the specified encoding. If you already know the encoding of the source file, you can also use manconv(1) directly. However, this option allows you to convert several manual pages to a single encoding without having to explicitly state the encoding of each, provided that they were already installed in a structure similar to a manual page hierarchy. Consider using man-recode(1) instead for converting multiple manual pages, since it has an interface designed for bulk conversion and so can be much faster. Finding manual pages -L locale, --locale=locale man will normally determine your current locale by a call to the C function setlocale(3) which interrogates various environment variables, possibly including $LC_MESSAGES and $LANG. To temporarily override the determined value, use this option to supply a locale string directly to man. Note that it will not take effect until the search for pages actually begins. Output such as the help message will always be displayed in the initially determined locale. -m system[,...], --systems=system[,...] If this system has access to other operating systems' manual pages, they can be accessed using this option. To search for a manual page from NewOS's manual page collection, use the option -m NewOS. The system specified can be a combination of comma delimited operating system names. To include a search of the native operating system's manual pages, include the system name man in the argument string. This option will override the $SYSTEM environment variable. -M path, --manpath=path Specify an alternate manpath to use. By default, man uses manpath derived code to determine the path to search. This option overrides the $MANPATH environment variable and causes option -m to be ignored. 
A path specified as a manpath must be the root of a manual page hierarchy structured into sections as described in the man-db manual (under "The manual page system"). To view manual pages outside such hierarchies, see the -l option. -S list, -s list, --sections=list The given list is a colon- or comma-separated list of sections, used to determine which manual sections to search and in what order. This option overrides the $MANSECT environment variable. (The -s spelling is for compatibility with System V.) -e sub-extension, --extension=sub-extension Some systems incorporate large packages of manual pages, such as those that accompany the Tcl package, into the main manual page hierarchy. To get around the problem of having two manual pages with the same name such as exit(3), the Tcl pages were usually all assigned to section l. As this is unfortunate, it is now possible to put the pages in the correct section, and to assign a specific "extension" to them, in this case, exit(3tcl). Under normal operation, man will display exit(3) in preference to exit(3tcl). To negotiate this situation and to avoid having to know which section the page you require resides in, it is now possible to give man a sub-extension string indicating which package the page must belong to. Using the above example, supplying the option -e tcl to man will restrict the search to pages having an extension of *tcl. -i, --ignore-case Ignore case when searching for manual pages. This is the default. -I, --match-case Search for manual pages case-sensitively. --regex Show all pages with any part of either their names or their descriptions matching each page argument as a regular expression, as with apropos(1). Since there is usually no reasonable way to pick a "best" page when searching for a regular expression, this option implies -a. --wildcard Show all pages with any part of either their names or their descriptions matching each page argument using shell-style wildcards, as with apropos(1) --wildcard. The page argument must match the entire name or description, or match on word boundaries in the description. Since there is usually no reasonable way to pick a "best" page when searching for a wildcard, this option implies -a. --names-only If the --regex or --wildcard option is used, match only page names, not page descriptions, as with whatis(1). Otherwise, no effect. -a, --all By default, man will exit after displaying the most suitable manual page it finds. Using this option forces man to display all the manual pages with names that match the search criteria. -u, --update This option causes man to update its database caches of installed manual pages. This is only needed in rare situations, and it is normally better to run mandb(8) instead. --no-subpages By default, man will try to interpret pairs of manual page names given on the command line as equivalent to a single manual page name containing a hyphen or an underscore. This supports the common pattern of programs that implement a number of subcommands, allowing them to provide manual pages for each that can be accessed using similar syntax as would be used to invoke the subcommands themselves. For example: $ man -aw git diff /usr/share/man/man1/git-diff.1.gz To disable this behaviour, use the --no-subpages option. $ man -aw --no-subpages git diff /usr/share/man/man1/git.1.gz /usr/share/man/man3/Git.3pm.gz /usr/share/man/man1/diff.1.gz Controlling formatted output -P pager, --pager=pager Specify which output pager to use. 
By default, man uses less, falling back to cat if less is not found or is not executable. This option overrides the $MANPAGER environment variable, which in turn overrides the $PAGER environment variable. It is not used in conjunction with -f or -k. The value may be a simple command name or a command with arguments, and may use shell quoting (backslashes, single quotes, or double quotes). It may not use pipes to connect multiple commands; if you need that, use a wrapper script, which may take the file to display either as an argument or on standard input. -r prompt, --prompt=prompt If a recent version of less is used as the pager, man will attempt to set its prompt and some sensible options. The default prompt looks like Manual page name(sec) line x where name denotes the manual page name, sec denotes the section it was found under and x the current line number. This is achieved by using the $LESS environment variable. Supplying -r with a string will override this default. The string may contain the text $MAN_PN which will be expanded to the name of the current manual page and its section name surrounded by "(" and ")". The string used to produce the default could be expressed as \ Manual\ page\ \$MAN_PN\ ?ltline\ %lt?L/%L.: byte\ %bB?s/%s..?\ (END):?pB\ %pB\\%.. (press h for help or q to quit) It is broken into three lines here for the sake of readability only. For its meaning see the less(1) manual page. The prompt string is first evaluated by the shell. All double quotes, back-quotes and backslashes in the prompt must be escaped by a preceding backslash. The prompt string may end in an escaped $ which may be followed by further options for less. By default man sets the -ix8 options. The $MANLESS environment variable described below may be used to set a default prompt string if none is supplied on the command line. -7, --ascii When viewing a pure ascii(7) manual page on a 7 bit terminal or terminal emulator, some characters may not display correctly when using the latin1(7) device description with GNU nroff. This option allows pure ascii manual pages to be displayed in ascii with the latin1 device. It will not translate any latin1 text. The following table shows the translations performed: some parts of it may only be displayed properly when using GNU nroff's latin1(7) device. Description Octal latin1 ascii ──────────────────────────────────────── continuation 255 ‐ - hyphen bullet (middle 267 • o dot) acute accent 264 ´ ' multiplication 327 × x sign If the latin1 column displays correctly, your terminal may be set up for latin1 characters and this option is not necessary. If the latin1 and ascii columns are identical, you are reading this page using this option or man did not format this page using the latin1 device description. If the latin1 column is missing or corrupt, you may need to view manual pages with this option. This option is ignored when using options -t, -H, -T, or -Z and may be useless for nroff other than GNU's. -E encoding, --encoding=encoding Generate output for a character encoding other than the default. For backward compatibility, encoding may be an nroff device such as ascii, latin1, or utf8 as well as a true character encoding such as UTF-8. --no-hyphenation, --nh Normally, nroff will automatically hyphenate text at line breaks even in words that do not contain hyphens, if it is necessary to do so to lay out words on a line without excessive spacing. This option disables automatic hyphenation, so words will only be hyphenated if they already contain hyphens. 
If you are writing a manual page and simply want to prevent nroff from hyphenating a word at an inappropriate point, do not use this option, but consult the nroff documentation instead; for instance, you can put "\%" inside a word to indicate that it may be hyphenated at that point, or put "\%" at the start of a word to prevent it from being hyphenated. --no-justification, --nj Normally, nroff will automatically justify text to both margins. This option disables full justification, leaving justified only to the left margin, sometimes called "ragged-right" text. If you are writing a manual page and simply want to prevent nroff from justifying certain paragraphs, do not use this option, but consult the nroff documentation instead; for instance, you can use the ".na", ".nf", ".fi", and ".ad" requests to temporarily disable adjusting and filling. -p string, --preprocessor=string Specify the sequence of preprocessors to run before nroff or troff/groff. Not all installations will have a full set of preprocessors. Some of the preprocessors and the letters used to designate them are: eqn (e), grap (g), pic (p), tbl (t), vgrind (v), refer (r). This option overrides the $MANROFFSEQ environment variable. zsoelim is always run as the very first preprocessor. -t, --troff Use groff -mandoc to format the manual page to stdout. This option is not required in conjunction with -H, -T, or -Z. -T[device], --troff-device[=device] This option is used to change groff (or possibly troff's) output to be suitable for a device other than the default. It implies -t. Examples (provided with Groff-1.17) include dvi, latin1, ps, utf8, X75 and X100. -H[browser], --html[=browser] This option will cause groff to produce HTML output, and will display that output in a web browser. The choice of browser is determined by the optional browser argument if one is provided, by the $BROWSER environment variable, or by a compile-time default if that is unset (usually lynx). This option implies -t, and will only work with GNU troff. -X[dpi], --gxditview[=dpi] This option displays the output of groff in a graphical window using the gxditview program. The dpi (dots per inch) may be 75, 75-12, 100, or 100-12, defaulting to 75; the -12 variants use a 12-point base font. This option implies -T with the X75, X75-12, X100, or X100-12 device respectively. -Z, --ditroff groff will run troff and then use an appropriate post- processor to produce output suitable for the chosen device. If groff -mandoc is groff, this option is passed to groff and will suppress the use of a post-processor. It implies -t. Getting help -?, --help Print a help message and exit. --usage Print a short usage message and exit. -V, --version Display version information.
# man > Format and display manual pages. More information: > https://www.man7.org/linux/man-pages/man1/man.1.html. * Display the man page for a command: `man {{command}}` * Display the man page for a command from section 7: `man {{7}} {{command}}` * List all available sections for a command: `man -f {{command}}` * Display the path searched for manpages: `man --path` * Display the location of a manpage rather than the manpage itself: `man -w {{command}}` * Display the man page using a specific locale: `man {{command}} --locale={{locale}}` * Search for manpages containing a search string: `man -k "{{search_string}}"`
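A few sketches of section selection and searching:

$ man 5 passwd             # the file format, not the passwd(1) command
$ man -k "list directory"  # keyword search, like apropos
$ man -w printf            # print the source file location instead of the page
$ man -P cat ls            # dump the page through cat instead of a pager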
git-cherry
Determine whether there are commits in <head>..<upstream> that are equivalent to those in the range <limit>..<head>. The equivalence test is based on the diff, after removing whitespace and line numbers. git-cherry therefore detects when commits have been "copied" by means of git-cherry-pick(1), git-am(1) or git-rebase(1). Outputs the SHA1 of every commit in <limit>..<head>, prefixed with - for commits that have an equivalent in <upstream>, and + for commits that do not. -v Show the commit subjects next to the SHA1s. <upstream> Upstream branch to search for equivalent commits. Defaults to the upstream branch of HEAD. <head> Working branch; defaults to HEAD. <limit> Do not report commits up to (and including) limit.
# git cherry > Find commits that have yet to be applied upstream. More information: > https://git-scm.com/docs/git-cherry. * Show commits (and their messages) with equivalent commits upstream: `git cherry -v` * Specify a different upstream and topic branch: `git cherry {{origin}} {{topic}}` * Limit commits to those within a given limit: `git cherry {{origin}} {{topic}} {{base}}`
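A sketch, assuming a topic branch being compared against origin/main (branch names hypothetical):

$ git switch topic
$ git cherry -v origin/main  # '-' marks commits already upstream, '+' those still missing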
fold
The fold utility is a filter that shall fold lines from its input files, breaking the lines to have a maximum of width column positions (or bytes, if the -b option is specified). Lines shall be broken by the insertion of a <newline> such that each output line (referred to later in this section as a segment) is the maximum width possible that does not exceed the specified number of column positions (or bytes). A line shall not be broken in the middle of a character. The behavior is undefined if width is less than the number of columns any single character in the input would occupy. If the <carriage-return>, <backspace>, or <tab> characters are encountered in the input, and the -b option is not specified, they shall be treated specially: <backspace> The current count of line width shall be decremented by one, although the count never shall become negative. The fold utility shall not insert a <newline> immediately before or after any <backspace>, unless the following character has a width greater than 1 and would cause the line width to exceed width. <carriage-return> The current count of line width shall be set to zero. The fold utility shall not insert a <newline> immediately before or after any <carriage-return>. <tab> Each <tab> encountered shall advance the column position pointer to the next tab stop. Tab stops shall be at each column position n such that n modulo 8 equals 1. The fold utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -b Count width in bytes rather than column positions. -s If a segment of a line contains a <blank> within the first width column positions (or bytes), break the line after the last such <blank> meeting the width constraints. If there is no <blank> meeting the requirements, the -s option shall have no effect for that output segment of the input line. -w width Specify the maximum line length, in column positions (or bytes if -b is specified). The results are unspecified if width is not a positive decimal number. The default value shall be 80.
# fold
> Wrap each line in an input file to fit a specified width and print it to
> `stdout`. More information: https://manned.org/fold.1p.

* Wrap each line to the default width (80 characters):
`fold {{path/to/file}}`

* Wrap each line to a width of 30 characters:
`fold -w30 {{path/to/file}}`

* Wrap each line to a width of 5 and break lines at spaces (each space-separated word starts a new line; words longer than 5 characters are wrapped):
`fold -w5 -s {{path/to/file}}`
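A quick illustration of hard versus blank-aware breaking:

$ printf 'the quick brown fox\n' | fold -w 10     # hard break at column 10, even mid-word
$ printf 'the quick brown fox\n' | fold -w 10 -s  # break after the last blank instead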
dirname
Output each NAME with its last non-slash component and trailing slashes removed; if NAME contains no /'s, output '.' (meaning the current directory). -z, --zero end each output line with NUL, not newline --help display this help and exit --version output version information and exit
# dirname > Calculates the parent directory of a given file or directory path. More > information: https://www.gnu.org/software/coreutils/dirname. * Calculate the parent directory of a given path: `dirname {{path/to/file_or_directory}}` * Calculate the parent directory of multiple paths: `dirname {{path/to/file_a}} {{path/to/directory_b}}` * Delimit output with a NUL character instead of a newline (useful when combining with `xargs`): `dirname --zero {{path/to/directory_a}} {{path/to/file_b}}`
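A few sketches of the trailing-slash and no-slash rules:

$ dirname /usr/bin/sort  # /usr/bin
$ dirname /usr/bin/      # /usr (trailing slashes are removed first)
$ dirname stdio.h        # . (NAME contains no slash)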
tsort
Write totally ordered list consistent with the partial ordering in FILE. With no FILE, or when FILE is -, read standard input. --help display this help and exit --version output version information and exit
# tsort
> Perform a topological sort. A common use is to show the dependency order of
> nodes in a directed acyclic graph. More information:
> https://www.gnu.org/software/coreutils/tsort.

* Perform a topological sort consistent with the partial ordering given in a file (each line holds a blank-separated pair of strings):
`tsort {{path/to/file}}`

* Perform a topological sort on strings passed via `stdin`:
`echo -e "{{UI Backend\nBackend Database\nDocs UI}}" | tsort`
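A sketch: each input line "X Y" means X must come before Y, and tsort prints one total order consistent with all the pairs (here a, b, c):

$ printf 'a b\nb c\na c\n' | tsort
a
b
c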
base32
Base32 encode or decode FILE, or standard input, to standard output. With no FILE, or when FILE is -, read standard input. Mandatory arguments to long options are mandatory for short options too. -d, --decode decode data -i, --ignore-garbage when decoding, ignore non-alphabet characters -w, --wrap=COLS wrap encoded lines after COLS character (default 76). Use 0 to disable line wrapping --help display this help and exit --version output version information and exit The data are encoded as described for the base32 alphabet in RFC 4648. When decoding, the input may contain newlines in addition to the bytes of the formal base32 alphabet. Use --ignore-garbage to attempt to recover from any other non-alphabet bytes in the encoded stream.
# base32 > Encode or decode file or `stdin` to/from Base32, to `stdout`. More > information: https://www.gnu.org/software/coreutils/base32. * Encode a file: `base32 {{path/to/file}}` * Decode a file: `base32 --decode {{path/to/file}}` * Encode from `stdin`: `{{somecommand}} | base32` * Decode from `stdin`: `{{somecommand}} | base32 --decode`
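A round-trip sketch:

$ printf 'hi' | base32                    # NBUQ====
$ printf 'hi' | base32 | base32 --decode  # hi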
git-commit-tree
This is usually not what an end user wants to run directly. See git-commit(1) instead. Creates a new commit object based on the provided tree object and emits the new commit object id on stdout. The log message is read from the standard input, unless -m or -F options are given. The -m and -F options can be given any number of times, in any order. The commit log message will be composed in the order in which the options are given. A commit object may have any number of parents. With exactly one parent, it is an ordinary commit. Having more than one parent makes the commit a merge between several lines of history. Initial (root) commits have no parents. While a tree represents a particular directory state of a working directory, a commit represents that state in "time", and explains how to get there. Normally a commit would identify a new "HEAD" state, and while Git doesn’t care where you save the note about that state, in practice we tend to just write the result to the file that is pointed at by .git/HEAD, so that we can always see what the last committed state was. <tree> An existing tree object. -p <parent> Each -p indicates the id of a parent commit object. -m <message> A paragraph in the commit log message. This can be given more than once and each <message> becomes its own paragraph. -F <file> Read the commit log message from the given file. Use - to read from the standard input. This can be given more than once and the content of each file becomes its own paragraph. -S[<keyid>], --gpg-sign[=<keyid>], --no-gpg-sign GPG-sign commits. The keyid argument is optional and defaults to the committer identity; if specified, it must be stuck to the option without a space. --no-gpg-sign is useful to countermand a --gpg-sign option given earlier on the command line.
# git commit-tree > Low level utility to create commit objects. See also: `git commit`. More > information: https://git-scm.com/docs/git-commit-tree. * Create a commit object with the specified message: `git commit-tree {{tree}} -m "{{message}}"` * Create a commit object reading the message from a file (use `-` for `stdin`): `git commit-tree {{tree}} -F {{path/to/file}}` * Create a GPG-signed commit object: `git commit-tree {{tree}} -m "{{message}}" --gpg-sign` * Create a commit object with the specified parent commit object: `git commit-tree {{tree}} -m "{{message}}" -p {{parent_commit_sha}}`
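A sketch of plumbing a commit by hand from the index (the ref name is hypothetical, and update-ref rewrites the branch it names):

$ TREE=$(git write-tree)                                 # snapshot the index as a tree object
$ COMMIT=$(git commit-tree "$TREE" -p HEAD -m 'manual')  # wrap it in a commit with HEAD as parent
$ git update-ref refs/heads/main "$COMMIT"               # point a branch at the new commit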
reset
The tput utility uses the terminfo database to make the values of terminal-dependent capabilities and information available to the shell (see sh(1)), to initialize or reset the terminal, or return the long name of the requested terminal type. The result depends upon the capability's type: string tput writes the string to the standard output. No trailing newline is supplied. integer tput writes the decimal value to the standard output, with a trailing newline. boolean tput simply sets the exit code (0 for TRUE if the terminal has the capability, 1 for FALSE if it does not), and writes nothing to the standard output. Before using a value returned on the standard output, the application should test the exit code (e.g., $?, see sh(1)) to be sure it is 0. (See the EXIT CODES and DIAGNOSTICS sections.) For a complete list of capabilities and the capname associated with each, see terminfo(5). Options -S allows more than one capability per invocation of tput. The capabilities must be passed to tput from the standard input instead of from the command line (see example). Only one capname is allowed per line. The -S option changes the meaning of the 0 and 1 boolean and string exit codes (see the EXIT CODES section). Because some capabilities may use string parameters rather than numbers, tput uses a table and the presence of parameters in its input to decide whether to use tparm(3X), and how to interpret the parameters. -Ttype indicates the type of terminal. Normally this option is unnecessary, because the default is taken from the environment variable TERM. If -T is specified, then the shell variables LINES and COLUMNS will also be ignored. -V reports the version of ncurses which was used in this program, and exits. -x do not attempt to clear the terminal's scrollback buffer using the extended “E3” capability. Commands A few commands (init, reset and longname) are special; they are defined by the tput program. The others are the names of capabilities from the terminal database (see terminfo(5) for a list). Although init and reset resemble capability names, tput uses several capabilities to perform these special functions. capname indicates the capability from the terminal database. If the capability is a string that takes parameters, the arguments following the capability will be used as parameters for the string. Most parameters are numbers. Only a few terminal capabilities require string parameters; tput uses a table to decide which to pass as strings. Normally tput uses tparm(3X) to perform the substitution. If no parameters are given for the capability, tput writes the string without performing the substitution. init If the terminal database is present and an entry for the user's terminal exists (see -Ttype, above), the following will occur: (1) first, tput retrieves the current terminal mode settings for your terminal. It does this by successively testing • the standard error, • standard output, • standard input and • ultimately “/dev/tty” to obtain terminal settings. Having retrieved these settings, tput remembers which file descriptor to use when updating settings. (2) if the window size cannot be obtained from the operating system, but the terminal description (or environment, e.g., LINES and COLUMNS variables specify this), update the operating system's notion of the window size.
(3) the terminal modes will be updated: • any delays (e.g., newline) specified in the entry will be set in the tty driver, • tabs expansion will be turned on or off according to the specification in the entry, and • if tabs are not expanded, standard tabs will be set (every 8 spaces). (4) if present, the terminal's initialization strings will be output as detailed in the terminfo(5) section on Tabs and Initialization, (5) output is flushed. If an entry does not contain the information needed for any of these activities, that activity will silently be skipped. reset This is similar to init, with two differences: (1) before any other initialization, the terminal modes will be reset to a “sane” state: • set cooked and echo modes, • turn off cbreak and raw modes, • turn on newline translation and • reset any unset special characters to their default values (2) Instead of putting out initialization strings, the terminal's reset strings will be output if present (rs1, rs2, rs3, rf). If the reset strings are not present, but initialization strings are, the initialization strings will be output. Otherwise, reset acts identically to init. longname If the terminal database is present and an entry for the user's terminal exists (see -Ttype above), then the long name of the terminal will be put out. The long name is the last name in the first line of the terminal's description in the terminfo database [see term(5)]. Aliases @TPUT@ handles the clear, init and reset commands specially: it allows for the possibility that it is invoked by a link with those names. If @TPUT@ is invoked by a link named reset, this has the same effect as @TPUT@ reset. The @TSET@(1) utility also treats a link named reset specially. Before ncurses 6.1, the two utilities were different from each other: • @TSET@ utility reset the terminal modes and special characters (not done with @TPUT@). • On the other hand, @TSET@'s repertoire of terminal capabilities for resetting the terminal was more limited, i.e., only reset_1string, reset_2string and reset_file in contrast to the tab-stops and margins which are set by this utility. • The reset program is usually an alias for @TSET@, because of this difference with resetting terminal modes and special characters. With the changes made for ncurses 6.1, the reset feature of the two programs is (mostly) the same. A few differences remain: • The @TSET@ program waits one second when resetting, in case it happens to be a hardware terminal. • The two programs write the terminal initialization strings to different streams (i.e., the standard error for @TSET@ and the standard output for @TPUT@). Note: although these programs write to different streams, redirecting their output to a file will capture only part of their actions. The changes to the terminal modes are not affected by redirecting the output. If @TPUT@ is invoked by a link named init, this has the same effect as @TPUT@ init. Again, you are less likely to use that link because another program named init has a more well-established use. Terminal Size Besides the special commands (e.g., clear), @TPUT@ treats certain terminfo capabilities specially: lines and cols. 
@TPUT@ calls setupterm(3X) to obtain the terminal size: • first, it gets the size from the terminal database (which generally is not provided for terminal emulators which do not have a fixed window size) • then it asks the operating system for the terminal's size (which generally works, unless connecting via a serial line which does not support NAWS: negotiations about window size). • finally, it inspects the environment variables LINES and COLUMNS which may override the terminal size. If the -T option is given @TPUT@ ignores the environment variables by calling use_tioctl(TRUE), relying upon the operating system (or finally, the terminal database).
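As a minimal sketch of the -S batch mode described above (one capname per line on standard input; capability names and parameters are illustrative):

```sh
tput -S <<'EOF'
clear
cup 10 5
setaf 3
bold
EOF
printf 'hello\n'   # printed in bold yellow at row 10, column 5
tput sgr0          # turn all attributes off again
```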
# reset > Reinitializes the current terminal. Clears the entire terminal screen. More > information: https://manned.org/reset. * Reinitialize the current terminal: `reset` * Display the terminal type instead: `reset -q`
git-init
This command creates an empty Git repository - basically a .git directory with subdirectories for objects, refs/heads, refs/tags, and template files. An initial branch without any commits will be created (see the --initial-branch option below for its name). If the $GIT_DIR environment variable is set then it specifies a path to use instead of ./.git for the base of the repository. If the object storage directory is specified via the $GIT_OBJECT_DIRECTORY environment variable then the sha1 directories are created underneath - otherwise the default $GIT_DIR/objects directory is used. Running git init in an existing repository is safe. It will not overwrite things that are already there. The primary reason for rerunning git init is to pick up newly added templates (or to move the repository to another place if --separate-git-dir is given). -q, --quiet Only print error and warning messages; all other output will be suppressed. --bare Create a bare repository. If GIT_DIR environment is not set, it is set to the current working directory. --object-format=<format> Specify the given object format (hash algorithm) for the repository. The valid values are sha1 and (if enabled) sha256. sha1 is the default. THIS OPTION IS EXPERIMENTAL! SHA-256 support is experimental and still in an early stage. A SHA-256 repository will in general not be able to share work with "regular" SHA-1 repositories. It should be assumed that, e.g., Git internal file formats in relation to SHA-256 repositories may change in backwards-incompatible ways. Only use --object-format=sha256 for testing purposes. --template=<template-directory> Specify the directory from which templates will be used. (See the "TEMPLATE DIRECTORY" section below.) --separate-git-dir=<git-dir> Instead of initializing the repository as a directory to either $GIT_DIR or ./.git/, create a text file there containing the path to the actual repository. This file acts as filesystem-agnostic Git symbolic link to the repository. If this is reinitialization, the repository will be moved to the specified path. -b <branch-name>, --initial-branch=<branch-name> Use the specified name for the initial branch in the newly created repository. If not specified, fall back to the default name (currently master, but this is subject to change in the future; the name can be customized via the init.defaultBranch configuration variable). --shared[=(false|true|umask|group|all|world|everybody|<perm>)] Specify that the Git repository is to be shared amongst several users. This allows users belonging to the same group to push into that repository. When specified, the config variable "core.sharedRepository" is set so that files and directories under $GIT_DIR are created with the requested permissions. When not specified, Git will use permissions reported by umask(2). The option can have the following values, defaulting to group if no value is given: umask (or false) Use permissions reported by umask(2). The default, when --shared is not specified. group (or true) Make the repository group-writable, (and g+sx, since the git group may be not the primary group of all users). This is used to loosen the permissions of an otherwise safe umask(2) value. Note that the umask still applies to the other permission bits (e.g. if umask is 0022, using group will not remove read privileges from other (non-group) users). See 0xxx for how to exactly specify the repository permissions. all (or world or everybody) Same as group, but make the repository readable by all users. 
<perm> <perm> is a 3-digit octal number prefixed with ‘0’ and each file will have mode <perm>. <perm> will override users’ umask(2) value (and not only loosen permissions as group and all do). 0640 will create a repository which is group-readable, but not group-writable or accessible to others. 0660 will create a repo that is readable and writable to the current user and group, but inaccessible to others (directories and executable files get their x bit from the r bit for corresponding classes of users). By default, the configuration flag receive.denyNonFastForwards is enabled in shared repositories, so that you cannot force a non-fast-forwarding push into it. If you provide a directory, the command is run inside it. If this directory does not exist, it will be created.
# git init > Initializes a new local Git repository. More information: https://git- > scm.com/docs/git-init. * Initialize a new local repository: `git init` * Initialize a repository with the specified name for the initial branch: `git init --initial-branch={{branch_name}}` * Initialize a repository using SHA256 for object hashes (requires Git version 2.29+): `git init --object-format={{sha256}}` * Initialize a barebones repository, suitable for use as a remote over ssh: `git init --bare`
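A short sketch combining the options above (all paths are illustrative):

```sh
# A group-writable bare repository that several users can push to
git init --bare --shared=group /srv/git/project.git

# Keep the real repository outside the working tree;
# project/.git becomes a file pointing at ~/gitdirs/project.git
git init --separate-git-dir ~/gitdirs/project.git project
```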
csplit
Output pieces of FILE separated by PATTERN(s) to files 'xx00', 'xx01', ..., and output byte counts of each piece to standard output. Read standard input if FILE is - Mandatory arguments to long options are mandatory for short options too. -b, --suffix-format=FORMAT use sprintf FORMAT instead of %02d -f, --prefix=PREFIX use PREFIX instead of 'xx' -k, --keep-files do not remove output files on errors --suppress-matched suppress the lines matching PATTERN -n, --digits=DIGITS use specified number of digits instead of 2 -s, --quiet, --silent do not print counts of output file sizes -z, --elide-empty-files suppress empty output files --help display this help and exit --version output version information and exit Each PATTERN may be: INTEGER copy up to but not including specified line number /REGEXP/[OFFSET] copy up to but not including a matching line %REGEXP%[OFFSET] skip to, but not including a matching line {INTEGER} repeat the previous pattern specified number of times {*} repeat the previous pattern as many times as possible A line OFFSET is an integer optionally preceded by '+' or '-'
# csplit > Split a file into pieces. This generates files named "xx00", "xx01", and so > on. More information: https://www.gnu.org/software/coreutils/csplit. * Split a file at lines 5 and 23: `csplit {{path/to/file}} {{5}} {{23}}` * Split a file every 5 lines (this will fail if the total number of lines is not divisible by 5): `csplit {{path/to/file}} {{5}} {*}` * Split a file every 5 lines, ignoring exact-division error: `csplit -k {{path/to/file}} {{5}} {*}` * Split a file at line 5 and use a custom prefix for the output files: `csplit {{path/to/file}} {{5}} -f {{prefix}}` * Split a file at a line matching a regular expression: `csplit {{path/to/file}} /{{regular_expression}}/`
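A sketch of the regex-pattern form described above (the file name and pattern are illustrative):

```sh
# Split app.log at every line starting with "BEGIN";
# -k keeps the pieces already written if a pattern fails to match,
# -f/-n control the output names (part_000, part_001, ...)
csplit -k -f part_ -n 3 app.log '/^BEGIN/' '{*}'
```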
make
The make utility will determine automatically which pieces of a large program need to be recompiled, and issue the commands to recompile them. The manual describes the GNU implementation of make, which was written by Richard Stallman and Roland McGrath, and is currently maintained by Paul Smith. Our examples show C programs, since they are very common, but you can use make with any programming language whose compiler can be run with a shell command. In fact, make is not limited to programs. You can use it to describe any task where some files must be updated automatically from others whenever the others change. To prepare to use make, you must write a file called the makefile that describes the relationships among files in your program, and provides commands for updating each file. In a program, typically the executable file is updated from object files, which are in turn made by compiling source files. Once a suitable makefile exists, each time you change some source files, this simple shell command: make suffices to perform all necessary recompilations. The make program uses the makefile description and the last-modification times of the files to decide which of the files need to be updated. For each of those files, it issues the commands recorded in the makefile. make executes commands in the makefile to update one or more targets, where target is typically a program. If no -f option is present, make will look for the makefiles GNUmakefile, makefile, and Makefile, in that order. Normally you should call your makefile either makefile or Makefile. (We recommend Makefile because it appears prominently near the beginning of a directory listing, right near other important files such as README.) The first name checked, GNUmakefile, is not recommended for most makefiles. You should use this name if you have a makefile that is specific to GNU make, and will not be understood by other versions of make. If makefile is '-', the standard input is read. make updates a target if it depends on prerequisite files that have been modified since the target was last modified, or if the target does not exist. -b, -m These options are ignored for compatibility with other versions of make. -B, --always-make Unconditionally make all targets. -C dir, --directory=dir Change to directory dir before reading the makefiles or doing anything else. If multiple -C options are specified, each is interpreted relative to the previous one: -C / -C etc is equivalent to -C /etc. This is typically used with recursive invocations of make. -d Print debugging information in addition to normal processing. The debugging information says which files are being considered for remaking, which file-times are being compared and with what results, which files actually need to be remade, which implicit rules are considered and which are applied---everything interesting about how make decides what to do. --debug[=FLAGS] Print debugging information in addition to normal processing. If the FLAGS are omitted, then the behavior is the same as if -d was specified. FLAGS may be any or all of the following names, comma- or space-separated. 
Only the first character is significant: the rest may be omitted: all for all debugging output (same as using -d), basic for basic debugging, verbose for more verbose basic debugging, implicit for showing implicit rule search operations, jobs for details on invocation of commands, makefile for debugging while remaking makefiles, print shows all recipes that are run even if they are silent, and why shows the reason make decided to rebuild each target. Use none to disable all previous debugging flags. -e, --environment-overrides Give variables taken from the environment precedence over variables from makefiles. -E string, --eval string Interpret string using the eval function, before parsing any makefiles. -f file, --file=file, --makefile=FILE Use file as a makefile. -i, --ignore-errors Ignore all errors in commands executed to remake files. -I dir, --include-dir=dir Specifies a directory dir to search for included makefiles. If several -I options are used to specify several directories, the directories are searched in the order specified. Unlike the arguments to other flags of make, directories given with -I flags may come directly after the flag: -Idir is allowed, as well as -I dir. This syntax is allowed for compatibility with the C preprocessor's -I flag. -j [jobs], --jobs[=jobs] Specifies the number of jobs (commands) to run simultaneously. If there is more than one -j option, the last one is effective. If the -j option is given without an argument, make will not limit the number of jobs that can run simultaneously. --jobserver-style=style The style of jobserver to use. The style may be one of fifo, pipe, or sem (Windows only). -k, --keep-going Continue as much as possible after an error. While the target that failed, and those that depend on it, cannot be remade, the other dependencies of these targets can be processed all the same. -l [load], --load-average[=load] Specifies that no new jobs (commands) should be started if there are other jobs running and the load average is at least load (a floating-point number). With no argument, removes a previous load limit. -L, --check-symlink-times Use the latest mtime between symlinks and target. -n, --just-print, --dry-run, --recon Print the commands that would be executed, but do not execute them (except in certain circumstances). -o file, --old-file=file, --assume-old=file Do not remake the file file even if it is older than its dependencies, and do not remake anything on account of changes in file. Essentially the file is treated as very old and its rules are ignored. -O[type], --output-sync[=type] When running multiple jobs in parallel with -j, ensure the output of each job is collected together rather than interspersed with output from other jobs. If type is not specified or is target the output from the entire recipe for each target is grouped together. If type is line the output from each command line within a recipe is grouped together. If type is recurse output from an entire recursive make is grouped together. If type is none output synchronization is disabled. -p, --print-data-base Print the data base (rules and variable values) that results from reading the makefiles; then execute as usual or as otherwise specified. This also prints the version information given by the -v switch (see below). To print the data base without trying to remake any files, use make -p -f/dev/null. -q, --question ``Question mode''. 
Do not run any commands, or print anything; just return an exit status that is zero if the specified targets are already up to date, nonzero otherwise. -r, --no-builtin-rules Eliminate use of the built-in implicit rules. Also clear out the default list of suffixes for suffix rules. -R, --no-builtin-variables Don't define any built-in variables. -s, --silent, --quiet Silent operation; do not print the commands as they are executed. --no-silent Cancel the effect of the -s option. -S, --no-keep-going, --stop Cancel the effect of the -k option. -t, --touch Touch files (mark them up to date without really changing them) instead of running their commands. This is used to pretend that the commands were done, in order to fool future invocations of make. --trace Information about the disposition of each target is printed (why the target is being rebuilt and what commands are run to rebuild it). -v, --version Print the version of the make program plus a copyright, a list of authors and a notice that there is no warranty. -w, --print-directory Print a message containing the working directory before and after other processing. This may be useful for tracking down errors from complicated nests of recursive make commands. --no-print-directory Turn off -w, even if it was turned on implicitly. --shuffle[=MODE] Enable shuffling of goal and prerequisite ordering. MODE is one of none to disable shuffle mode, random to shuffle prerequisites in random order, reverse to consider prerequisites in reverse order, or an integer <seed> which enables random mode with a specific seed value. If MODE is omitted the default is random. -W file, --what-if=file, --new-file=file, --assume-new=file Pretend that the target file has just been modified. When used with the -n flag, this shows you what would happen if you were to modify that file. Without -n, it is almost the same as running a touch command on the given file before running make, except that the modification time is changed only in the imagination of make. --warn-undefined-variables Warn when an undefined variable is referenced.
# make > Task runner for targets described in Makefile. Mostly used to control the > compilation of an executable from source code. More information: > https://www.gnu.org/software/make/manual/make.html. * Call the first target specified in the Makefile (usually named "all"): `make` * Call a specific target: `make {{target}}` * Call a specific target, executing 4 jobs at a time in parallel: `make -j{{4}} {{target}}` * Use a specific Makefile: `make --file {{path/to/file}}` * Execute make from another directory: `make --directory {{path/to/directory}}` * Force making of a target, even if source files are unchanged: `make --always-make {{target}}` * Override a variable defined in the Makefile: `make {{target}} {{variable}}={{new_value}}` * Override variables defined in the Makefile by the environment: `make --environment-overrides {{target}}`
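A minimal sketch of a makefile exercising the target/prerequisite model and the options above (file names are illustrative; recipe lines must begin with a tab):

```sh
cat > Makefile <<'EOF'
hello: hello.o
	cc -o hello hello.o

hello.o: hello.c
	cc -c hello.c

clean:
	rm -f hello hello.o
EOF

make               # rebuilds only targets older than their prerequisites
make -n clean      # dry run: print the rm command without executing it
make -j4 -Otarget  # parallel build with per-target output grouping
```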
sed
The sed utility is a stream editor that shall read one or more text files, make editing changes according to a script of editing commands, and write the results to standard output. The script shall be obtained from either the script operand string or a combination of the option-arguments from the -e script and -f script_file options. The sed utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines, except that the order of presentation of the -e and -f options is significant. The following options shall be supported: -e script Add the editing commands specified by the script option-argument to the end of the script of editing commands. -f script_file Add the editing commands in the file script_file to the end of the script of editing commands. -n Suppress the default output (in which each line, after it is examined for editing, is written to standard output). Only lines explicitly selected for output are written. If any -e or -f options are specified, the script of editing commands shall initially be empty. The commands specified by each -e or -f option shall be added to the script in the order specified. When each addition is made, if the previous addition (if any) was from a -e option, a <newline> shall be inserted before the new addition. The resulting script shall have the same properties as the script operand, described in the OPERANDS section.
# sed > Edit text in a scriptable manner. See also: `awk`, `ed`. More information: > https://keith.github.io/xcode-man-pages/sed.1.html. * Replace all `apple` (basic regex) occurrences with `mango` (basic regex) in all input lines and print the result to `stdout`: `{{command}} | sed 's/apple/mango/g'` * Execute a specific script [f]ile and print the result to `stdout`: `{{command}} | sed -f {{path/to/script_file.sed}}` * Replace all `apple` (extended regex) occurrences with `APPLE` (extended regex) in all input lines and print the result to `stdout`: `{{command}} | sed -E 's/(apple)/\U\1/g'` * Print just the first line to `stdout`: `{{command}} | sed -n '1p'` * Replace all `apple` (basic regex) occurrences with `mango` (basic regex) in a `file` and save a backup of the original to `file.bak`: `sed -i .bak 's/apple/mango/g' {{path/to/file}}`
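A small sketch of the -n/-e behavior described above (the input text is illustrative):

```sh
# Without -n every line is printed after editing; with -n only
# explicitly selected lines appear, so the p command produces the output.
printf 'apple pie\napple tart\n' | sed -n -e 's/apple/mango/' -e 'p'
# mango pie
# mango tart
```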
dash
dash is the standard command interpreter for the system. The current version of dash is in the process of being changed to conform with the POSIX 1003.2 and 1003.2a specifications for the shell. This version has many features which make it appear similar in some respects to the Korn shell, but it is not a Korn shell clone (see ksh(1)). Only features designated by POSIX, plus a few Berkeley extensions, are being incorporated into this shell. This man page is not intended to be a tutorial or a complete specification of the shell. Overview The shell is a command that reads lines from either a file or the terminal, interprets them, and generally executes other commands. It is the program that is running when a user logs into the system (although a user can select a different shell with the chsh(1) command). The shell implements a language that has flow control constructs, a macro facility that provides a variety of features in addition to data storage, along with built in history and line editing capabilities. It incorporates many features to aid interactive use and has the advantage that the interpretative language is common to both interactive and non-interactive use (shell scripts). That is, commands can be typed directly to the running shell or can be put into a file and the file can be executed directly by the shell. Invocation If no args are present and if the standard input of the shell is connected to a terminal (or if the -i flag is set), and the -c option is not present, the shell is considered an interactive shell. An interactive shell generally prompts before each command and handles programming and command errors differently (as described below). When first starting, the shell inspects argument 0, and if it begins with a dash ‘-’, the shell is also considered a login shell. This is normally done automatically by the system when the user first logs in. A login shell first reads commands from the files /etc/profile and .profile if they exist. If the environment variable ENV is set on entry to an interactive shell, or is set in the .profile of a login shell, the shell next reads commands from the file named in ENV. Therefore, a user should place commands that are to be executed only at login time in the .profile file, and commands that are executed for every interactive shell inside the ENV file. To set the ENV variable to some file, place the following line in your .profile of your home directory ENV=$HOME/.shinit; export ENV substituting for “.shinit” any filename you wish. If command line arguments besides the options have been specified, then the shell treats the first argument as the name of a file from which to read commands (a shell script), and the remaining arguments are set as the positional parameters of the shell ($1, $2, etc). Otherwise, the shell reads commands from its standard input. Argument List Processing All of the single letter options that have a corresponding name can be used as an argument to the -o option. The set -o name is provided next to the single letter option in the description below. Specifying a dash “-” turns the option on, while using a plus “+” disables the option. The following options can be set from the command line or with the set builtin (described later). -a allexport Export all variables assigned to. -c Read commands from the command_string operand instead of from the standard input. Special parameter 0 will be set from the command_name operand and the positional parameters ($1, $2, etc.) set from the remaining argument operands. 
-C noclobber Don't overwrite existing files with “>”. -e errexit If not interactive, exit immediately if any untested command fails. The exit status of a command is considered to be explicitly tested if the command is used to control an if, elif, while, or until; or if the command is the left hand operand of an “&&” or “||” operator. -f noglob Disable pathname expansion. -n noexec If not interactive, read commands but do not execute them. This is useful for checking the syntax of shell scripts. -u nounset Write a message to standard error when attempting to expand a variable that is not set, and if the shell is not interactive, exit immediately. -v verbose The shell writes its input to standard error as it is read. Useful for debugging. -x xtrace Write each command to standard error (preceded by a ‘+ ’) before it is executed. Useful for debugging. -I ignoreeof Ignore EOF's from input when interactive. -i interactive Force the shell to behave interactively. -l Make dash act as if it had been invoked as a login shell. -m monitor Turn on job control (set automatically when interactive). -s stdin Read commands from standard input (set automatically if no file arguments are present). This option has no effect when set after the shell has already started running (i.e. with set). -V vi Enable the built-in vi(1) command line editor (disables -E if it has been set). -E emacs Enable the built-in emacs(1) command line editor (disables -V if it has been set). -b notify Enable asynchronous notification of background job completion. (UNIMPLEMENTED for 4.4alpha) Lexical Structure The shell reads input in terms of lines from a file and breaks it up into words at whitespace (blanks and tabs), and at certain sequences of characters that are special to the shell called “operators”. There are two types of operators: control operators and redirection operators (their meaning is discussed later). Following is a list of operators: Control operators: & && ( ) ; ;; | || <newline> Redirection operators: < > >| << >> <& >& <<- <> Quoting Quoting is used to remove the special meaning of certain characters or words to the shell, such as operators, whitespace, or keywords. There are three types of quoting: matched single quotes, matched double quotes, and backslash. Backslash A backslash preserves the literal meaning of the following character, with the exception of ⟨newline⟩. A backslash preceding a ⟨newline⟩ is treated as a line continuation. Single Quotes Enclosing characters in single quotes preserves the literal meaning of all the characters (except single quotes, making it impossible to put single-quotes in a single-quoted string). Double Quotes Enclosing characters within double quotes preserves the literal meaning of all characters except dollarsign ($), backquote (`), and backslash (\). The backslash inside double quotes is historically weird, and serves to quote only the following characters: $ ` " \ <newline>. Otherwise it remains literal. Reserved Words Reserved words are words that have special meaning to the shell and are recognized at the beginning of a line and after a control operator. The following are reserved words: ! elif fi while case else for then { } do done until if esac Their meaning is discussed later. Aliases An alias is a name and corresponding value set using the alias(1) builtin command. Whenever a reserved word may occur (see above), and after checking for reserved words, the shell checks the word to see if it matches an alias. 
If it does, it replaces it in the input stream with its value. For example, if there is an alias called “lf” with the value “ls -F”, then the input: lf foobar ⟨return⟩ would become ls -F foobar ⟨return⟩ Aliases provide a convenient way for naive users to create shorthands for commands without having to learn how to create functions with arguments. They can also be used to create lexically obscure code. This use is discouraged. Commands The shell interprets the words it reads according to a language, the specification of which is outside the scope of this man page (refer to the BNF in the POSIX 1003.2 document). Essentially though, a line is read and if the first word of the line (or after a control operator) is not a reserved word, then the shell has recognized a simple command. Otherwise, a complex command or some other special construct may have been recognized. Simple Commands If a simple command has been recognized, the shell performs the following actions: 1. Leading words of the form “name=value” are stripped off and assigned to the environment of the simple command. Redirection operators and their arguments (as described below) are stripped off and saved for processing. 2. The remaining words are expanded as described in the section called “Expansions”, and the first remaining word is considered the command name and the command is located. The remaining words are considered the arguments of the command. If no command name resulted, then the “name=value” variable assignments recognized in item 1 affect the current shell. 3. Redirections are performed as described in the next section. Redirections Redirections are used to change where a command reads its input or sends its output. In general, redirections open, close, or duplicate an existing reference to a file. The overall format used for redirection is: [n] redir-op file where redir-op is one of the redirection operators mentioned previously. Following is a list of the possible redirections. The [n] is an optional number between 0 and 9, as in ‘3’ (not ‘[3]’), that refers to a file descriptor. [n]> file Redirect standard output (or n) to file. [n]>| file Same, but override the -C option. [n]>> file Append standard output (or n) to file. [n]< file Redirect standard input (or n) from file. [n1]<&n2 Copy file descriptor n2 as standard input (or fd n1). [n]<&- Close standard input (or n). [n1]>&n2 Copy file descriptor n2 as standard output (or fd n1). [n]>&- Close standard output (or n). [n]<> file Open file for reading and writing on standard input (or n). The following redirection is often called a “here-document”. [n]<< delimiter here-doc-text ... delimiter All the text on successive lines up to the delimiter is saved away and made available to the command on standard input, or file descriptor n if it is specified. If the delimiter as specified on the initial line is quoted, then the here-doc-text is treated literally, otherwise the text is subjected to parameter expansion, command substitution, and arithmetic expansion (as described in the section on “Expansions”). If the operator is “<<-” instead of “<<”, then leading tabs in the here-doc-text are stripped. Search and Execution There are three types of commands: shell functions, builtin commands, and normal programs – and the command is searched for (by name) in that order. They each are executed in a different way. When a shell function is executed, all of the shell positional parameters (except $0, which remains unchanged) are set to the arguments of the shell function. 
The variables which are explicitly placed in the environment of the command (by placing assignments to them before the function name) are made local to the function and are set to the values given. Then the command given in the function definition is executed. The positional parameters are restored to their original values when the command completes. This all occurs within the current shell. Shell builtins are executed internally to the shell, without spawning a new process. Otherwise, if the command name doesn't match a function or builtin, the command is searched for as a normal program in the file system (as described in the next section). When a normal program is executed, the shell runs the program, passing the arguments and the environment to the program. If the program is not a normal executable file (i.e., if it does not begin with the "magic number" whose ASCII representation is "#!", so execve(2) returns ENOEXEC then) the shell will interpret the program in a subshell. The child shell will reinitialize itself in this case, so that the effect will be as if a new shell had been invoked to handle the ad- hoc shell script, except that the location of hashed commands located in the parent shell will be remembered by the child. Note that previous versions of this document and the source code itself misleadingly and sporadically refer to a shell script without a magic number as a "shell procedure". Path Search When locating a command, the shell first looks to see if it has a shell function by that name. Then it looks for a builtin command by that name. If a builtin command is not found, one of two things happen: 1. Command names containing a slash are simply executed without performing any searches. 2. The shell searches each entry in PATH in turn for the command. The value of the PATH variable should be a series of entries separated by colons. Each entry consists of a directory name. The current directory may be indicated implicitly by an empty directory name, or explicitly by a single period. Command Exit Status Each command has an exit status that can influence the behaviour of other shell commands. The paradigm is that a command exits with zero for normal or success, and non-zero for failure, error, or a false indication. The man page for each command should indicate the various exit codes and what they mean. Additionally, the builtin commands return exit codes, as does an executed shell function. If a command consists entirely of variable assignments then the exit status of the command is that of the last command substitution if any, otherwise 0. Complex Commands Complex commands are combinations of simple commands with control operators or reserved words, together creating a larger complex command. More generally, a command is one of the following: • simple command • pipeline • list or compound-list • compound command • function definition Unless otherwise stated, the exit status of a command is that of the last simple command executed by the command. Pipelines A pipeline is a sequence of one or more commands separated by the control operator |. The standard output of all but the last command is connected to the standard input of the next command. The standard output of the last command is inherited from the shell, as usual. The format for a pipeline is: [!] command1 [| command2 ...] The standard output of command1 is connected to the standard input of command2. 
The standard input, standard output, or both of a command is considered to be assigned by the pipeline before any redirection specified by redirection operators that are part of the command. If the pipeline is not in the background (discussed later), the shell waits for all commands to complete. If the reserved word ! does not precede the pipeline, the exit status is the exit status of the last command specified in the pipeline. Otherwise, the exit status is the logical NOT of the exit status of the last command. That is, if the last command returns zero, the exit status is 1; if the last command returns greater than zero, the exit status is zero. Because pipeline assignment of standard input or standard output or both takes place before redirection, it can be modified by redirection. For example: $ command1 2>&1 | command2 sends both the standard output and standard error of command1 to the standard input of command2. A ; or ⟨newline⟩ terminator causes the preceding AND-OR-list (described next) to be executed sequentially; a & causes asynchronous execution of the preceding AND-OR-list. Note that unlike some other shells, each process in the pipeline is a child of the invoking shell (unless it is a shell builtin, in which case it executes in the current shell – but any effect it has on the environment is wiped). Background Commands – & If a command is terminated by the control operator ampersand (&), the shell executes the command asynchronously – that is, the shell does not wait for the command to finish before executing the next command. The format for running a command in background is: command1 & [command2 & ...] If the shell is not interactive, the standard input of an asynchronous command is set to /dev/null. Lists – Generally Speaking A list is a sequence of zero or more commands separated by newlines, semicolons, or ampersands, and optionally terminated by one of these three characters. The commands in a list are executed in the order they are written. If command is followed by an ampersand, the shell starts the command and immediately proceeds onto the next command; otherwise it waits for the command to terminate before proceeding to the next one. Short-Circuit List Operators “&&” and “||” are AND-OR list operators. “&&” executes the first command, and then executes the second command if and only if the exit status of the first command is zero. “||” is similar, but executes the second command if and only if the exit status of the first command is nonzero. “&&” and “||” both have the same priority. Flow-Control Constructs – if, while, for, case The syntax of the if command is if list then list [ elif list then list ] ... [ else list ] fi The syntax of the while command is while list do list done The two lists are executed repeatedly while the exit status of the first list is zero. The until command is similar, but has the word until in place of while, which causes it to repeat until the exit status of the first list is zero. The syntax of the for command is for variable [ in [ word ... ] ] do list done The words following in are expanded, and then the list is executed repeatedly with the variable set to each word in turn. Omitting in word ... is equivalent to in "$@". The syntax of the break and continue command is break [ num ] continue [ num ] Break terminates the num innermost for or while loops. Continue continues with the next iteration of the innermost loop. These are implemented as builtin commands. The syntax of the case command is case word in [(]pattern) list ;; ... 
esac The pattern can actually be one or more patterns (see Shell Patterns described later), separated by “|” characters. The “(” character before the pattern is optional. Grouping Commands Together Commands may be grouped by writing either (list) or { list; } The first of these executes the commands in a subshell. Builtin commands grouped into a (list) will not affect the current shell. The second form does not fork another shell so is slightly more efficient. Grouping commands together this way allows you to redirect their output as though they were one program: { printf " hello " ; printf " world\n" ; } > greeting Note that “}” must follow a control operator (here, “;”) so that it is recognized as a reserved word and not as another command argument. Functions The syntax of a function definition is name () command A function definition is an executable statement; when executed it installs a function named name and returns an exit status of zero. The command is normally a list enclosed between “{” and “}”. Variables may be declared to be local to a function by using a local command. This should appear as the first statement of a function, and the syntax is local [variable | -] ... Local is implemented as a builtin command. When a variable is made local, it inherits the initial value and exported and readonly flags from the variable with the same name in the surrounding scope, if there is one. Otherwise, the variable is initially unset. The shell uses dynamic scoping, so that if you make the variable x local to function f, which then calls function g, references to the variable x made inside g will refer to the variable x declared inside f, not to the global variable named x. The only special parameter that can be made local is “-”. Making “-” local causes any shell options that are changed via the set command inside the function to be restored to their original values when the function returns. The syntax of the return command is return [exitstatus] It terminates the currently executing function. Return is implemented as a builtin command. Variables and Parameters The shell maintains a set of parameters. A parameter denoted by a name is called a variable. When starting up, the shell turns all the environment variables into shell variables. New variables can be set using the form name=value Variables set by the user must have a name consisting solely of alphabetics, numerics, and underscores - the first of which must not be numeric. A parameter can also be denoted by a number or a special character as explained below. Positional Parameters A positional parameter is a parameter denoted by a number (n > 0). The shell sets these initially to the values of its command line arguments that follow the name of the shell script. The set builtin can also be used to set or reset them. Special Parameters A special parameter is a parameter denoted by one of the following special characters. The value of the parameter is listed next to its character. * Expands to the positional parameters, starting from one. When the expansion occurs within a double-quoted string it expands to a single field with the value of each parameter separated by the first character of the IFS variable, or by a ⟨space⟩ if IFS is unset. @ Expands to the positional parameters, starting from one. When the expansion occurs within double-quotes, each positional parameter expands as a separate argument. If there are no positional parameters, the expansion of @ generates zero arguments, even when @ is double-quoted. 
What this basically means, for example, is if $1 is “abc” and $2 is “def ghi”, then "$@" expands to the two arguments: "abc" "def ghi" # Expands to the number of positional parameters. ? Expands to the exit status of the most recent pipeline. - (Hyphen.) Expands to the current option flags (the single-letter option names concatenated into a string) as specified on invocation, by the set builtin command, or implicitly by the shell. $ Expands to the process ID of the invoked shell. A subshell retains the same value of $ as its parent. ! Expands to the process ID of the most recent background command executed from the current shell. For a pipeline, the process ID is that of the last command in the pipeline. 0 (Zero.) Expands to the name of the shell or shell script. Word Expansions This clause describes the various expansions that are performed on words. Not all expansions are performed on every word, as explained later. Tilde expansions, parameter expansions, command substitutions, arithmetic expansions, and quote removals that occur within a single word expand to a single field. It is only field splitting or pathname expansion that can create multiple fields from a single word. The single exception to this rule is the expansion of the special parameter @ within double-quotes, as was described above. The order of word expansion is: 1. Tilde Expansion, Parameter Expansion, Command Substitution, Arithmetic Expansion (these all occur at the same time). 2. Field Splitting is performed on fields generated by step (1) unless the IFS variable is null. 3. Pathname Expansion (unless set -f is in effect). 4. Quote Removal. The $ character is used to introduce parameter expansion, command substitution, or arithmetic evaluation. Tilde Expansion (substituting a user's home directory) A word beginning with an unquoted tilde character (~) is subjected to tilde expansion. All the characters up to a slash (/) or the end of the word are treated as a username and are replaced with the user's home directory. If the username is missing (as in ~/foobar), the tilde is replaced with the value of the HOME variable (the current user's home directory). Parameter Expansion The format for parameter expansion is as follows: ${expression} where expression consists of all characters until the matching “}”. Any “}” escaped by a backslash or within a quoted string, and characters in embedded arithmetic expansions, command substitutions, and variable expansions, are not examined in determining the matching “}”. The simplest form for parameter expansion is: ${parameter} The value, if any, of parameter is substituted. The parameter name or symbol can be enclosed in braces, which are optional except for positional parameters with more than one digit or when parameter is followed by a character that could be interpreted as part of the name. If a parameter expansion occurs inside double-quotes: 1. Pathname expansion is not performed on the results of the expansion. 2. Field splitting is not performed on the results of the expansion, with the exception of @. In addition, a parameter expansion can be modified by using one of the following formats. ${parameter:-word} Use Default Values. If parameter is unset or null, the expansion of word is substituted; otherwise, the value of parameter is substituted. ${parameter:=word} Assign Default Values. If parameter is unset or null, the expansion of word is assigned to parameter. In all cases, the final value of parameter is substituted. 
Only variables, not positional parameters or special parameters, can be assigned in this way. ${parameter:?[word]} Indicate Error if Null or Unset. If parameter is unset or null, the expansion of word (or a message indicating it is unset if word is omitted) is written to standard error and the shell exits with a nonzero exit status. Otherwise, the value of parameter is substituted. An interactive shell need not exit. ${parameter:+word} Use Alternative Value. If parameter is unset or null, null is substituted; otherwise, the expansion of word is substituted. In the parameter expansions shown previously, use of the colon in the format results in a test for a parameter that is unset or null; omission of the colon results in a test for a parameter that is only unset. ${#parameter} String Length. The length in characters of the value of parameter. The following four varieties of parameter expansion provide for substring processing. In each case, pattern matching notation (see Shell Patterns), rather than regular expression notation, is used to evaluate the patterns. If parameter is * or @, the result of the expansion is unspecified. Enclosing the full parameter expansion string in double-quotes does not cause the following four varieties of pattern characters to be quoted, whereas quoting characters within the braces has this effect. ${parameter%word} Remove Smallest Suffix Pattern. The word is expanded to produce a pattern. The parameter expansion then results in parameter, with the smallest portion of the suffix matched by the pattern deleted. ${parameter%%word} Remove Largest Suffix Pattern. The word is expanded to produce a pattern. The parameter expansion then results in parameter, with the largest portion of the suffix matched by the pattern deleted. ${parameter#word} Remove Smallest Prefix Pattern. The word is expanded to produce a pattern. The parameter expansion then results in parameter, with the smallest portion of the prefix matched by the pattern deleted. ${parameter##word} Remove Largest Prefix Pattern. The word is expanded to produce a pattern. The parameter expansion then results in parameter, with the largest portion of the prefix matched by the pattern deleted.
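A brief sketch of the expansion forms above, runnable in dash (the values are illustrative):

```sh
path=/usr/local/bin/dash
echo "${path##*/}"         # dash            (remove largest prefix pattern)
echo "${path%/*}"          # /usr/local/bin  (remove smallest suffix pattern)
echo "${#path}"            # 19              (string length)
echo "${missing:-default}" # default         (use default value: missing is unset)
```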
# dash > Debian Almquist Shell, a modern, POSIX-compliant implementation of `sh` (not > Bash-compatible). More information: https://manned.org/dash. * Start an interactive shell session: `dash` * Execute specific [c]ommands: `dash -c "{{echo 'dash is executed'}}"` * Execute a specific script: `dash {{path/to/script.sh}}` * Check a specific script for syntax errors: `dash -n {{path/to/script.sh}}` * Execute a specific script while printing each command before executing it: `dash -x {{path/to/script.sh}}` * Execute a specific script and stop at the first [e]rror: `dash -e {{path/to/script.sh}}` * Execute specific commands from `stdin`: `{{echo "echo 'dash is executed'"}} | dash`
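And a sketch of the redirection and here-document forms described in the dash text above (the file name is illustrative):

```sh
dash <<'EOF'
exec 3> notes.txt              # open file descriptor 3 for writing
echo saved >&3                 # send this command's stdout to fd 3
exec 3>&-                      # close fd 3
ls /nonexistent 2>&1 | wc -l   # stderr joins stdout before the pipe
EOF
```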
ex
The ex utility is a line-oriented text editor. There are two other modes of the editor—open and visual—in which screen- oriented editing is available. This is described more fully by the ex open and visual commands and in vi(1p). If an operand is '-', the results are unspecified. This section uses the term edit buffer to describe the current working text. No specific implementation is implied by this term. All editing changes are performed on the edit buffer, and no changes to it shall affect any file until an editor command writes the file. Certain terminals do not have all the capabilities necessary to support the complete ex definition, such as the full-screen editing commands (visual mode or open mode). When these commands cannot be supported on such terminals, this condition shall not produce an error message such as ``not an editor command'' or report a syntax error. The implementation may either accept the commands and produce results on the screen that are the result of an unsuccessful attempt to meet the requirements of this volume of POSIX.1‐2017 or report an error describing the terminal- related deficiency. The ex utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines, except for the unspecified usage of '-', and that '+' may be recognized as an option delimiter as well as '-'. The following options shall be supported: -c command Specify an initial command to be executed in the first edit buffer loaded from an existing file (see the EXTENDED DESCRIPTION section). Implementations may support more than a single -c option. In such implementations, the specified commands shall be executed in the order specified on the command line. -r Recover the named files (see the EXTENDED DESCRIPTION section). Recovery information for a file shall be saved during an editor or system crash (for example, when the editor is terminated by a signal which the editor can catch), or after the use of an ex preserve command. A crash in this context is an unexpected failure of the system or utility that requires restarting the failed system or utility. A system crash implies that any utilities running at the time also crash. In the case of an editor or system crash, the number of changes to the edit buffer (since the most recent preserve command) that will be recovered is unspecified. If no file operands are given and the -t option is not specified, all other options, the EXINIT variable, and any .exrc files shall be ignored; a list of all recoverable files available to the invoking user shall be written, and the editor shall exit normally without further action. -R Set readonly edit option. -s Prepare ex for batch use by taking the following actions: * Suppress writing prompts and informational (but not diagnostic) messages. * Ignore the value of TERM and any implementation default terminal type and assume the terminal is a type incapable of supporting open or visual modes; see the visual command and the description of vi(1p). * Suppress the use of the EXINIT environment variable and the reading of any .exrc file; see the EXTENDED DESCRIPTION section. * Suppress autoindentation, ignoring the value of the autoindent edit option. -t tagstring Edit the file containing the specified tagstring; see ctags(1p). The tags feature represented by -t tagstring and the tag command is optional. It shall be provided on any system that also provides a conforming implementation of ctags; otherwise, the use of -t produces undefined results. 
On any system, it shall be an error to specify more than a single -t option. -v Begin in visual mode (see vi(1p)). -w size Set the value of the window editor option to size.
# ex > Command-line text editor. See also: `vim`. More information: > https://www.vim.org. * Open a file: `ex {{path/to/file}}` * Save and Quit: `wq<Enter>` * Undo the last operation: `undo<Enter>` * Search for a pattern in the file: `/{{search_pattern}}<Enter>` * Perform a regular expression substitution in the whole file: `%s/{{regular_expression}}/{{replacement}}/g<Enter>` * Insert text: `i<Enter>{{text}}<C-c>` * Switch to Vim: `visual<Enter>`
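A sketch of scripted (batch) editing with the -s option described above (the file name and pattern are illustrative):

```sh
# -s suppresses prompts and messages, so ex can be driven from a here-document
ex -s config.txt <<'EOF'
%s/old_host/new_host/g
wq
EOF
```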
time
The time utility shall invoke the utility named by the utility operand with arguments supplied as the argument operands and write a message to standard error that lists timing statistics for the utility. The message shall include the following information: * The elapsed (real) time between invocation of utility and its termination. * The User CPU time, equivalent to the sum of the tms_utime and tms_cutime fields returned by the times() function defined in the System Interfaces volume of POSIX.1‐2017 for the process in which utility is executed. * The System CPU time, equivalent to the sum of the tms_stime and tms_cstime fields returned by the times() function for the process in which utility is executed. The precision of the timing shall be no less than the granularity defined for the size of the clock tick unit on the system, but the results shall be reported in terms of standard time units (for example, 0.02 seconds, 00:00:00.02, 1m33.75s, 365.21 seconds), not numbers of clock ticks. When time is used as part of a pipeline, the times reported are unspecified, except when it is the sole command within a grouping command (see Section 2.9.4.1, Grouping Commands) in that pipeline. For example, the times reported for time a | b | c and a | b | time c are unspecified, while { time a; } | b | c and a | b | (time c) report on utilities a and c, respectively. The time utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following option shall be supported: -p Write the timing output to standard error in the format shown in the STDERR section.
# time > Measure how long a command took to run. Note: `time` can either exist as a > shell builtin, a standalone program or both. More information: > https://manned.org/time. * Run the `command` and print the time measurements to `stderr`: `time {{command}}`
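A sketch of the -p format; note that the statistics go to standard error, not standard output (the figures shown are illustrative):

```sh
time -p sleep 1
# real 1.00
# user 0.00
# sys 0.00

# Capture the timing separately from the command's own output:
{ time -p sleep 1 ; } 2> timing.txt
```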
printf
The printf utility shall write formatted operands to the standard output. The argument operands shall be formatted under control of the format operand. No options are supported.
# printf > Format and print text. More information: > https://www.gnu.org/software/coreutils/printf. * Print a text message: `printf "{{%s\n}}" "{{Hello world}}"` * Print an integer in bold blue: `printf "{{\e[1;34m%.3d\e[0m\n}}" {{42}}` * Print a floating-point number with the Unicode Euro sign: `printf "{{\u20AC %.2f\n}}" {{123.4}}` * Print a text message composed with environment variables: `printf "{{var1: %s\tvar2: %s\n}}" "{{$VAR1}}" "{{$VAR2}}"` * Store a formatted message in a variable (does not work on zsh): `printf -v {{myvar}} {{"This is %s = %d\n" "a year" 2016}}`
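A small sketch of how the format operand controls the argument operands (the values are illustrative):

```sh
# The format is reused until all argument operands are consumed
printf '%s=%d\n' a 1 b 2
# a=1
# b=2

printf '%-10s|%05.2f|\n' total 3.14159
# total     |03.14|
```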
pwd
The pwd utility shall write to standard output an absolute pathname of the current working directory, which does not contain the filenames dot or dot-dot. The pwd utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported by the implementation: -L If the PWD environment variable contains an absolute pathname of the current directory and the pathname does not contain any components that are dot or dot-dot, pwd shall write this pathname to standard output, except that if the PWD environment variable is longer than {PATH_MAX} bytes including the terminating null, it is unspecified whether pwd writes this pathname to standard output or behaves as if the -P option had been specified. Otherwise, the -L option shall behave as the -P option. -P The pathname written to standard output shall not contain any components that refer to files of type symbolic link. If there are multiple pathnames that the pwd utility could write to standard output, one beginning with a single <slash> character and one or more beginning with two <slash> characters, then it shall write the pathname beginning with a single <slash> character. The pathname shall not contain any unnecessary <slash> characters after the leading one or two <slash> characters. If both -L and -P are specified, the last one shall apply. If neither -L nor -P is specified, the pwd utility shall behave as if -L had been specified.
# pwd > Print name of current/working directory. More information: > https://www.gnu.org/software/coreutils/pwd. * Print the current directory: `pwd` * Print the current directory, and resolve all symlinks (i.e. show the "physical" path): `pwd -P`
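The difference between -L and -P only shows up once a symbolic link is on the path. A minimal sketch (the paths under /tmp are illustrative):

```sh
#!/bin/sh
mkdir -p /tmp/pwd-demo/real
ln -sf /tmp/pwd-demo/real /tmp/pwd-demo/link
cd /tmp/pwd-demo/link || exit 1

pwd -L    # logical path, taken from $PWD: /tmp/pwd-demo/link
pwd -P    # physical path, symlinks resolved: /tmp/pwd-demo/real
```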
loadkeys
The program loadkeys reads the file or files specified by FILENAME.... Its main purpose is to load the kernel keymap for the console. You can specify the console device with the -C (or --console) option.
# loadkeys > Load the kernel keymap for the console. More information: > https://manned.org/loadkeys. * Load the default keymap: `loadkeys --default` * Load the default keymap when an unusual keymap is active and the `-` sign cannot be typed: `loadkeys defmap` * Create a kernel source table: `loadkeys --mktable` * Create a binary keymap: `loadkeys --bkeymap` * Search for and parse the keymap without loading it: `loadkeys --parse` * Load the keymap, suppressing all output: `loadkeys --quiet` * Load a keymap from the specified file for the console: `loadkeys --console {{/dev/ttyN}} {{/path/to/file}}` * Use standard names for keymaps of different locales: `loadkeys --console {{/dev/ttyN}} {{uk}}`
env
Set each NAME to VALUE in the environment and run COMMAND. Mandatory arguments to long options are mandatory for short options too. -i, --ignore-environment start with an empty environment -0, --null end each output line with NUL, not newline -u, --unset=NAME remove variable from the environment -C, --chdir=DIR change working directory to DIR -S, --split-string=S process and split S into separate arguments; used to pass multiple arguments on shebang lines --block-signal[=SIG] block delivery of SIG signal(s) to COMMAND --default-signal[=SIG] reset handling of SIG signal(s) to the default --ignore-signal[=SIG] set handling of SIG signal(s) to do nothing --list-signal-handling list non default signal handling to stderr -v, --debug print verbose information for each processing step --help display this help and exit --version output version information and exit A mere - implies -i. If no COMMAND, print the resulting environment. SIG may be a signal name like 'PIPE', or a signal number like '13'. Without SIG, all known signals are included. Multiple signals can be comma-separated. An empty SIG argument is a no-op. Exit status: 125 if the env command itself fails 126 if COMMAND is found but cannot be invoked 127 if COMMAND cannot be found - the exit status of COMMAND otherwise -S/--split-string usage in scripts The -S option allows specifying multiple parameters in a script. Running a script named 1.pl containing the following first line: #!/usr/bin/env -S perl -w -T ... Will execute perl -w -T 1.pl . Without the '-S' parameter the script will likely fail with: /usr/bin/env: 'perl -w -T': No such file or directory See the full documentation for more details. --default-signal[=SIG] usage This option allows setting a signal handler to its default action, which is not possible using the traditional shell trap command. The following example ensures that seq will be terminated by SIGPIPE no matter how this signal is being handled in the process invoking the command. sh -c 'env --default-signal=PIPE seq inf | head -n1'
# env > Show the environment or run a program in a modified environment. More > information: https://www.gnu.org/software/coreutils/env. * Show the environment: `env` * Run a program. Often used in scripts after the shebang (#!) for looking up the path to the program: `env {{program}}` * Clear the environment and run a program: `env -i {{program}}` * Remove variable from the environment and run a program: `env -u {{variable}} {{program}}` * Set a variable and run a program: `env {{variable}}={{value}} {{program}}` * Set multiple variables and run a program: `env {{variable1}}={{value}} {{variable2}}={{value}} {{variable3}}={{value}} {{program}}`
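A small sketch of the common uses; the variable names are illustrative:

```sh
#!/bin/sh
# A variable set via env is visible only to the child process.
env GREETING=hello sh -c 'echo "$GREETING"'    # hello
echo "${GREETING-unset}"                       # unset

# -i starts the child with an empty environment.
env -i sh -c 'env | wc -l'                     # 0, or close to it

# -u removes one variable before running the command.
env -u HOME sh -c 'echo "${HOME-unset}"'       # unset
```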
look
The look utility displays any lines in file which contain string as a prefix. As look performs a binary search, the lines in file must be sorted (where sort(1) was given the same options -d and/or -f that look is invoked with). If file is not specified, the file /usr/share/dict/words is used; in that case, only alphanumeric characters are compared and the case of alphabetic characters is ignored. -a, --alternative Use the alternative dictionary file. -d, --alphanum Use normal dictionary character set and order, i.e., only blanks and alphanumeric characters are compared. This is on by default if no file is specified. Note that blanks have been added to the dictionary character set for compatibility with the sort -d command since version 2.28. -f, --ignore-case Ignore the case of alphabetic characters. This is on by default if no file is specified. -t, --terminate character Specify a string termination character, i.e., only the characters in string up to and including the first occurrence of character are compared. -h, --help Display help text and exit. -V, --version Print version and exit. The look utility exits 0 if one or more lines were found and displayed, 1 if no lines were found, and >1 if an error occurred.
# look > Look for lines in a sorted file. More information: https://manned.org/look. * Look for lines which begin with the given prefix: `look {{prefix}} {{path/to/file}}` * Look for lines ignoring case: `look --ignore-case {{prefix}} {{path/to/file}}`
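Because look relies on a binary search, an unsorted file gives wrong answers silently. A minimal sketch using an illustrative temporary file:

```sh
#!/bin/sh
# Build a sorted word list first; look requires sorted input.
printf '%s\n' pear apple apricot banana | sort > /tmp/words.txt

look apri /tmp/words.txt    # apricot
look ap /tmp/words.txt      # apple, apricot (prefix match)
look kiwi /tmp/words.txt || echo "not found (exit status 1)"
```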
fgrep
grep searches for PATTERNS in each FILE. PATTERNS is one or more patterns separated by newline characters, and grep prints each line that matches a pattern. Typically PATTERNS should be quoted when grep is used in a shell command. A FILE of “-” stands for standard input. If no FILE is given, recursive searches examine the working directory, and nonrecursive searches read standard input. Generic Program Information --help Output a usage message and exit. -V, --version Output the version number of grep and exit. Pattern Syntax -E, --extended-regexp Interpret PATTERNS as extended regular expressions (EREs, see below). -F, --fixed-strings Interpret PATTERNS as fixed strings, not regular expressions. -G, --basic-regexp Interpret PATTERNS as basic regular expressions (BREs, see below). This is the default. -P, --perl-regexp Interpret PATTERNS as Perl-compatible regular expressions (PCREs). This option is experimental when combined with the -z (--null-data) option, and grep -P may warn of unimplemented features. Matching Control -e PATTERNS, --regexp=PATTERNS Use PATTERNS as the patterns. If this option is used multiple times or is combined with the -f (--file) option, search for all patterns given. This option can be used to protect a pattern beginning with “-”. -f FILE, --file=FILE Obtain patterns from FILE, one per line. If this option is used multiple times or is combined with the -e (--regexp) option, search for all patterns given. The empty file contains zero patterns, and therefore matches nothing. If FILE is - , read patterns from standard input. -i, --ignore-case Ignore case distinctions in patterns and input data, so that characters that differ only in case match each other. --no-ignore-case Do not ignore case distinctions in patterns and input data. This is the default. This option is useful for passing to shell scripts that already use -i, to cancel its effects because the two options override each other. -v, --invert-match Invert the sense of matching, to select non-matching lines. -w, --word-regexp Select only those lines containing matches that form whole words. The test is that the matching substring must either be at the beginning of the line, or preceded by a non-word constituent character. Similarly, it must be either at the end of the line or followed by a non-word constituent character. Word-constituent characters are letters, digits, and the underscore. This option has no effect if -x is also specified. -x, --line-regexp Select only those matches that exactly match the whole line. For a regular expression pattern, this is like parenthesizing the pattern and then surrounding it with ^ and $. General Output Control -c, --count Suppress normal output; instead print a count of matching lines for each input file. With the -v, --invert-match option (see above), count non-matching lines. --color[=WHEN], --colour[=WHEN] Surround the matched (non-empty) strings, matching lines, context lines, file names, line numbers, byte offsets, and separators (for fields and groups of context lines) with escape sequences to display them in color on the terminal. The colors are defined by the environment variable GREP_COLORS. WHEN is never, always, or auto. -L, --files-without-match Suppress normal output; instead print the name of each input file from which no output would normally have been printed. -l, --files-with-matches Suppress normal output; instead print the name of each input file from which output would normally have been printed. Scanning each input file stops upon first match. 
-m NUM, --max-count=NUM Stop reading a file after NUM matching lines. If NUM is zero, grep stops right away without reading input. A NUM of -1 is treated as infinity and grep does not stop; this is the default. If the input is standard input from a regular file, and NUM matching lines are output, grep ensures that the standard input is positioned to just after the last matching line before exiting, regardless of the presence of trailing context lines. This enables a calling process to resume a search. When grep stops after NUM matching lines, it outputs any trailing context lines. When the -c or --count option is also used, grep does not output a count greater than NUM. When the -v or --invert-match option is also used, grep stops after outputting NUM non-matching lines. -o, --only-matching Print only the matched (non-empty) parts of a matching line, with each such part on a separate output line. -q, --quiet, --silent Quiet; do not write anything to standard output. Exit immediately with zero status if any match is found, even if an error was detected. Also see the -s or --no-messages option. -s, --no-messages Suppress error messages about nonexistent or unreadable files. Output Line Prefix Control -b, --byte-offset Print the 0-based byte offset within the input file before each line of output. If -o (--only-matching) is specified, print the offset of the matching part itself. -H, --with-filename Print the file name for each match. This is the default when there is more than one file to search. This is a GNU extension. -h, --no-filename Suppress the prefixing of file names on output. This is the default when there is only one file (or only standard input) to search. --label=LABEL Display input actually coming from standard input as input coming from file LABEL. This can be useful for commands that transform a file's contents before searching, e.g., gzip -cd foo.gz | grep --label=foo -H 'some pattern'. See also the -H option. -n, --line-number Prefix each line of output with the 1-based line number within its input file. -T, --initial-tab Make sure that the first character of actual line content lies on a tab stop, so that the alignment of tabs looks normal. This is useful with options that prefix their output to the actual content: -H,-n, and -b. In order to improve the probability that lines from a single file will all start at the same column, this also causes the line number and byte offset (if present) to be printed in a minimum size field width. -Z, --null Output a zero byte (the ASCII NUL character) instead of the character that normally follows a file name. For example, grep -lZ outputs a zero byte after each file name instead of the usual newline. This option makes the output unambiguous, even in the presence of file names containing unusual characters like newlines. This option can be used with commands like find -print0, perl -0, sort -z, and xargs -0 to process arbitrary file names, even those that contain newline characters. Context Line Control -A NUM, --after-context=NUM Print NUM lines of trailing context after matching lines. Places a line containing a group separator (--) between contiguous groups of matches. With the -o or --only-matching option, this has no effect and a warning is given. -B NUM, --before-context=NUM Print NUM lines of leading context before matching lines. Places a line containing a group separator (--) between contiguous groups of matches. With the -o or --only-matching option, this has no effect and a warning is given. 
-C NUM, -NUM, --context=NUM Print NUM lines of output context. Places a line containing a group separator (--) between contiguous groups of matches. With the -o or --only-matching option, this has no effect and a warning is given. --group-separator=SEP When -A, -B, or -C are in use, print SEP instead of -- between groups of lines. --no-group-separator When -A, -B, or -C are in use, do not print a separator between groups of lines. File and Directory Selection -a, --text Process a binary file as if it were text; this is equivalent to the --binary-files=text option. --binary-files=TYPE If a file's data or metadata indicate that the file contains binary data, assume that the file is of type TYPE. Non-text bytes indicate binary data; these are either output bytes that are improperly encoded for the current locale, or null input bytes when the -z option is not given. By default, TYPE is binary, and grep suppresses output after null input binary data is discovered, and suppresses output lines that contain improperly encoded data. When some output is suppressed, grep follows any output with a message to standard error saying that a binary file matches. If TYPE is without-match, when grep discovers null input binary data it assumes that the rest of the file does not match; this is equivalent to the -I option. If TYPE is text, grep processes a binary file as if it were text; this is equivalent to the -a option. When type is binary, grep may treat non-text bytes as line terminators even without the -z option. This means choosing binary versus text can affect whether a pattern matches a file. For example, when type is binary the pattern q$ might match q immediately followed by a null byte, even though this is not matched when type is text. Conversely, when type is binary the pattern . (period) might not match a null byte. Warning: The -a option might output binary garbage, which can have nasty side effects if the output is a terminal and if the terminal driver interprets some of it as commands. On the other hand, when reading files whose text encodings are unknown, it can be helpful to use -a or to set LC_ALL='C' in the environment, in order to find more matches even if the matches are unsafe for direct display. -D ACTION, --devices=ACTION If an input file is a device, FIFO or socket, use ACTION to process it. By default, ACTION is read, which means that devices are read just as if they were ordinary files. If ACTION is skip, devices are silently skipped. -d ACTION, --directories=ACTION If an input file is a directory, use ACTION to process it. By default, ACTION is read, i.e., read directories just as if they were ordinary files. If ACTION is skip, silently skip directories. If ACTION is recurse, read all files under each directory, recursively, following symbolic links only if they are on the command line. This is equivalent to the -r option. --exclude=GLOB Skip any command-line file with a name suffix that matches the pattern GLOB, using wildcard matching; a name suffix is either the whole name, or a trailing part that starts with a non-slash character immediately after a slash (/) in the name. When searching recursively, skip any subfile whose base name matches GLOB; the base name is the part after the last slash. A pattern can use *, ?, and [...] as wildcards, and \ to quote a wildcard or backslash character literally. --exclude-from=FILE Skip files whose base name matches any of the file-name globs read from FILE (using wildcard matching as described under --exclude). 
--exclude-dir=GLOB Skip any command-line directory with a name suffix that matches the pattern GLOB. When searching recursively, skip any subdirectory whose base name matches GLOB. Ignore any redundant trailing slashes in GLOB. -I Process a binary file as if it did not contain matching data; this is equivalent to the --binary-files=without-match option. --include=GLOB Search only files whose base name matches GLOB (using wildcard matching as described under --exclude). If contradictory --include and --exclude options are given, the last matching one wins. If no --include or --exclude options match, a file is included unless the first such option is --include. -r, --recursive Read all files under each directory, recursively, following symbolic links only if they are on the command line. Note that if no file operand is given, grep searches the working directory. This is equivalent to the -d recurse option. -R, --dereference-recursive Read all files under each directory, recursively. Follow all symbolic links, unlike -r. Other Options --line-buffered Use line buffering on output. This can cause a performance penalty. -U, --binary Treat the file(s) as binary. By default, under MS-DOS and MS-Windows, grep guesses whether a file is text or binary as described for the --binary-files option. If grep decides the file is a text file, it strips the CR characters from the original file contents (to make regular expressions with ^ and $ work correctly). Specifying -U overrules this guesswork, causing all files to be read and passed to the matching mechanism verbatim; if the file is a text file with CR/LF pairs at the end of each line, this will cause some regular expressions to fail. This option has no effect on platforms other than MS-DOS and MS-Windows. -z, --null-data Treat input and output data as sequences of lines, each terminated by a zero byte (the ASCII NUL character) instead of a newline. Like the -Z or --null option, this option can be used with commands like sort -z to process arbitrary file names.
# fgrep > Matches fixed strings in files. Equivalent to `grep -F`. More information: > https://www.gnu.org/software/grep/manual/grep.html. * Search for an exact string in a file: `fgrep {{search_string}} {{path/to/file}}` * Search only lines that match entirely in files: `fgrep -x {{search_string}} {{path/to/file1}} {{path/to/file2}}` * Count the number of lines that match the given string in a file: `fgrep -c {{search_string}} {{path/to/file}}` * Show the line number in the file along with the line matched: `fgrep -n {{search_string}} {{path/to/file}}` * Display all lines except those that contain the search string: `fgrep -v {{search_string}} {{path/to/file}}` * Display filenames whose content matches the search string at least once: `fgrep -l {{search_string}} {{path/to/file1}} {{path/to/file2}}`
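The point of fixed-string matching is that regular-expression metacharacters lose their special meaning. A minimal sketch with an illustrative file:

```sh
#!/bin/sh
printf '%s\n' 'a.b' 'axb' > /tmp/sample.txt

grep  'a.b' /tmp/sample.txt   # matches both lines: "." is a wildcard
fgrep 'a.b' /tmp/sample.txt   # matches only the literal line "a.b"
```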
df
This manual page documents the GNU version of df. df displays the amount of space available on the file system containing each file name argument. If no file name is given, the space available on all currently mounted file systems is shown. Space is shown in 1K blocks by default, unless the environment variable POSIXLY_CORRECT is set, in which case 512-byte blocks are used. If an argument is the absolute file name of a device node containing a mounted file system, df shows the space available on that file system rather than on the file system containing the device node. This version of df cannot show the space available on unmounted file systems, because on most kinds of systems doing so requires very nonportable intimate knowledge of file system structures. Show information about the file system on which each FILE resides, or all file systems by default. Mandatory arguments to long options are mandatory for short options too. -a, --all include pseudo, duplicate, inaccessible file systems -B, --block-size=SIZE scale sizes by SIZE before printing them; e.g., '-BM' prints sizes in units of 1,048,576 bytes; see SIZE format below -h, --human-readable print sizes in powers of 1024 (e.g., 1023M) -H, --si print sizes in powers of 1000 (e.g., 1.1G) -i, --inodes list inode information instead of block usage -k like --block-size=1K -l, --local limit listing to local file systems --no-sync do not invoke sync before getting usage info (default) --output[=FIELD_LIST] use the output format defined by FIELD_LIST, or print all fields if FIELD_LIST is omitted. -P, --portability use the POSIX output format --sync invoke sync before getting usage info --total elide all entries insignificant to available space, and produce a grand total -t, --type=TYPE limit listing to file systems of type TYPE -T, --print-type print file system type -x, --exclude-type=TYPE limit listing to file systems not of type TYPE -v (ignored) --help display this help and exit --version output version information and exit Display values are in units of the first available SIZE from --block-size, and the DF_BLOCK_SIZE, BLOCK_SIZE and BLOCKSIZE environment variables. Otherwise, units default to 1024 bytes (or 512 if POSIXLY_CORRECT is set). The SIZE argument is an integer and optional unit (example: 10K is 10*1024). Units are K,M,G,T,P,E,Z,Y,R,Q (powers of 1024) or KB,MB,... (powers of 1000). Binary prefixes can be used, too: KiB=K, MiB=M, and so on. FIELD_LIST is a comma-separated list of columns to be included. Valid field names are: 'source', 'fstype', 'itotal', 'iused', 'iavail', 'ipcent', 'size', 'used', 'avail', 'pcent', 'file' and 'target' (see info page).
# df > Gives an overview of the filesystem disk space usage. More information: > https://www.gnu.org/software/coreutils/df. * Display all filesystems and their disk usage: `df` * Display all filesystems and their disk usage in human-readable form: `df -h` * Display the filesystem and its disk usage containing the given file or directory: `df {{path/to/file_or_directory}}` * Display statistics on the number of free inodes: `df -i` * Display filesystems but exclude the specified types: `df -x {{squashfs}} -x {{tmpfs}}`
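A sketch of --output using fields from the FIELD_LIST above:

```sh
#!/bin/sh
# Only the mount point, type, usage percentage and free space for /.
df --output=target,fstype,pcent,avail /

# Human-readable sizes plus a grand total across all file systems.
df -h --output=target,size,used,avail --total
```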
sha512sum
Print or check SHA512 (512-bit) checksums. With no FILE, or when FILE is -, read standard input. -b, --binary read in binary mode -c, --check read checksums from the FILEs and check them --tag create a BSD-style checksum -t, --text read in text mode (default) -z, --zero end each output line with NUL, not newline, and disable file name escaping The following five options are useful only when verifying checksums: --ignore-missing don't fail or report status for missing files --quiet don't print OK for each successfully verified file --status don't output anything, status code shows success --strict exit non-zero for improperly formatted checksum lines -w, --warn warn about improperly formatted checksum lines --help display this help and exit --version output version information and exit The sums are computed as described in FIPS-180-2. When checking, the input should be a former output of this program. The default mode is to print a line with: checksum, a space, a character indicating input mode ('*' for binary, ' ' for text or where binary is insignificant), and name for each FILE. Note: There is no difference between binary mode and text mode on GNU systems.
# sha512sum > Calculate SHA512 cryptographic checksums. More information: > https://www.gnu.org/software/coreutils/manual/html_node/sha2-utilities.html. * Calculate the SHA512 checksum for one or more files: `sha512sum {{path/to/file1 path/to/file2 ...}}` * Calculate and save the list of SHA512 checksums to a file: `sha512sum {{path/to/file1 path/to/file2 ...}} > {{path/to/file.sha512}}` * Calculate a SHA512 checksum from `stdin`: `{{command}} | sha512sum` * Read a file of SHA512 sums and filenames and verify all files have matching checksums: `sha512sum --check {{path/to/file.sha512}}` * Only show a message for missing files or when verification fails: `sha512sum --check --quiet {{path/to/file.sha512}}` * Only show a message when verification fails, ignoring missing files: `sha512sum --ignore-missing --check --quiet {{path/to/file.sha512}}`
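A typical create-then-verify round trip; the file names are illustrative:

```sh
#!/bin/sh
printf 'payload\n' > release.bin

# Record the checksum, then verify it later (e.g. after a download).
sha512sum release.bin > SHA512SUMS
sha512sum --check SHA512SUMS          # release.bin: OK

# Any modification makes verification fail with non-zero status.
printf 'tampered\n' > release.bin
sha512sum --check SHA512SUMS || echo "verification failed"
```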
dpkg-deb
dpkg-deb packs, unpacks and provides information about Debian archives. Use dpkg to install and remove packages from your system. You can also invoke dpkg-deb by calling dpkg with whatever options you want to pass to dpkg-deb. dpkg will spot that you wanted dpkg-deb and run it for you. For most commands taking an input archive argument, the archive can be read from standard input if the archive name is given as a single minus character («-»); otherwise lack of support will be documented in their respective command description. --showformat=format This option is used to specify the format of the output --show will produce. The format is a string that will be output for each package listed. The string may reference any status field using the “${field-name}” form; a list of the valid fields can be easily produced using -I on the same package. A complete explanation of the formatting options (including escape sequences and field tabbing) can be found in the explanation of the --showformat option in dpkg-query(1). The default for this field is “${Package}\t${Version}\n”. -zcompress-level Specify which compression level to use on the compressor backend, when building a package (default is 9 for gzip, 6 for xz, 3 for zstd). The accepted values are compressor specific. For gzip, from 0-9 with 0 being mapped to compressor none. For xz from 0-9. For zstd from 0-22, with levels from 20 to 22 enabling its ultra mode. Before dpkg 1.16.2 level 0 was equivalent to compressor none for all compressors. -Scompress-strategy Specify which compression strategy to use on the compressor backend, when building a package (since dpkg 1.16.2). Allowed values are none (since dpkg 1.16.4), filtered, huffman, rle and fixed for gzip (since dpkg 1.17.0) and extreme for xz. -Zcompress-type Specify which compression type to use when building a package. Allowed values are gzip, xz (since dpkg 1.15.6), zstd (since dpkg 1.21.18) and none (default is xz). --[no-]uniform-compression Specify that the same compression parameters should be used for all archive members (i.e. control.tar and data.tar; since dpkg 1.17.6). Otherwise only the data.tar member will use those parameters. The only supported compression types allowed to be uniformly used are none, gzip, xz and zstd. The --no-uniform-compression option disables uniform compression (since dpkg 1.19.0). Uniform compression is the default (since dpkg 1.19.0). --threads-max=threads Sets the maximum number of threads allowed for compressors that support multi-threaded operations (since dpkg 1.21.9). --root-owner-group Set the owner and group for each entry in the filesystem tree data to root with id 0 (since dpkg 1.19.0). Note: This option can be useful for rootless builds (see rootless-builds.txt), but should not be used when the entries have an owner or group that is not root. Support for these will be added later in the form of a meta manifest. --deb-format=format Set the archive format version used when building (since dpkg 1.17.0). Allowed values are 2.0 for the new format, and 0.939000 for the old one (default is 2.0). The old archive format is less easily parsed by non-Debian tools and is now obsolete; its only use is when building packages to be parsed by versions of dpkg older than 0.93.76 (September 1995), which was released as i386 a.out only. --nocheck Inhibits dpkg-deb --build's usual checks on the proposed contents of an archive. You can build any archive you want, no matter how broken, this way. -v, --verbose Enables verbose output (since dpkg 1.16.1).
This currently only affects --extract, making it behave like --vextract. -D, --debug Enables debugging output. This is not very interesting.
# dpkg-deb > Pack, unpack and provide information about Debian archives. More > information: https://manpages.debian.org/latest/dpkg/dpkg-deb.html. * Display information about a package: `dpkg-deb --info {{path/to/file.deb}}` * Display the package's name and version on one line: `dpkg-deb --show {{path/to/file.deb}}` * List the package's contents: `dpkg-deb --contents {{path/to/file.deb}}` * Extract package's contents into a directory: `dpkg-deb --extract {{path/to/file.deb}} {{path/to/directory}}` * Create a package from a specified directory: `dpkg-deb --build {{path/to/directory}}`
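A minimal sketch of building and inspecting a package; the package name and control fields are placeholders, and a real control file needs values appropriate to your package:

```sh
#!/bin/sh
mkdir -p demo-pkg/DEBIAN demo-pkg/usr/local/bin
printf '#!/bin/sh\necho hello\n' > demo-pkg/usr/local/bin/demo-hello
chmod 755 demo-pkg/usr/local/bin/demo-hello

cat > demo-pkg/DEBIAN/control <<'EOF'
Package: demo-pkg
Version: 1.0
Architecture: all
Maintainer: Example Maintainer <demo@example.org>
Description: Demonstration package built with dpkg-deb
EOF

# --root-owner-group lets a non-root user produce root-owned entries.
dpkg-deb --root-owner-group --build demo-pkg   # writes demo-pkg.deb
dpkg-deb --info demo-pkg.deb
dpkg-deb --contents demo-pkg.deb
```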
updatedb
This manual page documents the GNU version of updatedb, which updates file name databases used by GNU locate. The file name databases contain lists of files that were in particular directory trees when the databases were last updated. The file name of the default database is determined when locate and updatedb are configured and installed. The frequency with which the databases are updated and the directories for which they contain entries depend on how often updatedb is run, and with which arguments. In networked environments, it often makes sense to build a database at the root of each filesystem, containing the entries for that filesystem. updatedb is then run for each filesystem on the fileserver where that filesystem is on a local disk, to prevent thrashing the network. Users can select which databases locate searches using an environment variable or command line option; see locate(1). Databases cannot be concatenated together. The LOCATE02 database format was introduced in GNU findutils version 4.0 in order to allow machines with different byte orderings to share the databases. GNU locate can read both the old and LOCATE02 database formats, though support for the old pre-4.0 database format will be removed shortly. --findoptions='-option1 -option2...' Global options to pass on to find. The environment variable FINDOPTIONS also sets this value. Default is none. --localpaths='path1 path2...' Non-network directories to put in the database. Default is /. --netpaths='path1 path2...' Network (NFS, AFS, RFS, etc.) directories to put in the database. The environment variable NETPATHS also sets this value. Default is none. --prunepaths='path1 path2...' Directories to not put in the database, which would otherwise be. Remove any trailing slashes from the path names, otherwise updatedb won't recognise the paths you want to omit (because it uses them as regular expression patterns). The environment variable PRUNEPATHS also sets this value. Default is /tmp /usr/tmp /var/tmp /afs. --prunefs='path...' File systems to not put in the database, which would otherwise be. Note that files are pruned when a file system is reached; any file system mounted under an undesired file system will be ignored. The environment variable PRUNEFS also sets this value. Default is nfs NFS proc. --output=dbfile The database file to build. Default is system-dependent. In Debian GNU/Linux, the default is /var/cache/locate/locatedb. --localuser=user The user to search non-network directories as, using su(1). Default is to search the non-network directories as the current user. You can also use the environment variable LOCALUSER to set this user. --netuser=user The user to search network directories as, using su(1). Default is daemon. You can also use the environment variable NETUSER to set this user. --dbformat=F Create the database in format F. The default format is called LOCATE02. Alternatively the slocate format is also supported. When the slocate format is in use, the database produced is marked as having security level 1. If you want to build a system-wide slocate database, you may want to run updatedb as root. --version Print the version number of updatedb and exit. --help Print a summary of the options to updatedb and exit.
# updatedb > Create or update the database used by `locate`. It is usually run daily by > cron. More information: https://manned.org/updatedb. * Refresh database content: `sudo updatedb` * Display file names as soon as they are found: `sudo updatedb --verbose`
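A sketch of building a private database for one directory tree and searching it; the paths are illustrative, and `locate -d` selects the database as mentioned above:

```sh
#!/bin/sh
# Index only $HOME/projects into a private database file.
updatedb --localpaths="$HOME/projects" --output="$HOME/.projects.db"

# Search that database instead of the system-wide one.
locate -d "$HOME/.projects.db" README
```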
sort
The sort utility shall perform one of the following functions: 1. Sort lines of all the named files together and write the result to the specified output. 2. Merge lines of all the named (presorted) files together and write the result to the specified output. 3. Check that a single input file is correctly presorted. Comparisons shall be based on one or more sort keys extracted from each line of input (or, if no sort keys are specified, the entire line up to, but not including, the terminating <newline>), and shall be performed using the collating sequence of the current locale. If this collating sequence does not have a total ordering of all characters (see the Base Definitions volume of POSIX.1‐2017, Section 7.3.2, LC_COLLATE), any lines of input that collate equally should be further compared byte-by-byte using the collating sequence for the POSIX locale. The sort utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines, except for Guideline 9, and the -k keydef option should follow the -b, -d, -f, -i, -n, and -r options. In addition, '+' may be recognized as an option delimiter as well as '-'. The following options shall be supported: -c Check that the single input file is ordered as specified by the arguments and the collating sequence of the current locale. Output shall not be sent to standard output. The exit code shall indicate whether or not disorder was detected or an error occurred. If disorder (or, with -u, a duplicate key) is detected, a warning message shall be sent to standard error indicating where the disorder or duplicate key was found. -C Same as -c, except that a warning message shall not be sent to standard error if disorder or, with -u, a duplicate key is detected. -m Merge only; the input file shall be assumed to be already sorted. -o output Specify the name of an output file to be used instead of the standard output. This file can be the same as one of the input files. -u Unique: suppress all but one in each set of lines having equal keys. If used with the -c option, check that there are no lines with duplicate keys, in addition to checking that the input file is sorted. The following options shall override the default ordering rules. When ordering options appear independent of any key field specifications, the requested field ordering rules shall be applied globally to all sort keys. When attached to a specific key (see -k), the specified ordering options shall override all global ordering options for that key. -d Specify that only <blank> characters and alphanumeric characters, according to the current setting of LC_CTYPE, shall be significant in comparisons. The behavior is undefined for a sort key to which -i or -n also applies. -f Consider all lowercase characters that have uppercase equivalents, according to the current setting of LC_CTYPE, to be the uppercase equivalent for the purposes of comparison. -i Ignore all characters that are non-printable, according to the current setting of LC_CTYPE. The behavior is undefined for a sort key for which -n also applies. -n Restrict the sort key to an initial numeric string, consisting of optional <blank> characters, optional <hyphen-minus> character, and zero or more digits with an optional radix character and thousands separators (as defined in the current locale), which shall be sorted by arithmetic value. An empty digit string shall be treated as zero. Leading zeros and signs on zeros shall not affect ordering. -r Reverse the sense of comparisons. 
The treatment of field separators can be altered using the options: -b Ignore leading <blank> characters when determining the starting and ending positions of a restricted sort key. If the -b option is specified before the first -k option, it shall be applied to all -k options. Otherwise, the -b option can be attached independently to each -k field_start or field_end option-argument (see below). -t char Use char as the field separator character; char shall not be considered to be part of a field (although it can be included in a sort key). Each occurrence of char shall be significant (for example, <char><char> delimits an empty field). If -t is not specified, <blank> characters shall be used as default field separators; each maximal non-empty sequence of <blank> characters that follows a non-<blank> shall be a field separator. Sort keys can be specified using the options: -k keydef The keydef argument is a restricted sort key field definition. The format of this definition is: field_start[type][,field_end[type]] where field_start and field_end define a key field restricted to a portion of the line (see the EXTENDED DESCRIPTION section), and type is one or more modifiers from the list of characters 'b', 'd', 'f', 'i', 'n', 'r'. The 'b' modifier shall behave like the -b option, but shall apply only to the field_start or field_end to which it is attached. The other modifiers shall behave like the corresponding options, but shall apply only to the key field to which they are attached; they shall have this effect if specified with field_start, field_end, or both. If any modifier is attached to a field_start or to a field_end, no option shall apply to either. Implementations shall support at least nine occurrences of the -k option, which shall be significant in command line order. If no -k option is specified, a default sort key of the entire line shall be used. When there are multiple key fields, later keys shall be compared only after all earlier keys compare equal. Except when the -u option is specified, lines that otherwise compare equal shall be ordered as if none of the options -d, -f, -i, -n, or -k were present (but with -r still in effect, if it was specified) and with all bytes in the lines significant to the comparison. The order in which lines that still compare equal are written is unspecified.
# sort > Sort lines of text files. More information: > https://www.gnu.org/software/coreutils/sort. * Sort a file in ascending order: `sort {{path/to/file}}` * Sort a file in descending order: `sort --reverse {{path/to/file}}` * Sort a file in a case-insensitive way: `sort --ignore-case {{path/to/file}}` * Sort a file using numeric rather than alphabetic order: `sort --numeric-sort {{path/to/file}}` * Sort `/etc/passwd` by the 3rd field of each line numerically, using ":" as a field separator: `sort --field-separator={{:}} --key={{3n}} {{/etc/passwd}}` * Sort a file preserving only unique lines: `sort --unique {{path/to/file}}` * Sort a file, printing the output to the specified output file (can be used to sort a file in-place): `sort --output={{path/to/file}} {{path/to/file}}` * Sort numbers with exponents: `sort --general-numeric-sort {{path/to/file}}`
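A sketch of the -k keydef grammar (field_start[type][,field_end[type]]) described above, using /etc/passwd as input:

```sh
#!/bin/sh
# Key restricted to field 3 only, compared numerically (the "n"
# modifier applies just to this key, like a per-key -n).
sort -t : -k 3,3n /etc/passwd

# Two keys: primary numeric sort on field 4, ties broken by field 1.
sort -t : -k 4,4n -k 1,1 /etc/passwd
```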
lex
The lex utility shall generate C programs to be used in lexical processing of character input, and that can be used as an interface to yacc. The C programs shall be generated from lex source code and conform to the ISO C standard, without depending on any undefined, unspecified, or implementation-defined behavior, except in cases where the code is copied directly from the supplied source, or in cases that are documented by the implementation. Usually, the lex utility shall write the program it generates to the file lex.yy.c; the state of this file is unspecified if lex exits with a non-zero exit status. See the EXTENDED DESCRIPTION section for a complete description of the lex input language. The lex utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines, except for Guideline 9. The following options shall be supported: -n Suppress the summary of statistics usually written with the -v option. If no table sizes are specified in the lex source code and the -v option is not specified, then -n is implied. -t Write the resulting program to standard output instead of lex.yy.c. -v Write a summary of lex statistics to the standard output. (See the discussion of lex table sizes in Definitions in lex.) If the -t option is specified and -n is not specified, this report shall be written to standard error. If table sizes are specified in the lex source code, and if the -n option is not specified, the -v option may be enabled.
# lex > Lexical analyzer generator. Given the specification for a lexical analyzer, > generates C code implementing it. More information: > https://keith.github.io/xcode-man-pages/lex.1.html. * Generate an analyzer from a Lex file: `lex {{analyzer.l}}` * Specify the output file: `lex {{analyzer.l}} --outfile {{analyzer.c}}` * Compile a C file generated by Lex: `cc {{path/to/lex.yy.c}} --output {{executable}}`
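A minimal end-to-end sketch: a tiny lex specification, the generated lex.yy.c, and a compile-and-run. The file name and rules are illustrative; yywrap() and main() are supplied in the user code section so no support library is needed:

```sh
#!/bin/sh
cat > count.l <<'EOF'
%{
#include <stdio.h>
int lines = 0;
%}
%%
\n      { lines++; }
.       { /* ignore all other characters */ }
%%
int yywrap(void) { return 1; }
int main(void) { yylex(); printf("%d lines\n", lines); return 0; }
EOF

lex count.l               # writes the scanner to lex.yy.c
cc lex.yy.c -o count      # compile the generated C program
./count < count.l         # run it on its own source
```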
ulimit
The ulimit utility shall set or report the file-size writing limit imposed on files written by the shell and its child processes (files of any size may be read). Only a process with appropriate privileges can increase the limit. The ulimit utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following option shall be supported: -f Set (or report, if no blocks operand is present), the file size limit in blocks. The -f option shall also be the default case.
# ulimit > Get and set user limits. More information: https://manned.org/ulimit. * Get the properties of all the user limits: `ulimit -a` * Get hard limit for the number of simultaneously opened files: `ulimit -H -n` * Get soft limit for the number of simultaneously opened files: `ulimit -S -n` * Set max per-user process limit: `ulimit -u 30`
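A sketch of lowering the file-size limit in a subshell so the invoking shell keeps its own limit. POSIX specifies 512-byte blocks for -f, though some shells scale the value differently, so the exact byte count may vary:

```sh
#!/bin/sh
(
  ulimit -f 2      # at most 2 blocks; only privileged processes can raise it
  ulimit -f        # report the new limit: 2

  # Writing past the limit truncates the file or kills the writer.
  head -c 1048576 /dev/zero > /tmp/too-big 2>/dev/null \
    || echo "write stopped at the limit"
  wc -c < /tmp/too-big
)
```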
chfn
chfn is used to change your finger information. This information is stored in the /etc/passwd file, and is displayed by the finger program. The Linux finger command will display four pieces of information that can be changed by chfn: your real name, your work room and phone, and your home phone. Any of the four pieces of information can be specified on the command line. If no information is given on the command line, chfn enters interactive mode. In interactive mode, chfn will prompt for each field. At a prompt, you can enter the new information, or just press return to leave the field unchanged. Enter the keyword "none" to make the field blank. chfn supports non-local entries (kerberos, LDAP, etc.) if linked with libuser, otherwise use ypchfn(1), lchfn(1) or any other implementation for non-local entries. -f, --full-name full-name Specify your real name. -o, --office office Specify your office room number. -p, --office-phone office-phone Specify your office phone number. -h, --home-phone home-phone Specify your home phone number. -u, --help Display help text and exit. -V, --version Print version and exit. The short option -V has been used since version 2.39; older versions use the deprecated -v.
# chfn > Update `finger` info for a user. More information: https://manned.org/chfn. * Update a user's "Name" field in the output of `finger`: `chfn -f {{new_display_name}} {{username}}` * Update a user's "Office Room Number" field for the output of `finger`: `chfn -o {{new_office_room_number}} {{username}}` * Update a user's "Office Phone Number" field for the output of `finger`: `chfn -p {{new_office_telephone_number}} {{username}}` * Update a user's "Home Phone Number" field for the output of `finger`: `chfn -h {{new_home_telephone_number}} {{username}}`
nice
Run COMMAND with an adjusted niceness, which affects process scheduling. With no COMMAND, print the current niceness. Niceness values range from -20 (most favorable to the process) to 19 (least favorable to the process). Mandatory arguments to long options are mandatory for short options too. -n, --adjustment=N add integer N to the niceness (default 10) --help display this help and exit --version output version information and exit NOTE: your shell may have its own version of nice, which usually supersedes the version described here. Please refer to your shell's documentation for details about the options it supports. Exit status: 125 if the nice command itself fails 126 if COMMAND is found but cannot be invoked 127 if COMMAND cannot be found - the exit status of COMMAND otherwise
# nice > Execute a program with a custom scheduling priority (niceness). Niceness > values range from -20 (the highest priority) to 19 (the lowest). More > information: https://www.gnu.org/software/coreutils/nice. * Launch a program with altered priority: `nice -n {{niceness_value}} {{command}}`
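The defaults above can be observed directly, since nice with no COMMAND prints the current niceness:

```sh
#!/bin/sh
nice              # current niceness, typically 0
nice nice         # default adjustment is 10, so this prints 10
nice -n 5 nice    # explicit adjustment: prints 5
nice -n 5 nice -n 3 nice    # adjustments accumulate: prints 8
```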
tail
The tail utility shall copy its input file to the standard output beginning at a designated place. Copying shall begin at the point in the file indicated by the -c number or -n number options. The option-argument number shall be counted in units of lines or bytes, according to the options -n and -c. Both line and byte counts start from 1. Tails relative to the end of the file may be saved in an internal buffer, and thus may be limited in length. Such a buffer, if any, shall be no smaller than {LINE_MAX}*10 bytes. The tail utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines, except that '+' may be recognized as an option delimiter as well as '-'. The following options shall be supported: -c number The application shall ensure that the number option-argument is a decimal integer, optionally including a sign. The sign shall affect the location in the file, measured in bytes, to begin the copying: ┌─────┬────────────────────────────────────────┐ │Sign │ Copying Starts │ ├─────┼────────────────────────────────────────┤ │ + │ Relative to the beginning of the file. │ │ - │ Relative to the end of the file. │ │none │ Relative to the end of the file. │ └─────┴────────────────────────────────────────┘ The application shall ensure that if the sign of the number option-argument is '+', the number option-argument is a non-zero decimal integer. The origin for counting shall be 1; that is, -c +1 represents the first byte of the file, -c -1 the last. -f If the input file is a regular file or if the file operand specifies a FIFO, do not terminate after the last line of the input file has been copied, but read and copy further bytes from the input file when they become available. If no file operand is specified and standard input is a pipe or FIFO, the -f option shall be ignored. If the input file is not a FIFO, pipe, or regular file, it is unspecified whether or not the -f option shall be ignored. -n number This option shall be equivalent to -c number, except the starting location in the file shall be measured in lines instead of bytes. The origin for counting shall be 1; that is, -n +1 represents the first line of the file, -n -1 the last. If neither -c nor -n is specified, -n 10 shall be assumed.
# tail > Display the last part of a file. See also: `head`. More information: > https://manned.org/man/freebsd-13.0/tail.1. * Show last 'count' lines in file: `tail -n {{8}} {{path/to/file}}` * Print a file from a specific line number: `tail -n +{{8}} {{path/to/file}}` * Print a specific count of bytes from the end of a given file: `tail -c {{8}} {{path/to/file}}` * Print the last lines of a given file and keep reading file until `Ctrl + C`: `tail -f {{path/to/file}}` * Keep reading file until `Ctrl + C`, even if the file is inaccessible: `tail -F {{path/to/file}}` * Show last 'count' lines in 'file' and refresh every 'seconds' seconds: `tail -n {{8}} -s {{10}} -f {{path/to/file}}`
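A sketch of the sign conventions from the table above, using an illustrative ten-line file:

```sh
#!/bin/sh
seq 1 10 > /tmp/nums.txt

tail -n 3 /tmp/nums.txt     # last three lines: 8 9 10
tail -n +8 /tmp/nums.txt    # from line 8 to the end: 8 9 10
tail -c +1 /tmp/nums.txt    # from byte 1, i.e. the whole file
tail -c 5 /tmp/nums.txt     # last five bytes: "9", newline, "1", "0", newline
```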
ctags
The ctags utility shall be provided on systems that support the Software Development Utilities option, and either or both of the C-Language Development Utilities option and FORTRAN Development Utilities option. On other systems, it is optional. The ctags utility shall write a tagsfile or an index of objects from C-language or FORTRAN source files specified by the pathname operands. The tagsfile shall list the locators of language-specific objects within the source files. A locator consists of a name, pathname, and either a search pattern or a line number that can be used in searching for the object definition. The objects that shall be recognized are specified in the EXTENDED DESCRIPTION section. The ctags utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -a Append to tagsfile. -f tagsfile Write the object locator lists into tagsfile instead of the default file named tags in the current directory. -x Produce a list of object names, the line number, and filename in which each is defined, as well as the text of that line, and write this to the standard output. A tagsfile shall not be created when -x is specified.
# ctags > Generates an index (or tag) file of language objects found in source files > for many popular programming languages. More information: https://ctags.io/. * Generate tags for a single file, and output them to a file named "tags" in the current directory, overwriting the file if it exists: `ctags {{path/to/file}}` * Generate tags for all files in the current directory, and output them to a specific file, overwriting the file if it exists: `ctags -f {{path/to/file}} *` * Generate tags for all files in the current directory and all subdirectories: `ctags --recurse` * Generate tags for a single file, and output them with start line number and end line number in JSON format: `ctags --fields=+ne --output-format=json {{path/to/file}}`
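A sketch of the -x index described above, on a throwaway C file (the names are illustrative):

```sh
#!/bin/sh
cat > demo.c <<'EOF'
int add(int a, int b) { return a + b; }
int main(void) { return add(1, 2); }
EOF

# -x writes "name line-number file text-of-line" to standard output
# instead of creating a tags file.
ctags -x demo.c
```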
mkdir
The mkdir utility shall create the directories specified by the operands, in the order specified. For each dir operand, the mkdir utility shall perform actions equivalent to the mkdir() function defined in the System Interfaces volume of POSIX.1‐2017, called with the following arguments: 1. The dir operand is used as the path argument. 2. The value of the bitwise-inclusive OR of S_IRWXU, S_IRWXG, and S_IRWXO is used as the mode argument. (If the -m option is specified, the value of the mkdir() mode argument is unspecified, but the directory shall at no time have permissions less restrictive than the -m mode option-argument.) The mkdir utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -m mode Set the file permission bits of the newly-created directory to the specified mode value. The mode option-argument shall be the same as the mode operand defined for the chmod utility. In the symbolic_mode strings, the op characters '+' and '-' shall be interpreted relative to an assumed initial mode of a=rwx; '+' shall add permissions to the default mode, '-' shall delete permissions from the default mode. -p Create any missing intermediate pathname components. For each dir operand that does not name an existing directory, before performing the actions described in the DESCRIPTION above, the mkdir utility shall create any pathname components of the path prefix of dir that do not name an existing directory by performing actions equivalent to first calling the mkdir() function with the following arguments: 1. A pathname naming the missing pathname component, ending with a trailing <slash> character, as the path argument 2. The value zero as the mode argument and then calling the chmod() function with the following arguments: 1. The same path argument as in the mkdir() call 2. The value (S_IWUSR|S_IXUSR|~filemask)&0777 as the mode argument, where filemask is the file mode creation mask of the process (see the System Interfaces volume of POSIX.1‐2017, umask(3p)) Each dir operand that names an existing directory shall be ignored without error.
# mkdir > Create directories and set their permissions. More information: > https://www.gnu.org/software/coreutils/mkdir. * Create specific directories: `mkdir {{path/to/directory1 path/to/directory2 ...}}` * Create specific directories and their [p]arents if needed: `mkdir -p {{path/to/directory1 path/to/directory2 ...}}` * Create directories with specific permissions: `mkdir -m {{rwxrw-r--}} {{path/to/directory1 path/to/directory2 ...}}`
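A sketch of -p and -m; the paths are illustrative. Note the symbolic mode is applied against an assumed initial mode of a=rwx, as described above:

```sh
#!/bin/sh
# -p creates missing intermediate components, and existing
# directories are ignored without error.
mkdir -p /tmp/demo/a/b/c
mkdir -p /tmp/demo/a/b/c     # second run: no error

# -m sets the final directory's permission bits; go-w removes write
# permission for group and others from the assumed a=rwx.
mkdir -m go-w /tmp/demo/shared
ls -ld /tmp/demo/shared      # drwxr-xr-x
```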
test
The test utility shall evaluate the expression and indicate the result of the evaluation by its exit status. An exit status of zero indicates that the expression evaluated as true and an exit status of 1 indicates that the expression evaluated as false. In the second form of the utility, where the utility name used is [ rather than test, the application shall ensure that the closing square bracket is a separate argument. The test and [ utilities may be implemented as a single linked utility which examines the basename of the zeroth command line argument to determine whether to behave as the test or [ variant. Applications using the exec() family of functions to execute these utilities shall ensure that the argument passed in arg0 or argv[0] is '[' when executing the [ utility and has a basename of "test" when executing the test utility. The test utility shall not recognize the "--" argument in the manner specified by Guideline 10 in the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. No options shall be supported.
# test > Check file types and compare values. Returns 0 if the condition evaluates to > true, 1 if it evaluates to false. More information: > https://www.gnu.org/software/coreutils/test. * Test if a given variable is equal to a given string: `test "{{$MY_VAR}}" == "{{/bin/zsh}}"` * Test if a given variable is empty: `test -z "{{$GIT_BRANCH}}"` * Test if a file exists: `test -f "{{path/to/file_or_directory}}"` * Test if a directory does not exist: `test ! -d "{{path/to/directory}}"` * If A is true, then do B, or C in the case of an error (note that C may run even when A succeeds, if B fails): `test {{condition}} && {{echo "true"}} || {{echo "false"}}`
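Since the result is reported purely through the exit status, test plugs directly into shell conditionals; a minimal sketch:

```sh
#!/bin/sh
if test -d /tmp; then
  echo "/tmp is a directory"
fi

# The [ form needs the closing bracket as its own argument.
if [ -d /tmp ]; then
  echo "same check, bracket form"
fi

# The exit status can also be inspected directly.
test -e /nonexistent; echo "exit status: $?"    # 1
```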
uptime
Print the current time, the length of time the system has been up, the number of users on the system, and the average number of jobs in the run queue over the last 1, 5 and 15 minutes. Processes in an uninterruptible sleep state also contribute to the load average. If FILE is not specified, use /var/run/utmp. /var/log/wtmp as FILE is common. --help display this help and exit --version output version information and exit
# uptime > Tell how long the system has been running and other information. More > information: https://ss64.com/osx/uptime.html. * Print current time, uptime, number of logged-in users and other information: `uptime`
sha384sum
Print or check SHA384 (384-bit) checksums. With no FILE, or when FILE is -, read standard input. -b, --binary read in binary mode -c, --check read checksums from the FILEs and check them --tag create a BSD-style checksum -t, --text read in text mode (default) -z, --zero end each output line with NUL, not newline, and disable file name escaping The following five options are useful only when verifying checksums: --ignore-missing don't fail or report status for missing files --quiet don't print OK for each successfully verified file --status don't output anything, status code shows success --strict exit non-zero for improperly formatted checksum lines -w, --warn warn about improperly formatted checksum lines --help display this help and exit --version output version information and exit The sums are computed as described in FIPS-180-2. When checking, the input should be a former output of this program. The default mode is to print a line with: checksum, a space, a character indicating input mode ('*' for binary, ' ' for text or where binary is insignificant), and name for each FILE. Note: There is no difference between binary mode and text mode on GNU systems.
# sha384sum > Calculate SHA384 cryptographic checksums. More information: > https://www.gnu.org/software/coreutils/manual/html_node/sha2-utilities.html. * Calculate the SHA384 checksum for one or more files: `sha384sum {{path/to/file1 path/to/file2 ...}}` * Calculate and save the list of SHA384 checksums to a file: `sha384sum {{path/to/file1 path/to/file2 ...}} > {{path/to/file.sha384}}` * Calculate a SHA384 checksum from `stdin`: `{{command}} | sha384sum` * Read a file of SHA384 sums and filenames and verify all files have matching checksums: `sha384sum --check {{path/to/file.sha384}}` * Only show a message for missing files or when verification fails: `sha384sum --check --quiet {{path/to/file.sha384}}` * Only show a message when verification fails, ignoring missing files: `sha384sum --ignore-missing --check --quiet {{path/to/file.sha384}}`
file
This manual page documents version 5.44 of the file command. file tests each argument in an attempt to classify it. There are three sets of tests, performed in this order: filesystem tests, magic tests, and language tests. The first test that succeeds causes the file type to be printed. The type printed will usually contain one of the words text (the file contains only printing characters and a few common control characters and is probably safe to read on an ASCII terminal), executable (the file contains the result of compiling a program in a form understandable to some UNIX kernel or another), or data meaning anything else (data is usually “binary” or non-printable). Exceptions are well-known file formats (core files, tar archives) that are known to contain binary data. When modifying magic files or the program itself, make sure to preserve these keywords. Users depend on knowing that all the readable files in a directory have the word “text” printed. Don't do as Berkeley did and change “shell commands text” to “shell script”. The filesystem tests are based on examining the return from a stat(2) system call. The program checks to see if the file is empty, or if it's some sort of special file. Any known file types appropriate to the system you are running on (sockets, symbolic links, or named pipes (FIFOs) on those systems that implement them) are intuited if they are defined in the system header file <sys/stat.h>. The magic tests are used to check for files with data in particular fixed formats. The canonical example of this is a binary executable (compiled program) a.out file, whose format is defined in <elf.h>, <a.out.h> and possibly <exec.h> in the standard include directory. These files have a “magic number” stored in a particular place near the beginning of the file that tells the UNIX operating system that the file is a binary executable, and which of several types thereof. The concept of a “magic number” has been applied by extension to data files. Any file with some invariant identifier at a small fixed offset into the file can usually be described in this way. The information identifying these files is read from the compiled magic file /usr/local/share/misc/magic.mgc, or the files in the directory /usr/local/share/misc/magic if the compiled file does not exist. In addition, if $HOME/.magic.mgc or $HOME/.magic exists, it will be used in preference to the system magic files. If a file does not match any of the entries in the magic file, it is examined to see if it seems to be a text file. ASCII, ISO-8859-x, non-ISO 8-bit extended-ASCII character sets (such as those used on Macintosh and IBM PC systems), UTF-8-encoded Unicode, UTF-16-encoded Unicode, and EBCDIC character sets can be distinguished by the different ranges and sequences of bytes that constitute printable text in each set. If a file passes any of these tests, its character set is reported. ASCII, ISO-8859-x, UTF-8, and extended-ASCII files are identified as “text” because they will be mostly readable on nearly any terminal; UTF-16 and EBCDIC are only “character data” because, while they contain text, it is text that will require translation before it can be read. In addition, file will attempt to determine other characteristics of text-type files. If the lines of a file are terminated by CR, CRLF, or NEL, instead of the Unix-standard LF, this will be reported. Files that contain embedded escape sequences or overstriking will also be identified. 
Once file has determined the character set used in a text-type file, it will attempt to determine in what language the file is written. The language tests look for particular strings (cf. <names.h>) that can appear anywhere in the first few blocks of a file. For example, the keyword .br indicates that the file is most likely a troff(1) input file, just as the keyword struct indicates a C program. These tests are less reliable than the previous two groups, so they are performed last. The language test routines also test for some miscellany (such as tar(1) archives, JSON files). Any file that cannot be identified as having been written in any of the character sets listed above is simply said to be “data”. --apple Causes the file command to output the file type and creator code as used by older MacOS versions. The code consists of eight letters, the first four describing the file type, the last four the creator. This option works properly only for file formats that have the apple-style output defined. -b, --brief Do not prepend filenames to output lines (brief mode). -C, --compile Write a magic.mgc output file that contains a pre-parsed version of the magic file or directory. -c, --checking-printout Cause a checking printout of the parsed form of the magic file. This is usually used in conjunction with the -m option to debug a new magic file before installing it. -d Prints internal debugging information to stderr. -E On filesystem errors (file not found etc), instead of handling the error as regular output (as POSIX mandates) and continuing, issue an error message and exit. -e, --exclude testname Exclude the test named in testname from the list of tests made to determine the file type. Valid test names are: apptype EMX application type (only on EMX). ascii Various types of text files (this test will try to guess the text encoding, irrespective of the setting of the ‘encoding’ option). encoding Different text encodings for soft magic tests. tokens Ignored for backwards compatibility. cdf Prints details of Compound Document Files. compress Checks for, and looks inside, compressed files. csv Checks Comma Separated Value files. elf Prints ELF file details, provided soft magic tests are enabled and the elf magic is found. json Examines JSON (RFC-7159) files by parsing them for compliance. soft Consults magic files. simh Examines SIMH tape files. tar Examines tar files by verifying the checksum of the 512 byte tar header. Excluding this test can provide more detailed content description by using the soft magic method. text A synonym for ‘ascii’. --exclude-quiet Like --exclude but ignore tests that file does not know about. This is intended for compatibility with older versions of file. --extension Print a slash-separated list of valid extensions for the file type found. -F, --separator separator Use the specified string as the separator between the filename and the file result returned. Defaults to ‘:’. -f, --files-from namefile Read the names of the files to be examined from namefile (one per line) before the argument list. Either namefile or at least one filename argument must be present; to test the standard input, use ‘-’ as a filename argument. Please note that namefile is unwrapped and the enclosed filenames are processed when this option is encountered and before any further options processing is done. This allows one to process multiple lists of files with different command line arguments on the same file invocation. 
Thus if you want to set the delimiter, you need to do it before you specify the list of files, like: “-F @ -f namefile”, instead of: “-f namefile -F @”. -h, --no-dereference This option causes symlinks not to be followed (on systems that support symbolic links). This is the default if the environment variable POSIXLY_CORRECT is not defined. -i, --mime Causes the file command to output mime type strings rather than the more traditional human readable ones. Thus it may say ‘text/plain; charset=us-ascii’ rather than “ASCII text”. --mime-type, --mime-encoding Like -i, but print only the specified element(s). -k, --keep-going Don't stop at the first match, keep going. Subsequent matches will have the string ‘\012- ’ prepended. (If you want a newline, see the -r option.) The magic pattern with the highest strength (see the -l option) comes first. -l, --list Shows a list of patterns and their strength sorted descending by magic(4) strength which is used for the matching (see also the -k option). -L, --dereference This option causes symlinks to be followed, as the like-named option in ls(1) (on systems that support symbolic links). This is the default if the environment variable POSIXLY_CORRECT is defined. -m, --magic-file magicfiles Specify an alternate list of files and directories containing magic. This can be a single item, or a colon-separated list. If a compiled magic file is found alongside a file or directory, it will be used instead. -N, --no-pad Don't pad filenames so that they align in the output. -n, --no-buffer Force stdout to be flushed after checking each file. This is only useful if checking a list of files. It is intended to be used by programs that want filetype output from a pipe. -p, --preserve-date On systems that support utime(3) or utimes(2), attempt to preserve the access time of files analyzed, to pretend that file never read them. -P, --parameter name=value Set various parameter limits:
  Name        Default  Explanation
  bytes       1M       max number of bytes to read from file
  elf_notes   256      max ELF notes processed
  elf_phnum   2K       max ELF program sections processed
  elf_shnum   32K      max ELF sections processed
  elf_shsize  128MB    max ELF section size processed
  encoding    65K      max number of bytes to determine encoding
  indir       50       recursion limit for indirect magic
  name        50       use count limit for name/use magic
  regex       8K       length limit for regex searches
-r, --raw Don't translate unprintable characters to \ooo. Normally file translates unprintable characters to their octal representation. -s, --special-files Normally, file only attempts to read and determine the type of argument files which stat(2) reports are ordinary files. This prevents problems, because reading special files may have peculiar consequences. Specifying the -s option causes file to also read argument files which are block or character special files. This is useful for determining the filesystem types of the data in raw disk partitions, which are block special files. This option also causes file to disregard the file size as reported by stat(2) since on some systems it reports a zero size for raw disk partitions. -S, --no-sandbox On systems where libseccomp (https://github.com/seccomp/libseccomp) is available, the -S option disables sandboxing which is enabled by default. This option is needed for file to execute external decompressing programs, i.e. when the -z option is specified and the built-in decompressors are not available. On systems where sandboxing is not available, this option has no effect. 
-v, --version Print the version of the program and exit. -z, --uncompress Try to look inside compressed files. -Z, --uncompress-noreport Try to look inside compressed files, but report information about the contents only, not the compression. -0, --print0 Output a null character ‘\0’ after the end of the filename. Useful when piping the output to cut(1). This does not affect the separator, which is still printed. If this option is repeated more than once, then file prints just the filename followed by a NUL followed by the description (or ERROR: text) followed by a second NUL for each entry. --help Print a help message and exit.
# file > Determine file type. More information: https://manned.org/file. * Give a description of the type of the specified file. Works fine for files with no file extension: `file {{path/to/file}}` * Look inside a zipped file and determine the file type(s) inside: `file -z {{foo.zip}}` * Allow file to work with special or device files: `file -s {{path/to/file}}` * Don't stop at first file type match; keep going until the end of the file: `file -k {{path/to/file}}` * Determine the MIME encoding type of a file: `file -i {{path/to/file}}`
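As a sketch of how the magic machinery described above fits together, the following builds and uses a tiny user magic file; the format name "MyFormat", the magic string "MYFM", and the file name mymagic are all hypothetical (see magic(5) for the entry syntax: offset, type, test, message):

    $ printf '0\tstring\tMYFM\tMyFormat data file\n' > mymagic
    $ printf 'MYFM payload' > sample
    $ file -m mymagic sample
    sample: MyFormat data file
    $ file -C -m mymagic    # pre-parse mymagic into mymagic.mgc, as described under -C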
rm
The rm utility shall remove the directory entry specified by each file argument. If either of the files dot or dot-dot are specified as the basename portion of an operand (that is, the final pathname component) or if an operand resolves to the root directory, rm shall write a diagnostic message to standard error and do nothing more with such operands. For each file the following steps shall be taken: 1. If the file does not exist: a. If the -f option is not specified, rm shall write a diagnostic message to standard error. b. Go on to any remaining files. 2. If file is of type directory, the following steps shall be taken: a. If neither the -R option nor the -r option is specified, rm shall write a diagnostic message to standard error, do nothing more with file, and go on to any remaining files. b. If file is an empty directory, rm may skip to step 2d. If the -f option is not specified, and either the permissions of file do not permit writing and the standard input is a terminal or the -i option is specified, rm shall write a prompt to standard error and read a line from the standard input. If the response is not affirmative, rm shall do nothing more with the current file and go on to any remaining files. c. For each entry contained in file, other than dot or dot- dot, the four steps listed here (1 to 4) shall be taken with the entry as if it were a file operand. The rm utility shall not traverse directories by following symbolic links into other parts of the hierarchy, but shall remove the links themselves. d. If the -i option is specified, rm shall write a prompt to standard error and read a line from the standard input. If the response is not affirmative, rm shall do nothing more with the current file, and go on to any remaining files. 3. If file is not of type directory, the -f option is not specified, and either the permissions of file do not permit writing and the standard input is a terminal or the -i option is specified, rm shall write a prompt to the standard error and read a line from the standard input. If the response is not affirmative, rm shall do nothing more with the current file and go on to any remaining files. 4. If the current file is a directory, rm shall perform actions equivalent to the rmdir() function defined in the System Interfaces volume of POSIX.1‐2017 called with a pathname of the current file used as the path argument. If the current file is not a directory, rm shall perform actions equivalent to the unlink() function defined in the System Interfaces volume of POSIX.1‐2017 called with a pathname of the current file used as the path argument. If this fails for any reason, rm shall write a diagnostic message to standard error, do nothing more with the current file, and go on to any remaining files. The rm utility shall be able to descend to arbitrary depths in a file hierarchy, and shall not fail due to path length limitations (unless an operand specified by the user exceeds system limitations). The rm utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -f Do not prompt for confirmation. Do not write diagnostic messages or modify the exit status in the case of no file operands, or in the case of operands that do not exist. Any previous occurrences of the -i option shall be ignored. -i Prompt for confirmation as described previously. Any previous occurrences of the -f option shall be ignored. -R Remove file hierarchies. See the DESCRIPTION. -r Equivalent to -R.
# rm > Remove files or directories. See also: `rmdir`. More information: > https://www.gnu.org/software/coreutils/rm. * Remove specific files: `rm {{path/to/file1 path/to/file2 ...}}` * Remove specific files ignoring nonexistent ones: `rm -f {{path/to/file1 path/to/file2 ...}}` * Remove specific files [i]nteractively prompting before each removal: `rm -i {{path/to/file1 path/to/file2 ...}}` * Remove specific files printing info about each removal: `rm -v {{path/to/file1 path/to/file2 ...}}` * Remove specific files and directories [r]ecursively: `rm -r {{path/to/file_or_directory1 path/to/file_or_directory2 ...}}`
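To illustrate step 3 of the algorithm above: for a write-protected file with a terminal on standard input, rm prompts even without -i, while -f suppresses both the prompt and any diagnostic. A minimal sketch (the prompt wording shown is GNU rm's; POSIX only requires some prompt on standard error):

    $ touch guarded && chmod a-w guarded
    $ rm guarded
    rm: remove write-protected regular empty file 'guarded'? n
    $ rm -f guarded    # no prompt, no diagnostic; the file is removed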
git-update-ref
Given two arguments, stores the <newvalue> in the <ref>, possibly dereferencing the symbolic refs. E.g. git update-ref HEAD <newvalue> updates the current branch head to the new object. Given three arguments, stores the <newvalue> in the <ref>, possibly dereferencing the symbolic refs, after verifying that the current value of the <ref> matches <oldvalue>. E.g. git update-ref refs/heads/master <newvalue> <oldvalue> updates the master branch head to <newvalue> only if its current value is <oldvalue>. You can specify 40 "0" or an empty string as <oldvalue> to make sure that the ref you are creating does not exist. It also allows a "ref" file to be a symbolic pointer to another ref file by starting with the four-byte header sequence of "ref:". More importantly, it allows the update of a ref file to follow these symbolic pointers, whether they are symlinks or these "regular file symbolic refs". It follows real symlinks only if they start with "refs/": otherwise it will just try to read them and update them as a regular file (i.e. it will allow the filesystem to follow them, but will overwrite such a symlink to somewhere else with a regular filename). If --no-deref is given, <ref> itself is overwritten, rather than the result of following the symbolic pointers. In general, using git update-ref HEAD "$head" should be a lot safer than doing echo "$head" > "$GIT_DIR/HEAD" both from a symlink following standpoint and an error checking standpoint. The "refs/" rule for symlinks means that symlinks that point to "outside" the tree are safe: they’ll be followed for reading but not for writing (so we’ll never write through a ref symlink to some other tree, if you have copied a whole archive by creating a symlink tree). With -d flag, it deletes the named <ref> after verifying it still contains <oldvalue>. With --stdin, update-ref reads instructions from standard input and performs all modifications together. Specify commands of the form: update SP <ref> SP <newvalue> [SP <oldvalue>] LF create SP <ref> SP <newvalue> LF delete SP <ref> [SP <oldvalue>] LF verify SP <ref> [SP <oldvalue>] LF option SP <opt> LF start LF prepare LF commit LF abort LF With --create-reflog, update-ref will create a reflog for each ref even if one would not ordinarily be created. Quote fields containing whitespace as if they were strings in C source code; i.e., surrounded by double-quotes and with backslash escapes. Use 40 "0" characters or the empty string to specify a zero value. To specify a missing value, omit the value and its preceding SP entirely. Alternatively, use -z to specify in NUL-terminated format, without quoting: update SP <ref> NUL <newvalue> NUL [<oldvalue>] NUL create SP <ref> NUL <newvalue> NUL delete SP <ref> NUL [<oldvalue>] NUL verify SP <ref> NUL [<oldvalue>] NUL option SP <opt> NUL start NUL prepare NUL commit NUL abort NUL In this format, use 40 "0" to specify a zero value, and use the empty string to specify a missing value. In either format, values can be specified in any form that Git recognizes as an object name. Commands in any other format or a repeated <ref> produce an error. Command meanings are: update Set <ref> to <newvalue> after verifying <oldvalue>, if given. Specify a zero <newvalue> to ensure the ref does not exist after the update and/or a zero <oldvalue> to make sure the ref does not exist before the update. create Create <ref> with <newvalue> after verifying it does not exist. The given <newvalue> may not be zero. delete Delete <ref> after verifying it exists with <oldvalue>, if given. 
If given, <oldvalue> may not be zero. verify Verify <ref> against <oldvalue> but do not change it. If <oldvalue> is zero or missing, the ref must not exist. option Modify behavior of the next command naming a <ref>. The only valid option is no-deref to avoid dereferencing a symbolic ref. start Start a transaction. In contrast to a non-transactional session, a transaction will automatically abort if the session ends without an explicit commit. This command may create a new empty transaction when the current one has been committed or aborted already. prepare Prepare to commit the transaction. This will create lock files for all queued reference updates. If one reference could not be locked, the transaction will be aborted. commit Commit all reference updates queued for the transaction, ending the transaction. abort Abort the transaction, releasing all locks if the transaction is in prepared state. If all <ref>s can be locked with matching <oldvalue>s simultaneously, all modifications are performed. Otherwise, no modifications are performed. Note that while each individual <ref> is updated or deleted atomically, a concurrent reader may still see a subset of the modifications.
# git update-ref > Git command for creating, updating, and deleting Git refs. More information: > https://git-scm.com/docs/git-update-ref. * Delete a ref, useful for soft resetting the first commit: `git update-ref -d {{HEAD}}` * Update ref with a message: `git update-ref -m {{message}} {{HEAD}} {{4e95e05}}`
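As a concrete sketch of the --stdin transaction format described above, the following atomically creates a ref pointing at the current HEAD; the ref name refs/heads/backup is illustrative:

    $ git update-ref --stdin <<EOF
    start
    create refs/heads/backup $(git rev-parse HEAD)
    commit
    EOF

Because create verifies that the ref does not already exist, running the same transaction a second time fails and no modifications are performed.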
localectl
localectl may be used to query and change the system locale and keyboard layout settings. It communicates with systemd-localed(8) to modify files such as /etc/locale.conf and /etc/vconsole.conf. The system locale controls the language settings of system services and of the UI before the user logs in, such as the display manager, as well as the default for users after login. The keyboard settings control the keyboard layout used on the text console and in the graphical UI before the user logs in, such as the display manager, as well as the default for users after login. Note that the changes performed using this tool might require the initrd to be rebuilt to take effect during early system boot. The initrd is not rebuilt automatically by localectl; this task has to be performed manually, usually using a tool like dracut(8). Note that systemd-firstboot(1) may be used to initialize the system locale for mounted (but not booted) system images. The following options are understood: --no-ask-password Do not query the user for authentication for privileged operations. --no-convert If set-keymap or set-x11-keymap is invoked and this option is passed, then the keymap will not be converted from the console to X11, or X11 to console, respectively. -H, --host= Execute the operation remotely. Specify a hostname, or a username and hostname separated by "@", to connect to. The hostname may optionally be suffixed by a port ssh is listening on, separated by ":", and then a container name, separated by "/", which connects directly to a specific container on the specified host. This will use SSH to talk to the remote machine manager instance. Container names may be enumerated with machinectl -H HOST. Put IPv6 addresses in brackets. -M, --machine= Execute operation on a local container. Specify a container name to connect to, optionally prefixed by a user name to connect as and a separating "@" character. If the special string ".host" is used in place of the container name, a connection to the local system is made (which is useful to connect to a specific user's user bus: "--user --machine=lennart@.host"). If the "@" syntax is not used, the connection is made as root user. If the "@" syntax is used either the left hand side or the right hand side may be omitted (but not both) in which case the local user name and ".host" are implied. -h, --help Print a short help text and exit. --version Print a short version string and exit. --no-pager Do not pipe output into a pager.
# localectl > Control the system locale and keyboard layout settings. More information: > https://www.freedesktop.org/software/systemd/man/localectl.html. * Show the current settings of the system locale and keyboard mapping: `localectl` * List available locales: `localectl list-locales` * Set a system locale variable: `localectl set-locale {{LANG}}={{en_US.UTF-8}}` * List available keymaps: `localectl list-keymaps` * Set the system keyboard mapping for the console and X11: `localectl set-keymap {{us}}`
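For example, combining the summary's set-x11-keymap command with the --no-convert option described above changes the X11 layout without deriving a console keymap from it; the layout us is just an example:

    $ localectl --no-convert set-x11-keymap us
    $ localectl    # with no arguments, show the resulting settings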
cat
The cat utility shall read files in sequence and shall write their contents to the standard output in the same sequence. The cat utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following option shall be supported: -u Write bytes from the input file to the standard output without delay as each is read.
# cat > Print and concatenate files. More information: > https://keith.github.io/xcode-man-pages/cat.1.html. * Print the contents of a file to `stdout`: `cat {{path/to/file}}` * Concatenate several files into an output file: `cat {{path/to/file1 path/to/file2 ...}} > {{path/to/output_file}}` * Append several files to an output file: `cat {{path/to/file1 path/to/file2 ...}} >> {{path/to/output_file}}` * Copy the contents of a file into an output file without buffering: `cat -u {{/dev/tty12}} > {{/dev/tty13}}` * Write `stdin` to a file: `cat - > {{path/to/file}}` * Number all output lines: `cat -n {{path/to/file}}` * Display non-printable and whitespace characters (with `M-` prefix if non-ASCII): `cat -v -t -e {{path/to/file}}`
fc
The fc utility shall list, or shall edit and re-execute, commands previously entered to an interactive sh. The command history list shall reference commands by number. The first number in the list is selected arbitrarily. The relationship of a number to its command shall not change except when the user logs in and no other process is accessing the list, at which time the system may reset the numbering to start the oldest retained command at another number (usually 1). When the number reaches an implementation-defined upper limit, which shall be no smaller than the value in HISTSIZE or 32767 (whichever is greater), the shell may wrap the numbers, starting the next command with a lower number (usually 1). However, despite this optional wrapping of numbers, fc shall maintain the time-ordering sequence of the commands. For example, if four commands in sequence are given the numbers 32766, 32767, 1 (wrapped), and 2 as they are executed, command 32767 is considered the command previous to 1, even though its number is higher. When commands are edited (when the -l option is not specified), the resulting lines shall be entered at the end of the history list and then re-executed by sh. The fc command that caused the editing shall not be entered into the history list. If the editor returns a non-zero exit status, this shall suppress the entry into the history list and the command re-execution. Any command line variable assignments or redirection operators used with fc shall affect both the fc command itself as well as the command that results; for example: fc -s -- -1 2>/dev/null reinvokes the previous command, suppressing standard error for both fc and the previous command. The fc utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -e editor Use the editor named by editor to edit the commands. The editor string is a utility name, subject to search via the PATH variable (see the Base Definitions volume of POSIX.1‐2017, Chapter 8, Environment Variables). The value in the FCEDIT variable shall be used as a default when -e is not specified. If FCEDIT is null or unset, ed shall be used as the editor. -l (The letter ell.) List the commands rather than invoking an editor on them. The commands shall be written in the sequence indicated by the first and last operands, as affected by -r, with each command preceded by the command number. -n Suppress command numbers when listing with -l. -r Reverse the order of the commands listed (with -l) or edited (with neither -l nor -s). -s Re-execute the command without invoking an editor.
# fc > Open the most recent command and edit it. More information: > https://manned.org/fc. * Open in the default system editor: `fc` * Specify an editor to open with: `fc -e {{'emacs'}}` * List recent commands from history: `fc -l` * List recent commands in reverse order: `fc -l -r` * List commands in a given interval: `fc '{{416}}' '{{420}}'`
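A short sketch of the listing and quick-substitution forms described above; the command numbers are illustrative, and -s old=new re-executes a command with the first occurrence of old replaced by new:

    $ fc -l -3           # list the last three commands, with numbers
    $ fc -l -r 416 420   # list commands 416 through 420, newest first
    $ fc -s cc=gcc 416   # re-execute command 416 with 'cc' replaced by 'gcc'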
sum
Print or check BSD (16-bit) checksums. With no FILE, or when FILE is -, read standard input. -r use BSD sum algorithm (the default), use 1K blocks -s, --sysv use System V sum algorithm, use 512 bytes blocks --help display this help and exit --version output version information and exit
# sum > Compute checksums and the number of blocks for a file. A predecessor to the > more modern `cksum`. More information: > https://www.gnu.org/software/coreutils/sum. * Compute a checksum with BSD-compatible algorithm and 1024-byte blocks: `sum {{path/to/file}}` * Compute a checksum with System V-compatible algorithm and 512-byte blocks: `sum --sysv {{path/to/file}}`
sha256sum
Print or check SHA256 (256-bit) checksums. With no FILE, or when FILE is -, read standard input. -b, --binary read in binary mode -c, --check read checksums from the FILEs and check them --tag create a BSD-style checksum -t, --text read in text mode (default) -z, --zero end each output line with NUL, not newline, and disable file name escaping The following five options are useful only when verifying checksums: --ignore-missing don't fail or report status for missing files --quiet don't print OK for each successfully verified file --status don't output anything, status code shows success --strict exit non-zero for improperly formatted checksum lines -w, --warn warn about improperly formatted checksum lines --help display this help and exit --version output version information and exit The sums are computed as described in FIPS-180-2. When checking, the input should be a former output of this program. The default mode is to print a line with: checksum, a space, a character indicating input mode ('*' for binary, ' ' for text or where binary is insignificant), and name for each FILE. Note: There is no difference between binary mode and text mode on GNU systems.
# sha256sum > Calculate SHA256 cryptographic checksums. More information: > https://www.gnu.org/software/coreutils/manual/html_node/sha2-utilities.html. * Calculate the SHA256 checksum for one or more files: `sha256sum {{path/to/file1 path/to/file2 ...}}` * Calculate and save the list of SHA256 checksums to a file: `sha256sum {{path/to/file1 path/to/file2 ...}} > {{path/to/file.sha256}}` * Calculate a SHA256 checksum from `stdin`: `{{command}} | sha256sum` * Read a file of SHA256 sums and filenames and verify all files have matching checksums: `sha256sum --check {{path/to/file.sha256}}` * Only show a message for missing files or when verification fails: `sha256sum --check --quiet {{path/to/file.sha256}}` * Only show a message when verification fails, ignoring missing files: `sha256sum --ignore-missing --check --quiet {{path/to/file.sha256}}`
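Putting the generation and verification modes together, a typical round trip looks like the following; the file names are arbitrary, and sha384sum (above) and the other SHA-2 utilities behave identically:

    $ sha256sum release.tar.gz notes.txt > SHA256SUMS
    $ sha256sum --check SHA256SUMS
    release.tar.gz: OK
    notes.txt: OK
    $ sha256sum --check --quiet SHA256SUMS && echo 'all files verified'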
runcon
Run a program in a different SELinux security context: run COMMAND with completely-specified CONTEXT, or with the current or transitioned security context modified by one or more of LEVEL, ROLE, TYPE, and USER. If none of -c, -t, -u, -r, or -l is specified, the first argument is used as the complete context. Any additional arguments after COMMAND are interpreted as arguments to the command. With neither CONTEXT nor COMMAND, print the current security context. Note that only carefully-chosen contexts are likely to run successfully. Mandatory arguments to long options are mandatory for short options too. CONTEXT Complete security context -c, --compute compute process transition context before modifying -t, --type=TYPE type (for same role as parent) -u, --user=USER user identity -r, --role=ROLE role -l, --range=RANGE levelrange --help display this help and exit --version output version information and exit Exit status: 125 if the runcon command itself fails 126 if COMMAND is found but cannot be invoked 127 if COMMAND cannot be found - the exit status of COMMAND otherwise
# runcon > Run a program in a different SELinux security context. With neither context > nor command, print the current security context. More information: > https://www.gnu.org/software/coreutils/runcon. * Determine the current domain: `runcon` * Specify the domain to run a command in: `runcon -t {{domain}}_t {{command}}` * Specify the context role to run a command with: `runcon -r {{role}}_r {{command}}` * Specify the full context to run a command with: `runcon {{user}}_u:{{role}}_r:{{domain}}_t {{command}}`
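Because runcon reserves the exit statuses listed above, a wrapper can distinguish failures of runcon itself from failures of the invoked command. A minimal sketch, where sandbox_t and do_work are placeholders:

    runcon -t sandbox_t do_work
    case $? in
      125) echo 'runcon itself failed' ;;
      126) echo 'do_work was found but could not be invoked' ;;
      127) echo 'do_work was not found' ;;
      *) ;;    # anything else is do_work's own exit status
    esac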
curl
curl is a tool for transferring data from or to a server. It supports these protocols: DICT, FILE, FTP, FTPS, GOPHER, GOPHERS, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, MQTT, POP3, POP3S, RTMP, RTMPS, RTSP, SCP, SFTP, SMB, SMBS, SMTP, SMTPS, TELNET, TFTP, WS and WSS. The command is designed to work without user interaction. curl offers a busload of useful tricks like proxy support, user authentication, FTP upload, HTTP post, SSL connections, cookies, file transfer resume and more. As you will see below, the number of features will make your head spin. curl is powered by libcurl for all transfer-related features. See libcurl(3) for details. Options start with one or two dashes. Many of the options require an additional value next to them. The short "single-dash" form of the options, -d for example, may be used with or without a space between it and its value, although a space is a recommended separator. The long "double-dash" form, -d, --data for example, requires a space between it and its value. Short version options that do not need any additional values can be used immediately next to each other, like for example you can specify all the options -O, -L and -v at once as -OLv. In general, all boolean options are enabled with --option and yet again disabled with --no-option. That is, you use the same option name but prefix it with "no-". However, in this list we mostly only list and show the --option version of them. When -:, --next is used, it resets the parser state and you start again with a clean option state, except for the options that are "global". Global options will retain their values and meaning even after -:, --next. The following options are global: --fail-early, --libcurl, --parallel-immediate, -Z, --parallel, -#, --progress-bar, --rate, -S, --show-error, --stderr, --styled-output, --trace-ascii, --trace-ids, --trace-time, --trace and -v, --verbose. --abstract-unix-socket <path> (HTTP) Connect through an abstract Unix domain socket, instead of using the network. Note: netstat shows the path of an abstract socket prefixed with '@', however the <path> argument should not have this leading character. If --abstract-unix-socket is provided several times, the last set value will be used. Example: curl --abstract-unix-socket socketpath https://example.com See also --unix-socket. Added in 7.53.0. --alt-svc <file name> (HTTPS) This option enables the alt-svc parser in curl. If the file name points to an existing alt-svc cache file, that will be used. After a completed transfer, the cache will be saved to the file name again if it has been modified. Specify a "" file name (zero length) to avoid loading/saving and make curl just handle the cache in memory. If this option is used several times, curl will load contents from all the files but the last one will be used for saving. --alt-svc can be used several times in a command line Example: curl --alt-svc svc.txt https://example.com See also --resolve and --connect-to. Added in 7.64.1. --anyauth (HTTP) Tells curl to figure out authentication method by itself, and use the most secure one the remote site claims to support. This is done by first doing a request and checking the response-headers, thus possibly inducing an extra network round-trip. This is used instead of setting a specific authentication method, which you can do with --basic, --digest, --ntlm, and --negotiate. Using --anyauth is not recommended if you do uploads from stdin, since it may require data to be sent twice and then the client must be able to rewind. 
If the need should arise when uploading from stdin, the upload operation will fail. Used together with -u, --user. Providing --anyauth multiple times has no extra effect. Example: curl --anyauth --user me:pwd https://example.com See also --proxy-anyauth, --basic and --digest. -a, --append (FTP SFTP) When used in an upload, this makes curl append to the target file instead of overwriting it. If the remote file does not exist, it will be created. Note that this flag is ignored by some SFTP servers (including OpenSSH). Providing -a, --append multiple times has no extra effect. Disable it again with --no-append. Example: curl --upload-file local --append ftp://example.com/ See also -r, --range and -C, --continue-at. --aws-sigv4 <provider1[:provider2[:region[:service]]]> Use AWS V4 signature authentication in the transfer. The provider argument is a string that is used by the algorithm when creating outgoing authentication headers. The region argument is a string that points to a geographic area of a resources collection (region-code) when the region name is omitted from the endpoint. The service argument is a string that points to a function provided by a cloud (service-code) when the service name is omitted from the endpoint. If --aws-sigv4 is provided several times, the last set value will be used. Example: curl --aws-sigv4 "aws:amz:us-east-2:es" --user "key:secret" https://example.com See also --basic and -u, --user. Added in 7.75.0. --basic (HTTP) Tells curl to use HTTP Basic authentication with the remote host. This is the default and this option is usually pointless, unless you use it to override a previously set option that sets a different authentication method (such as --ntlm, --digest, or --negotiate). Used together with -u, --user. Providing --basic multiple times has no extra effect. Example: curl -u name:password --basic https://example.com See also --proxy-basic. --ca-native (TLS) Tells curl to use the CA store from the native operating system to verify the peer. By default, curl will otherwise use a CA store provided in a single file or directory, but when using this option it will interface the operating system's own vault. This option only works for curl on Windows when built to use OpenSSL. When curl on Windows is built to use Schannel, this feature is implied and curl then only uses the native CA store. Providing --ca-native multiple times has no extra effect. Disable it again with --no-ca-native. Example: curl --ca-native https://example.com See also --cacert, --capath and -k, --insecure. Added in 8.2.0. --cacert <file> (TLS) Tells curl to use the specified certificate file to verify the peer. The file may contain multiple CA certificates. The certificate(s) must be in PEM format. Normally curl is built to use a default file for this, so this option is typically used to alter that default file. curl recognizes the environment variable named 'CURL_CA_BUNDLE' if it is set, and uses the given path as a path to a CA cert bundle. This option overrides that variable. The windows version of curl will automatically look for a CA certs file named 'curl-ca-bundle.crt', either in the same directory as curl.exe, or in the Current Working Directory, or in any folder along your PATH. If curl is built against the NSS SSL library, the NSS PEM PKCS#11 module (libnsspem.so) needs to be available for this option to work properly. 
(iOS and macOS only) If curl is built against Secure Transport, then this option is supported for backward compatibility with other SSL engines, but it should not be set. If the option is not set, then curl will use the certificates in the system and user Keychain to verify the peer, which is the preferred method of verifying the peer's certificate chain. (Schannel only) This option is supported for Schannel in Windows 7 or later with libcurl 7.60 or later. This option is supported for backward compatibility with other SSL engines; instead it is recommended to use Windows' store of root certificates (the default for Schannel). If --cacert is provided several times, the last set value will be used. Example: curl --cacert CA-file.txt https://example.com See also --capath and -k, --insecure. --capath <dir> (TLS) Tells curl to use the specified certificate directory to verify the peer. Multiple paths can be provided by separating them with ":" (e.g. "path1:path2:path3"). The certificates must be in PEM format, and if curl is built against OpenSSL, the directory must have been processed using the c_rehash utility supplied with OpenSSL. Using --capath can allow OpenSSL-powered curl to make SSL-connections much more efficiently than using --cacert if the --cacert file contains many CA certificates. If this option is set, the default capath value will be ignored. If --capath is provided several times, the last set value will be used. Example: curl --capath /local/directory https://example.com See also --cacert and -k, --insecure. --cert-status (TLS) Tells curl to verify the status of the server certificate by using the Certificate Status Request (aka. OCSP stapling) TLS extension. If this option is enabled and the server sends an invalid (e.g. expired) response, if the response suggests that the server certificate has been revoked, or no response at all is received, the verification fails. This is currently only implemented in the OpenSSL, GnuTLS and NSS backends. Providing --cert-status multiple times has no extra effect. Disable it again with --no-cert-status. Example: curl --cert-status https://example.com See also --pinnedpubkey. Added in 7.41.0. --cert-type <type> (TLS) Tells curl what type the provided client certificate is using. PEM, DER, ENG and P12 are recognized types. The default type depends on the TLS backend and is usually PEM, however for Secure Transport and Schannel it is P12. If -E, --cert is a pkcs11: URI then ENG is the default type. If --cert-type is provided several times, the last set value will be used. Example: curl --cert-type PEM --cert file https://example.com See also -E, --cert, --key and --key-type. -E, --cert <certificate[:password]> (TLS) Tells curl to use the specified client certificate file when getting a file with HTTPS, FTPS or another SSL-based protocol. The certificate must be in PKCS#12 format if using Secure Transport, or PEM format if using any other engine. If the optional password is not specified, it will be queried for on the terminal. Note that this option assumes a certificate file that is the private key and the client certificate concatenated. See -E, --cert and --key to specify them independently. In the <certificate> portion of the argument, you must escape the character ":" as "\:" so that it is not recognized as the password delimiter. Similarly, you must escape the character "\" as "\\" so that it is not recognized as an escape character. 
If curl is built against the NSS SSL library then this option can tell curl the nickname of the certificate to use within the NSS database defined by the environment variable SSL_DIR (or by default /etc/pki/nssdb). If the NSS PEM PKCS#11 module (libnsspem.so) is available then PEM files may be loaded. If you provide a path relative to the current directory, you must prefix the path with "./" in order to avoid confusion with an NSS database nickname. If curl is built against OpenSSL library, and the engine pkcs11 is available, then a PKCS#11 URI (RFC 7512) can be used to specify a certificate located in a PKCS#11 device. A string beginning with "pkcs11:" will be interpreted as a PKCS#11 URI. If a PKCS#11 URI is provided, then the --engine option will be set as "pkcs11" if none was provided and the --cert-type option will be set as "ENG" if none was provided. (iOS and macOS only) If curl is built against Secure Transport, then the certificate string can either be the name of a certificate/private key in the system or user keychain, or the path to a PKCS#12-encoded certificate and private key. If you want to use a file from the current directory, please precede it with "./" prefix, in order to avoid confusion with a nickname. (Schannel only) Client certificates must be specified by a path expression to a certificate store. (Loading PFX is not supported; you can import it to a store first). You can use "<store location>\<store name>\<thumbprint>" to refer to a certificate in the system certificates store, for example, "CurrentUser\MY\934a7ac6f8a5d579285a74fa61e19f23ddfe8d7a". Thumbprint is usually a SHA-1 hex string which you can see in certificate details. Following store locations are supported: CurrentUser, LocalMachine, CurrentService, Services, CurrentUserGroupPolicy, LocalMachineGroupPolicy, LocalMachineEnterprise. If -E, --cert is provided several times, the last set value will be used. Example: curl --cert certfile --key keyfile https://example.com See also --cert-type, --key and --key-type. --ciphers <list of ciphers> (TLS) Specifies which ciphers to use in the connection. The list of ciphers must specify valid ciphers. Read up on SSL cipher list details on this URL: https://curl.se/docs/ssl-ciphers.html If --ciphers is provided several times, the last set value will be used. Example: curl --ciphers ECDHE-ECDSA-AES256-CCM8 https://example.com See also --tlsv1.3. --compressed-ssh (SCP SFTP) Enables built-in SSH compression. This is a request, not an order; the server may or may not do it. Providing --compressed-ssh multiple times has no extra effect. Disable it again with --no-compressed-ssh. Example: curl --compressed-ssh sftp://example.com/ See also --compressed. Added in 7.56.0. --compressed (HTTP) Request a compressed response using one of the algorithms curl supports, and automatically decompress the content. Response headers are not modified when saved, so if they are "interpreted" separately again at a later point they might appear to be saying that the content is (still) compressed; while in fact it has already been decompressed. If this option is used and the server sends an unsupported encoding, curl will report an error. This is a request, not an order; the server may or may not deliver data compressed. Providing --compressed multiple times has no extra effect. Disable it again with --no-compressed. Example: curl --compressed https://example.com See also --compressed-ssh. -K, --config <file> Specify a text file to read curl arguments from. 
The command line arguments found in the text file will be used as if they were provided on the command line. Options and their parameters must be specified on the same line in the file, separated by whitespace, colon, or the equals sign. Long option names can optionally be given in the config file without the initial double dashes and if so, the colon or equals characters can be used as separators. If the option is specified with one or two dashes, there can be no colon or equals character between the option and its parameter. If the parameter contains whitespace (or starts with : or =), the parameter must be enclosed within quotes. Within double quotes, the following escape sequences are available: \\, \", \t, \n, \r and \v. A backslash preceding any other letter is ignored. If the first column of a config line is a '#' character, the rest of the line will be treated as a comment. Only write one option per physical line in the config file. Specify the filename to -K, --config as '-' to make curl read the file from stdin. Note that to be able to specify a URL in the config file, you need to specify it using the --url option, and not by simply writing the URL on its own line. So, it could look similar to this:
  url = "https://curl.se/docs/"
  # --- Example file ---
  # this is a comment
  url = "example.com"
  output = "curlhere.html"
  user-agent = "superagent/1.0"
  # and fetch another URL too
  url = "example.com/docs/manpage.html"
  -O
  referer = "http://nowhereatall.example.com/"
  # --- End of example file ---
When curl is invoked, it (unless -q, --disable is used) checks for a default config file and uses it if found, even when -K, --config is used. The default config file is checked for in the following places in this order: 1) "$CURL_HOME/.curlrc" 2) "$XDG_CONFIG_HOME/curlrc" (Added in 7.73.0) 3) "$HOME/.curlrc" 4) Windows: "%USERPROFILE%\.curlrc" 5) Windows: "%APPDATA%\.curlrc" 6) Windows: "%USERPROFILE%\Application Data\.curlrc" 7) Non-Windows: use getpwuid to find the home directory 8) On Windows, if it finds no .curlrc file in the sequence described above, it checks for one in the same dir the curl executable is placed. On Windows two filenames are checked per location: .curlrc and _curlrc, preferring the former. Older versions on Windows checked for _curlrc only. -K, --config can be used several times in a command line Example: curl --config file.txt https://example.com See also -q, --disable. --connect-timeout <fractional seconds> Maximum time in seconds that you allow curl's connection to take. This only limits the connection phase, so if curl connects within the given period it will continue - if not it will exit. Since version 7.32.0, this option accepts decimal values. The "connection phase" is considered complete when the DNS lookup and requested TCP, TLS or QUIC handshakes are done. The decimal value needs to be provided using a dot (.) as decimal separator - not the local version even if it might be using another separator. If --connect-timeout is provided several times, the last set value will be used. Examples: curl --connect-timeout 20 https://example.com curl --connect-timeout 3.14 https://example.com See also -m, --max-time. --connect-to <HOST1:PORT1:HOST2:PORT2> For a request to the given HOST1:PORT1 pair, connect to HOST2:PORT2 instead. This option is suitable to direct requests at a specific server, e.g. at a specific cluster node in a cluster of servers. This option is only used to establish the network connection. It does NOT affect the hostname/port that is used for TLS/SSL (e.g. 
SNI, certificate verification) or for the application protocols. "HOST1" and "PORT1" may be the empty string, meaning "any host/port". "HOST2" and "PORT2" may also be the empty string, meaning "use the request's original host/port". A "host" specified to this option is compared as a string, so it needs to match the name used in request URL. It can be either numerical such as "127.0.0.1" or the full host name such as "example.org". --connect-to can be used several times in a command line Example: curl --connect-to example.com:443:example.net:8443 https://example.com See also --resolve and -H, --header. Added in 7.49.0. -C, --continue-at <offset> Continue/Resume a previous file transfer at the given offset. The given offset is the exact number of bytes that will be skipped, counting from the beginning of the source file before it is transferred to the destination. If used with uploads, the FTP server command SIZE will not be used by curl. Use "-C -" to tell curl to automatically find out where/how to resume the transfer. It then uses the given output/input files to figure that out. If -C, --continue-at is provided several times, the last set value will be used. Examples: curl -C - https://example.com curl -C 400 https://example.com See also -r, --range. -c, --cookie-jar <filename> (HTTP) Specify to which file you want curl to write all cookies after a completed operation. Curl writes all cookies from its in-memory cookie storage to the given file at the end of operations. If no cookies are known, no data will be written. The file will be written using the Netscape cookie file format. If you set the file name to a single dash, "-", the cookies will be written to stdout. This command line option will activate the cookie engine that makes curl record and use cookies. Another way to activate it is to use the -b, --cookie option. If the cookie jar cannot be created or written to, the whole curl operation will not fail or even report an error clearly. Using -v, --verbose will get a warning displayed, but that is the only visible feedback you get about this possibly lethal situation. If -c, --cookie-jar is provided several times, the last set value will be used. Examples: curl -c store-here.txt https://example.com curl -c store-here.txt -b read-these https://example.com See also -b, --cookie. -b, --cookie <data|filename> (HTTP) Pass the data to the HTTP server in the Cookie header. It is supposedly the data previously received from the server in a "Set-Cookie:" line. The data should be in the format "NAME1=VALUE1; NAME2=VALUE2". This makes curl use the cookie header with this content explicitly in all outgoing request(s). If multiple requests are done due to authentication, followed redirects or similar, they will all get this cookie passed on. If no '=' symbol is used in the argument, it is instead treated as a filename to read previously stored cookie from. This option also activates the cookie engine which will make curl record incoming cookies, which may be handy if you are using this in combination with the -L, --location option or do multiple URL transfers on the same invoke. If the file name is exactly a minus ("-"), curl will instead read the contents from stdin. The file format of the file to read cookies from should be plain HTTP headers (Set-Cookie style) or the Netscape/Mozilla cookie file format. The file specified with -b, --cookie is only used as input. No cookies will be written to the file. To store cookies, use the -c, --cookie-jar option. 
If you use the Set-Cookie file format and do not specify a domain then the cookie is not sent since the domain will never match. To address this, set a domain in Set-Cookie line (doing that will include sub-domains) or preferably: use the Netscape format. Users often want to both read cookies from a file and write updated cookies back to a file, so using both -b, --cookie and -c, --cookie-jar in the same command line is common. -b, --cookie can be used several times in a command line Examples: curl -b cookiefile https://example.com curl -b cookiefile -c cookiefile https://example.com See also -c, --cookie-jar and -j, --junk-session-cookies. --create-dirs When used in conjunction with the -o, --output option, curl will create the necessary local directory hierarchy as needed. This option creates the directories mentioned with the -o, --output option, nothing else. If the -o, --output file name uses no directory, or if the directories it mentions already exist, no directories will be created. Created dirs are made with mode 0750 on unix style file systems. To create remote directories when using FTP or SFTP, try --ftp-create-dirs. Providing --create-dirs multiple times has no extra effect. Disable it again with --no-create-dirs. Example: curl --create-dirs --output local/dir/file https://example.com See also --ftp-create-dirs and --output-dir. --create-file-mode <mode> (SFTP SCP FILE) When curl is used to create files remotely using one of the supported protocols, this option allows the user to set which 'mode' to set on the file at creation time, instead of the default 0644. This option takes an octal number as argument. If --create-file-mode is provided several times, the last set value will be used. Example: curl --create-file-mode 0777 -T localfile sftp://example.com/new See also --ftp-create-dirs. Added in 7.75.0. --crlf (FTP SMTP) Convert LF to CRLF in upload. Useful for MVS (OS/390). (SMTP added in 7.40.0) Providing --crlf multiple times has no extra effect. Disable it again with --no-crlf. Example: curl --crlf -T file ftp://example.com/ See also -B, --use-ascii. --crlfile <file> (TLS) Provide a file using PEM format with a Certificate Revocation List that may specify peer certificates that are to be considered revoked. If --crlfile is provided several times, the last set value will be used. Example: curl --crlfile rejects.txt https://example.com See also --cacert and --capath. --curves <algorithm list> (TLS) Tells curl to request specific curves to use during SSL session establishment according to RFC 8422, 5.1. Multiple algorithms can be provided by separating them with ":" (e.g. "X25519:P-521"). The parameter is available identically in the "openssl s_client/s_server" utilities. --curves allows an OpenSSL-powered curl to make SSL connections with exactly the (EC) curve requested by the client, avoiding nontransparent client/server negotiations. If this option is set, the default curves list built into openssl will be ignored. If --curves is provided several times, the last set value will be used. Example: curl --curves X25519 https://example.com See also --ciphers. Added in 7.73.0. --data-ascii <data> (HTTP) This is just an alias for -d, --data. --data-ascii can be used several times in a command line Example: curl --data-ascii @file https://example.com See also --data-binary, --data-raw and --data-urlencode. --data-binary <data> (HTTP) This posts data exactly as specified with no extra processing whatsoever. If you start the data with the letter @, the rest should be a filename. 
Data is posted in a similar manner as -d, --data does, except that newlines and carriage returns are preserved and conversions are never done. Like -d, --data the default content-type sent to the server is application/x-www-form-urlencoded. If you want the data to be treated as arbitrary binary data by the server then set the content-type to octet-stream: -H "Content-Type: application/octet-stream". If this option is used several times, the ones following the first will append data as described in -d, --data. --data-binary can be used several times in a command line Example: curl --data-binary @filename https://example.com See also --data-ascii. --data-raw <data> (HTTP) This posts data similarly to -d, --data but without the special interpretation of the @ character. --data-raw can be used several times in a command line Examples: curl --data-raw "hello" https://example.com curl --data-raw "@at@at@" https://example.com See also -d, --data. Added in 7.43.0. --data-urlencode <data> (HTTP) This posts data, similar to the other -d, --data options with the exception that this performs URL-encoding. To be CGI-compliant, the <data> part should begin with a name followed by a separator and a content specification. The <data> part can be passed to curl using one of the following syntaxes: content This will make curl URL-encode the content and pass that on. Just be careful so that the content does not contain any = or @ symbols, as that will then make the syntax match one of the other cases below! =content This will make curl URL-encode the content and pass that on. The preceding = symbol is not included in the data. name=content This will make curl URL-encode the content part and pass that on. Note that the name part is expected to be URL-encoded already. @filename This will make curl load data from the given file (including any newlines), URL-encode that data and pass it on in the POST. name@filename This will make curl load data from the given file (including any newlines), URL-encode that data and pass it on in the POST. The name part gets an equal sign appended, resulting in name=urlencoded-file-content. Note that the name is expected to be URL-encoded already. --data-urlencode can be used several times in a command line Examples: curl --data-urlencode name=val https://example.com curl --data-urlencode =encodethis https://example.com curl --data-urlencode name@file https://example.com curl --data-urlencode @fileonly https://example.com See also -d, --data and --data-raw. -d, --data <data> (HTTP MQTT) Sends the specified data in a POST request to the HTTP server, in the same way that a browser does when a user has filled in an HTML form and presses the submit button. This will cause curl to pass the data to the server using the content-type application/x-www-form-urlencoded. Compare to -F, --form. --data-raw is almost the same but does not have a special interpretation of the @ character. To post data purely binary, you should instead use the --data-binary option. To URL-encode the value of a form field you may use --data-urlencode. If any of these options is used more than once on the same command line, the data pieces specified will be merged with a separating &-symbol. Thus, using '-d name=daniel -d skill=lousy' would generate a post chunk that looks like 'name=daniel&skill=lousy'. If you start the data with the letter @, the rest should be a file name to read the data from, or - if you want curl to read the data from stdin. 
Posting data from a file named 'foobar' would thus be done with -d, --data @foobar. When -d, --data is told to read from a file like that, carriage returns and newlines will be stripped out. If you do not want the @ character to have a special interpretation use --data-raw instead. The data for this option is passed on to the server exactly as provided on the command line. curl will not convert it, change it or improve it. It is up to the user to provide the data in the correct form. -d, --data can be used several times in a command line Examples: curl -d "name=curl" https://example.com curl -d "name=curl" -d "tool=cmdline" https://example.com curl -d @filename https://example.com See also --data-binary, --data-urlencode and --data-raw. This option is mutually exclusive to -F, --form and -I, --head and -T, --upload-file. --delegation <LEVEL> (GSS/kerberos) Set LEVEL to tell the server what it is allowed to delegate when it comes to user credentials. none Do not allow any delegation. policy Delegates if and only if the OK-AS-DELEGATE flag is set in the Kerberos service ticket, which is a matter of realm policy. always Unconditionally allow the server to delegate. If --delegation is provided several times, the last set value will be used. Example: curl --delegation "none" https://example.com See also -k, --insecure and --ssl. --digest (HTTP) Enables HTTP Digest authentication. This is an authentication scheme that prevents the password from being sent over the wire in clear text. Use this in combination with the normal -u, --user option to set user name and password. Providing --digest multiple times has no extra effect. Disable it again with --no-digest. Example: curl -u name:password --digest https://example.com See also -u, --user, --proxy-digest and --anyauth. This option is mutually exclusive to --basic and --ntlm and --negotiate. --disable-eprt (FTP) Tell curl to disable the use of the EPRT and LPRT commands when doing active FTP transfers. Curl will normally always first attempt to use EPRT, then LPRT before using PORT, but with this option, it will use PORT right away. EPRT and LPRT are extensions to the original FTP protocol, and may not work on all servers, but they enable more functionality in a better way than the traditional PORT command. --eprt can be used to explicitly enable EPRT again and --no-eprt is an alias for --disable-eprt. If the server is accessed using IPv6, this option will have no effect as EPRT is necessary then. Disabling EPRT only changes the active behavior. If you want to switch to passive mode you need to not use -P, --ftp-port or force it with --ftp-pasv. Providing --disable-eprt multiple times has no extra effect. Disable it again with --no-disable-eprt. Example: curl --disable-eprt ftp://example.com/ See also --disable-epsv and -P, --ftp-port. --disable-epsv (FTP) Tell curl to disable the use of the EPSV command when doing passive FTP transfers. Curl will normally always first attempt to use EPSV before PASV, but with this option, it will not try using EPSV. --epsv can be used to explicitly enable EPSV again and --no-epsv is an alias for --disable-epsv. If the server is an IPv6 host, this option will have no effect as EPSV is necessary then. Disabling EPSV only changes the passive behavior. If you want to switch to active mode you need to use -P, --ftp-port. Providing --disable-epsv multiple times has no extra effect. Disable it again with --no-disable-epsv. Example: curl --disable-epsv ftp://example.com/ See also --disable-eprt and -P, --ftp-port. 
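To make the active/passive distinction above concrete (example.com is a placeholder host): curl defaults to passive transfers, which --disable-epsv merely restricts to plain PASV, while -P, --ftp-port switches to active mode. Per the --ftp-port documentation, the argument '-' lets curl reuse the control connection's own address:

    curl --disable-epsv -O ftp://example.com/file.txt    # passive transfer, without the EPSV command
    curl -P - -O ftp://example.com/file.txt              # active transfer using EPRT/PORT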
-q, --disable

If used as the first parameter on the command line, the curlrc config file will not be read and used. See -K, --config for details on the default config file search path.

Providing -q, --disable multiple times has no extra effect. Disable it again with --no-disable.

Example:
curl -q https://example.com

See also -K, --config.

--disallow-username-in-url

(HTTP) This tells curl to exit if passed a URL containing a username. This is probably most useful when the URL is being provided at runtime or similar.

Providing --disallow-username-in-url multiple times has no extra effect. Disable it again with --no-disallow-username-in-url.

Example:
curl --disallow-username-in-url https://example.com

See also --proto. Added in 7.61.0.

--dns-interface <interface>

(DNS) Tell curl to send outgoing DNS requests through <interface>. This option is a counterpart to --interface (which does not affect DNS). The supplied string must be an interface name (not an address).

If --dns-interface is provided several times, the last set value will be used.

Example:
curl --dns-interface eth0 https://example.com

See also --dns-ipv4-addr and --dns-ipv6-addr. --dns-interface requires that the underlying libcurl was built to support c-ares. Added in 7.33.0.

--dns-ipv4-addr <address>

(DNS) Tell curl to bind to <address> when making IPv4 DNS requests, so that the DNS requests originate from this address. The argument should be a single IPv4 address.

If --dns-ipv4-addr is provided several times, the last set value will be used.

Example:
curl --dns-ipv4-addr 10.1.2.3 https://example.com

See also --dns-interface and --dns-ipv6-addr. --dns-ipv4-addr requires that the underlying libcurl was built to support c-ares. Added in 7.33.0.

--dns-ipv6-addr <address>

(DNS) Tell curl to bind to <address> when making IPv6 DNS requests, so that the DNS requests originate from this address. The argument should be a single IPv6 address.

If --dns-ipv6-addr is provided several times, the last set value will be used.

Example:
curl --dns-ipv6-addr 2a04:4e42::561 https://example.com

See also --dns-interface and --dns-ipv4-addr. --dns-ipv6-addr requires that the underlying libcurl was built to support c-ares. Added in 7.33.0.

--dns-servers <addresses>

Set the list of DNS servers to be used instead of the system default. The list of IP addresses should be separated with commas. Port numbers may also optionally be given as :<port-number> after each IP address.

If --dns-servers is provided several times, the last set value will be used.

Example:
curl --dns-servers 192.168.0.1,192.168.0.2 https://example.com

See also --dns-interface and --dns-ipv4-addr. --dns-servers requires that the underlying libcurl was built to support c-ares. Added in 7.33.0.

--doh-cert-status

Same as --cert-status but used for DoH (DNS-over-HTTPS).

Providing --doh-cert-status multiple times has no extra effect. Disable it again with --no-doh-cert-status.

Example:
curl --doh-cert-status --doh-url https://doh.example https://example.com

See also --doh-insecure. Added in 7.76.0.

--doh-insecure

Same as -k, --insecure but used for DoH (DNS-over-HTTPS).

Providing --doh-insecure multiple times has no extra effect. Disable it again with --no-doh-insecure.

Example:
curl --doh-insecure --doh-url https://doh.example https://example.com

See also --doh-url. Added in 7.76.0.

--doh-url <URL>

Specifies which DNS-over-HTTPS (DoH) server to use to resolve hostnames, instead of using the default name resolver mechanism. The URL must be HTTPS.
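For instance, a well-known public resolver endpoint (named here purely for illustration; any DoH-capable server works) could be used like:

curl --doh-url https://cloudflare-dns.com/dns-query https://example.com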
Some SSL options that you set for your transfer will apply to DoH since the name lookups take place over SSL. However, the certificate verification settings are not inherited and can be controlled separately via --doh-insecure and --doh-cert-status.

This option is unset if an empty string "" is used as the URL. (Added in 7.85.0)

If --doh-url is provided several times, the last set value will be used.

Example:
curl --doh-url https://doh.example https://example.com

See also --doh-insecure. Added in 7.62.0.

-D, --dump-header <filename>

(HTTP FTP) Write the received protocol headers to the specified file. If no headers are received, the use of this option will create an empty file.

When used in FTP, the FTP server response lines are considered to be "headers" and thus are saved there.

Having multiple transfers in one set of operations (i.e. the URLs in one -:, --next clause) will append them to the same file, separated by a blank line.

If -D, --dump-header is provided several times, the last set value will be used.

Example:
curl --dump-header store.txt https://example.com

See also -o, --output.

--egd-file <file>

(TLS) Deprecated option. This option is ignored by curl since 7.84.0. Prior to that it only had an effect on curl if built to use old versions of OpenSSL.

Specify the path name to the Entropy Gathering Daemon socket. The socket is used to seed the random engine for SSL connections.

If --egd-file is provided several times, the last set value will be used.

Example:
curl --egd-file /random/here https://example.com

See also --random-file.

--engine <name>

(TLS) Select the OpenSSL crypto engine to use for cipher operations. Use --engine list to print a list of build-time supported engines. Note that not all (and possibly none) of the engines may be available at runtime.

If --engine is provided several times, the last set value will be used.

Example:
curl --engine flavor https://example.com

See also --ciphers and --curves.

--etag-compare <file>

(HTTP) This option makes a conditional HTTP request for the specific ETag read from the given file by sending a custom If-None-Match header using the stored ETag.

For correct results, make sure that the specified file contains only a single line with the desired ETag. An empty file is parsed as an empty ETag.

Use the option --etag-save to first save the ETag from a response, and then use this option to compare against the saved ETag in a subsequent request.

If --etag-compare is provided several times, the last set value will be used.

Example:
curl --etag-compare etag.txt https://example.com

See also --etag-save and -z, --time-cond. Added in 7.68.0.

--etag-save <file>

(HTTP) This option saves an HTTP ETag to the specified file. An ETag is a caching related header, usually returned in a response. If no ETag is sent by the server, an empty file is created.

If --etag-save is provided several times, the last set value will be used.

Example:
curl --etag-save storetag.txt https://example.com

See also --etag-compare. Added in 7.68.0.

--expect100-timeout <seconds>

(HTTP) Maximum time in seconds that you allow curl to wait for a 100-continue response when curl emits an Expect: 100-continue header in its request. By default curl will wait one second. This option accepts decimal values! When curl stops waiting, it will continue as if the response has been received.

The decimal value needs to be provided using a dot (.) as the decimal separator - not the local version even if it might use another separator.
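For example, to cap the wait at half a second during an upload (the value is purely illustrative):

curl --expect100-timeout 0.5 -T file https://example.com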
If --expect100-timeout is provided several times, the last set value will be used.

Example:
curl --expect100-timeout 2.5 -T file https://example.com

See also --connect-timeout. Added in 7.47.0.

--fail-early

Fail and exit on the first detected transfer error.

When curl is used to do multiple transfers on the command line, it will attempt to operate on each given URL, one by one. By default, it will ignore errors if there are more URLs given and the last URL's success will determine the error code curl returns. So early failures will be "hidden" by subsequent successful transfers.

Using this option, curl will instead return an error on the first transfer that fails, independent of the number of URLs that are given on the command line. This way, no transfer failures go undetected by scripts and similar.

This option does not imply -f, --fail, which causes transfers to fail due to the server's HTTP status code. You can combine the two options, however note -f, --fail is not global and is therefore contained by -:, --next.

This option is global and does not need to be specified for each use of --next.

Providing --fail-early multiple times has no extra effect. Disable it again with --no-fail-early.

Example:
curl --fail-early https://example.com https://two.example

See also -f, --fail and --fail-with-body. Added in 7.52.0.

--fail-with-body

(HTTP) Return an error on server errors where the HTTP response code is 400 or greater. In normal cases when an HTTP server fails to deliver a document, it returns an HTML document stating so (which often also describes why and more). This flag will still allow curl to output and save that content but also to return error 22.

This is an alternative option to -f, --fail which makes curl fail for the same circumstances but without saving the content.

Providing --fail-with-body multiple times has no extra effect. Disable it again with --no-fail-with-body.

Example:
curl --fail-with-body https://example.com

See also -f, --fail. This option is mutually exclusive to -f, --fail. Added in 7.76.0.

-f, --fail

(HTTP) Fail fast with no output at all on server errors. This is useful to enable scripts and users to better deal with failed attempts. In normal cases when an HTTP server fails to deliver a document, it returns an HTML document stating so (which often also describes why and more). This flag will prevent curl from outputting that and return error 22.

This method is not fail-safe and there are occasions where non-successful response codes will slip through, especially when authentication is involved (response codes 401 and 407).

Providing -f, --fail multiple times has no extra effect. Disable it again with --no-fail.

Example:
curl --fail https://example.com

See also --fail-with-body. This option is mutually exclusive to --fail-with-body.

--false-start

(TLS) Tells curl to use false start during the TLS handshake. False start is a mode where a TLS client will start sending application data before verifying the server's Finished message, thus saving a round trip when performing a full handshake.

This is currently only implemented in the NSS and Secure Transport (on iOS 7.0 or later, or OS X 10.9 or later) backends.

Providing --false-start multiple times has no extra effect. Disable it again with --no-false-start.

Example:
curl --false-start https://example.com

See also --tcp-fastopen. Added in 7.42.0.

--form-escape

(HTTP) Tells curl to pass on names of multipart form fields and files using backslash-escaping instead of percent-encoding.
If --form-escape is provided several times, the last set value will be used.

Example:
curl --form-escape -F 'field\name=curl' -F 'file=@load"this' https://example.com

See also -F, --form. Added in 7.81.0.

--form-string <name=string>

(HTTP SMTP IMAP) Similar to -F, --form except that the value string for the named parameter is used literally. Leading '@' and '<' characters, and the ';type=' string in the value have no special meaning. Use this in preference to -F, --form if there's any possibility that the string value may accidentally trigger the '@' or '<' features of -F, --form.

--form-string can be used several times in a command line

Example:
curl --form-string "data" https://example.com

See also -F, --form.

-F, --form <name=content>

(HTTP SMTP IMAP) For the HTTP protocol family, this lets curl emulate a filled-in form in which a user has pressed the submit button. This causes curl to POST data using the Content-Type multipart/form-data according to RFC 2388.

For SMTP and IMAP protocols, this is the means to compose a multipart mail message to transmit.

This enables uploading of binary files etc. To force the 'content' part to be a file, prefix the file name with an @ sign. To just get the content part from a file, prefix the file name with the symbol <. The difference between @ and < is then that @ makes a file get attached in the post as a file upload, while the < makes a text field and just gets the contents for that text field from a file.

Tell curl to read content from stdin instead of a file by using - as filename. This goes for both @ and < constructs. When stdin is used, the contents are buffered in memory first by curl to determine its size and allow a possible resend. Defining a part's data from a named non-regular file (such as a named pipe or similar) is unfortunately not subject to buffering and will be effectively read at transmission time; since the full size is unknown before the transfer starts, such data is sent as chunks by HTTP and rejected by IMAP.

Example: send an image to an HTTP server, where 'profile' is the name of the form-field to which the file portrait.jpg will be the input:

curl -F profile=@portrait.jpg https://example.com/upload.cgi

Example: send your name and shoe size in two text fields to the server:

curl -F name=John -F shoesize=11 https://example.com/

Example: send your essay in a text field to the server. Send it as a plain text field, but get the contents for it from a local file:

curl -F "story=<hugefile.txt" https://example.com/

You can also tell curl what Content-Type to use by using 'type=', in a manner similar to:

curl -F "web=@index.html;type=text/html" example.com

or

curl -F "name=daniel;type=text/foo" example.com

You can also explicitly change the name field of a file upload part by setting filename=, like this:

curl -F "file=@localfile;filename=nameinpost" example.com

If filename/path contains ',' or ';', it must be quoted by double-quotes like:

curl -F "file=@\"local,file\";filename=\"name;in;post\"" example.com

or

curl -F 'file=@"local,file";filename="name;in;post"' example.com

Note that if a filename/path is quoted by double-quotes, any double-quote or backslash within the filename must be escaped by backslash.
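For instance, a file whose name contains a double-quote (the name is made up for illustration) could be attached as:

curl -F 'file=@"silly\"name"' https://example.com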
Quoting must also be applied to non-file data if it contains semicolons, leading/trailing spaces or leading double quotes:

curl -F 'colors="red; green; blue";type=text/x-myapp' example.com

You can add custom headers to the field by setting headers=, like

curl -F "submit=OK;headers=\"X-submit-type: OK\"" example.com

or

curl -F "submit=OK;headers=@headerfile" example.com

The headers= keyword may appear more than once and the above notes about quoting apply. When headers are read from a file, empty lines and lines starting with '#' are comments and ignored; each header can be folded by splitting between two words and starting the continuation line with a space; embedded carriage-returns and trailing spaces are stripped. Here is an example of a header file's contents:

# This file contains two headers.
X-header-1: this is a header

# The following header is folded.
X-header-2: this is
 another header

To support sending multipart mail messages, the syntax is extended as follows:

- name can be omitted: the equal sign is the first character of the argument,
- if data starts with '(', this signals to start a new multipart: it can be followed by a content type specification.
- a multipart can be terminated with a '=)' argument.

Example: the following command sends an SMTP mime email consisting of an inline part in two alternative formats: plain text and HTML. It attaches a text file:

curl -F '=(;type=multipart/alternative' \
 -F '=plain text message' \
 -F '= <body>HTML message</body>;type=text/html' \
 -F '=)' -F '=@textfile.txt' ... smtp://example.com

Data can be encoded for transfer using encoder=. Available encodings are binary and 8bit that do nothing else than adding the corresponding Content-Transfer-Encoding header, 7bit that only rejects 8-bit characters with a transfer error, quoted-printable and base64 that encode data according to the corresponding schemes, limiting line length to 76 characters.

Example: send multipart mail with a quoted-printable text message and a base64 attached file:

curl -F '=text message;encoder=quoted-printable' \
 -F '=@localfile;encoder=base64' ... smtp://example.com

See further examples and details in the MANUAL.

-F, --form can be used several times in a command line

Example:
curl --form "name=curl" --form "file=@loadthis" https://example.com

See also -d, --data, --form-string and --form-escape. This option is mutually exclusive to -d, --data and -I, --head and -T, --upload-file.

--ftp-account <data>

(FTP) When an FTP server asks for "account data" after user name and password has been provided, this data is sent off using the ACCT command.

If --ftp-account is provided several times, the last set value will be used.

Example:
curl --ftp-account "mr.robot" ftp://example.com/

See also -u, --user.

--ftp-alternative-to-user <command>

(FTP) If authenticating with the USER and PASS commands fails, send this command. When connecting to Tumbleweed's Secure Transport server over FTPS using a client certificate, using "SITE AUTH" will tell the server to retrieve the username from the certificate.

If --ftp-alternative-to-user is provided several times, the last set value will be used.

Example:
curl --ftp-alternative-to-user "U53r" ftp://example.com

See also --ftp-account and -u, --user.

--ftp-create-dirs

(FTP SFTP) When an FTP or SFTP URL/operation uses a path that does not currently exist on the server, the standard behavior of curl is to fail. Using this option, curl will instead attempt to create missing directories.

Providing --ftp-create-dirs multiple times has no extra effect.
Disable it again with --no-ftp-create-dirs.

Example:
curl --ftp-create-dirs -T file ftp://example.com/remote/path/file

See also --create-dirs.

--ftp-method <method>

(FTP) Control what method curl should use to reach a file on an FTP(S) server. The method argument should be one of the following alternatives:

multicwd
curl does a single CWD operation for each path part in the given URL. For deep hierarchies this means many commands. This is how RFC 1738 says it should be done. This is the default but the slowest behavior.

nocwd
curl does no CWD at all. curl will do SIZE, RETR, STOR etc and give a full path to the server for all these commands. This is the fastest behavior.

singlecwd
curl does one CWD with the full target directory and then operates on the file "normally" (like in the multicwd case). This is somewhat more standards compliant than 'nocwd' but without the full penalty of 'multicwd'.

If --ftp-method is provided several times, the last set value will be used.

Examples:
curl --ftp-method multicwd ftp://example.com/dir1/dir2/file
curl --ftp-method nocwd ftp://example.com/dir1/dir2/file
curl --ftp-method singlecwd ftp://example.com/dir1/dir2/file

See also -l, --list-only.

--ftp-pasv

(FTP) Use passive mode for the data connection. Passive is the internal default behavior, but this option can be used to override a previous -P, --ftp-port option.

Undoing an enforced passive mode is not possible; you must instead enforce the correct -P, --ftp-port again.

Passive mode means that curl will try the EPSV command first and then PASV, unless --disable-epsv is used.

Providing --ftp-pasv multiple times has no extra effect. Disable it again with --no-ftp-pasv.

Example:
curl --ftp-pasv ftp://example.com/

See also --disable-epsv.

-P, --ftp-port <address>

(FTP) Reverses the default initiator/listener roles when connecting with FTP. This option makes curl use active mode. curl then tells the server to connect back to the client's specified address and port, while passive mode asks the server to set up an IP address and port for it to connect to. <address> should be one of:

interface
e.g. "eth0" to specify which interface's IP address you want to use (Unix only)

IP address
e.g. "192.168.10.1" to specify the exact IP address

host name
e.g. "my.host.domain" to specify the machine

-
make curl pick the same IP address that is already used for the control connection

Disable the use of PORT with --ftp-pasv. Disable the attempt to use the EPRT command instead of PORT by using --disable-eprt. EPRT is really PORT++.

You can also append ":[start]-[end]" to the right of the address, to tell curl what TCP port range to use. That means you specify a port range, from a lower to a higher number. A single number works as well, but do note that it increases the risk of failure since the port may not be available.

If -P, --ftp-port is provided several times, the last set value will be used.

Examples:
curl -P - ftp://example.com
curl -P eth0 ftp://example.com
curl -P 192.168.0.2 ftp://example.com

See also --ftp-pasv and --disable-eprt.

--ftp-pret

(FTP) Tell curl to send a PRET command before PASV (and EPSV). Certain FTP servers, mainly drftpd, require this non-standard command for directory listings as well as uploads and downloads in PASV mode.

Providing --ftp-pret multiple times has no extra effect. Disable it again with --no-ftp-pret.

Example:
curl --ftp-pret ftp://example.com/

See also -P, --ftp-port and --ftp-pasv.
--ftp-skip-pasv-ip

(FTP) Tell curl to not use the IP address the server suggests in its response to curl's PASV command when curl connects the data connection. Instead curl will re-use the same IP address it already uses for the control connection.

Since curl 7.74.0 this option is enabled by default.

This option has no effect if PORT, EPRT or EPSV is used instead of PASV.

Providing --ftp-skip-pasv-ip multiple times has no extra effect. Disable it again with --no-ftp-skip-pasv-ip.

Example:
curl --ftp-skip-pasv-ip ftp://example.com/

See also --ftp-pasv.

--ftp-ssl-ccc-mode <active/passive>

(FTP) Sets the CCC mode. The passive mode will not initiate the shutdown, but instead wait for the server to do it, and will not reply to the shutdown from the server. The active mode initiates the shutdown and waits for a reply from the server.

Providing --ftp-ssl-ccc-mode multiple times has no extra effect. Disable it again with --no-ftp-ssl-ccc-mode.

Example:
curl --ftp-ssl-ccc-mode active --ftp-ssl-ccc ftps://example.com/

See also --ftp-ssl-ccc.

--ftp-ssl-ccc

(FTP) Use CCC (Clear Command Channel). It shuts down the SSL/TLS layer after authenticating. The rest of the control channel communication will be unencrypted. This allows NAT routers to follow the FTP transaction. The default mode is passive.

Providing --ftp-ssl-ccc multiple times has no extra effect. Disable it again with --no-ftp-ssl-ccc.

Example:
curl --ftp-ssl-ccc ftps://example.com/

See also --ssl and --ftp-ssl-ccc-mode.

--ftp-ssl-control

(FTP) Require SSL/TLS for the FTP login, clear for transfer. Allows secure authentication, but non-encrypted data transfers for efficiency. Fails the transfer if the server does not support SSL/TLS.

Providing --ftp-ssl-control multiple times has no extra effect. Disable it again with --no-ftp-ssl-control.

Example:
curl --ftp-ssl-control ftp://example.com

See also --ssl.

-G, --get

When used, this option will make all data specified with -d, --data, --data-binary or --data-urlencode be used in an HTTP GET request instead of the POST request that otherwise would be used. The data will be appended to the URL with a '?' separator.

If used in combination with -I, --head, the POST data will instead be appended to the URL with a HEAD request.

Providing -G, --get multiple times has no extra effect. Disable it again with --no-get.

Examples:
curl --get https://example.com
curl --get -d "tool=curl" -d "age=old" https://example.com
curl --get -I -d "tool=curl" https://example.com

See also -d, --data and -X, --request.

-g, --globoff

This option switches off the "URL globbing parser". When you set this option, you can specify URLs that contain the letters {}[] without having curl itself interpret them. Note that these letters are not normal legal URL contents but they should be encoded according to the URI standard.

Providing -g, --globoff multiple times has no extra effect. Disable it again with --no-globoff.

Example:
curl -g "https://example.com/{[]}}}}"

See also -K, --config and -q, --disable.

--happy-eyeballs-timeout-ms <milliseconds>

Happy Eyeballs is an algorithm that attempts to connect to both IPv4 and IPv6 addresses for dual-stack hosts, giving IPv6 a head-start of the specified number of milliseconds. If the IPv6 address cannot be connected to within that time, then a connection attempt is made to the IPv4 address in parallel. The first connection to be established is the one that is used.

The range of suggested useful values is limited.
Happy Eyeballs RFC 6555 says "It is RECOMMENDED that connection attempts be paced 150-250 ms apart to balance human factors against network load." libcurl currently defaults to 200 ms. Firefox and Chrome currently default to 300 ms.

If --happy-eyeballs-timeout-ms is provided several times, the last set value will be used.

Example:
curl --happy-eyeballs-timeout-ms 500 https://example.com

See also -m, --max-time and --connect-timeout. Added in 7.59.0.

--haproxy-clientip

(HTTP) Sets a client IP in the HAProxy PROXY protocol v1 header at the beginning of the connection.

For valid requests, IPv4 addresses must be indicated as a series of exactly 4 integers in the range [0..255] inclusive written in decimal representation separated by exactly one dot between each other. Leading zeroes are not permitted in front of numbers in order to avoid any possible confusion with octal numbers.

IPv6 addresses must be indicated as a series of 4 hexadecimal digits (upper or lower case) delimited by colons between each other, with the acceptance of one double colon sequence to replace the largest acceptable range of consecutive zeroes. The total number of decoded bits must exactly be 128.

Otherwise, any string can be accepted for the client IP and will be sent.

It replaces --haproxy-protocol if used; it is not necessary to specify both flags.

This option is primarily useful when sending test requests to verify a service is working as intended.

If --haproxy-clientip is provided several times, the last set value will be used.

Example:
curl --haproxy-clientip $IP

See also -x, --proxy. Added in 8.2.0.

--haproxy-protocol

(HTTP) Send a HAProxy PROXY protocol v1 header at the beginning of the connection. This is used by some load balancers and reverse proxies to indicate the client's true IP address and port.

This option is primarily useful when sending test requests to a service that expects this header.

Providing --haproxy-protocol multiple times has no extra effect. Disable it again with --no-haproxy-protocol.

Example:
curl --haproxy-protocol https://example.com

See also -x, --proxy. Added in 7.60.0.

-I, --head

(HTTP FTP FILE) Fetch the headers only! HTTP-servers feature the command HEAD which this uses to get nothing but the header of a document. When used on an FTP or FILE file, curl displays the file size and last modification time only.

Providing -I, --head multiple times has no extra effect. Disable it again with --no-head.

Example:
curl -I https://example.com

See also -G, --get, -v, --verbose and --trace-ascii.

-H, --header <header/@file>

(HTTP IMAP SMTP) Extra header to include in the information sent. When used within an HTTP request, it is added to the regular request headers.

For an IMAP or SMTP MIME uploaded mail built with -F, --form options, it is prepended to the resulting MIME document, effectively including it at the mail global level. It does not affect raw uploaded mails (Added in 7.56.0).

You may specify any number of extra headers. Note that if you should add a custom header that has the same name as one of the internal ones curl would use, your externally set header will be used instead of the internal one. This allows you to make even trickier stuff than curl would normally do. You should not replace internally set headers without knowing perfectly well what you are doing. Remove an internal header by giving a replacement without content on the right side of the colon, as in: -H "Host:".
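As an illustration, this strips curl's default Accept: header from an HTTP request (any internal header can be removed the same way):

curl -H "Accept:" https://example.com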
If you send the custom header with no value then its header must be terminated with a semicolon, such as -H "X-Custom-Header;" to send "X-Custom-Header:".

curl will make sure that each header you add/replace is sent with the proper end-of-line marker; you should thus not add that as part of the header content: do not add newlines or carriage returns, they will only mess things up for you.

This option can take an argument in @filename style, which then adds a header for each line in the input file. Using @- will make curl read the header file from stdin. Added in 7.55.0.

Please note that most anti-spam utilities check the presence and value of several MIME mail headers: these are "From:", "To:", "Date:" and "Subject:" among others and should be added with this option.

You need --proxy-header to send custom headers intended for an HTTP proxy. Added in 7.37.0.

Passing on a "Transfer-Encoding: chunked" header when doing an HTTP request with a request body, will make curl send the data using chunked encoding.

WARNING: headers set with this option will be set in all HTTP requests - even after redirects are followed, like when told with -L, --location. This can lead to the header being sent to other hosts than the original host, so sensitive headers should be used with caution combined with following redirects.

-H, --header can be used several times in a command line

Examples:
curl -H "X-First-Name: Joe" https://example.com
curl -H "User-Agent: yes-please/2000" https://example.com
curl -H "Host:" https://example.com
curl -H @headers.txt https://example.com

See also -A, --user-agent and -e, --referer.

-h, --help <category>

Usage help. This lists all options of the given <category>. If no arg was provided, curl will display the most important command line arguments. If the argument "all" was provided, curl will display all options available. If the argument "category" was provided, curl will display all categories and their meanings.

Example:
curl --help all

See also -v, --verbose.

--hostpubmd5 <md5>

(SFTP SCP) Pass a string containing 32 hexadecimal digits. The string should be the 128-bit MD5 checksum of the remote host's public key, and curl will refuse the connection with the host unless the md5sums match.

If --hostpubmd5 is provided several times, the last set value will be used.

Example:
curl --hostpubmd5 e5c1c49020640a5ab0f2034854c321a8 sftp://example.com/

See also --hostpubsha256.

--hostpubsha256 <sha256>

(SFTP SCP) Pass a string containing a Base64-encoded SHA256 hash of the remote host's public key. Curl will refuse the connection with the host unless the hashes match.

This feature requires libcurl to be built with libssh2 and does not work with other SSH backends.

If --hostpubsha256 is provided several times, the last set value will be used.

Example:
curl --hostpubsha256 NDVkMTQxMGQ1ODdmMjQ3MjczYjAyOTY5MmRkMjVmNDQ= sftp://example.com/

See also --hostpubmd5. Added in 7.80.0.

--hsts <file name>

(HTTPS) This option enables HSTS for the transfer. If the file name points to an existing HSTS cache file, that will be used. After a completed transfer, the cache will be saved to the file name again if it has been modified.

If curl is told to use HTTP:// for a transfer involving a host name that exists in the HSTS cache, it upgrades the transfer to use HTTPS. Each HSTS cache entry has an individual life time after which the upgrade is no longer performed.

Specify a "" file name (zero length) to avoid loading/saving and make curl just handle HSTS in memory.
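A minimal sketch of that in-memory variant:

curl --hsts "" https://example.com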
If this option is used several times, curl will load contents from all the files but the last one will be used for saving.

--hsts can be used several times in a command line

Example:
curl --hsts cache.txt https://example.com

See also --proto. Added in 7.74.0.

--http0.9

(HTTP) Tells curl to be fine with an HTTP version 0.9 response.

HTTP/0.9 is a completely headerless response and therefore you can also connect with this to non-HTTP servers and still get a response since curl will simply transparently downgrade - if allowed.

Since curl 7.66.0, HTTP/0.9 is disabled by default.

Providing --http0.9 multiple times has no extra effect. Disable it again with --no-http0.9.

Example:
curl --http0.9 https://example.com

See also --http1.1, --http2 and --http3. Added in 7.64.0.

-0, --http1.0

(HTTP) Tells curl to use HTTP version 1.0 instead of using its internally preferred HTTP version.

Providing -0, --http1.0 multiple times has no extra effect.

Example:
curl --http1.0 https://example.com

See also --http0.9 and --http1.1. This option is mutually exclusive to --http1.1 and --http2 and --http2-prior-knowledge and --http3.

--http1.1

(HTTP) Tells curl to use HTTP version 1.1.

Providing --http1.1 multiple times has no extra effect.

Example:
curl --http1.1 https://example.com

See also -0, --http1.0 and --http0.9. This option is mutually exclusive to -0, --http1.0 and --http2 and --http2-prior-knowledge and --http3. Added in 7.33.0.

--http2-prior-knowledge

(HTTP) Tells curl to issue its non-TLS HTTP requests using HTTP/2 without HTTP/1.1 Upgrade. It requires prior knowledge that the server supports HTTP/2 straight away. HTTPS requests will still do HTTP/2 the standard way with negotiated protocol version in the TLS handshake.

Providing --http2-prior-knowledge multiple times has no extra effect. Disable it again with --no-http2-prior-knowledge.

Example:
curl --http2-prior-knowledge https://example.com

See also --http2 and --http3. --http2-prior-knowledge requires that the underlying libcurl was built to support HTTP/2. This option is mutually exclusive to --http1.1 and -0, --http1.0 and --http2 and --http3. Added in 7.49.0.

--http2

(HTTP) Tells curl to use HTTP version 2.

For HTTPS, this means curl will attempt to negotiate HTTP/2 in the TLS handshake. curl does this by default.

For HTTP, this means curl will attempt to upgrade the request to HTTP/2 using the Upgrade: request header.

When curl uses HTTP/2 over HTTPS, it does not itself insist on TLS 1.2 or higher even though that is required by the specification. A user can add this version requirement with --tlsv1.2.

Providing --http2 multiple times has no extra effect.

Example:
curl --http2 https://example.com

See also --http1.1, --http3 and --no-alpn. --http2 requires that the underlying libcurl was built to support HTTP/2. This option is mutually exclusive to --http1.1 and -0, --http1.0 and --http2-prior-knowledge and --http3. Added in 7.33.0.

--http3-only

(HTTP) **WARNING**: this option is experimental. Do not use in production.

Instructs curl to use HTTP/3 to the host in the URL, with no fallback to earlier HTTP versions. HTTP/3 can only be used for HTTPS and not for HTTP URLs. For HTTP, this option will trigger an error.

This option allows a user to avoid using the Alt-Svc method of upgrading to HTTP/3 when you know that the target speaks HTTP/3 on the given host and port.

This option will make curl fail if a QUIC connection cannot be established; it will not attempt any other HTTP version on its own.
Use --http3 for similar functionality with a fallback.

Providing --http3-only multiple times has no extra effect.

Example:
curl --http3-only https://example.com

See also --http1.1, --http2 and --http3. --http3-only requires that the underlying libcurl was built to support HTTP/3. This option is mutually exclusive to --http1.1 and -0, --http1.0 and --http2 and --http2-prior-knowledge and --http3. Added in 7.88.0.

--http3

(HTTP) **WARNING**: this option is experimental. Do not use in production.

Tells curl to try HTTP/3 to the host in the URL, but fall back to earlier HTTP versions if the HTTP/3 connection establishment fails. HTTP/3 is only available for HTTPS and not for HTTP URLs.

This option allows a user to avoid using the Alt-Svc method of upgrading to HTTP/3 when you know that the target speaks HTTP/3 on the given host and port.

When asked to use HTTP/3, curl will issue a separate attempt to use older HTTP versions with a slight delay, so if the HTTP/3 transfer fails or is very slow, curl will still try to proceed with an older HTTP version.

Use --http3-only for similar functionality without a fallback.

Providing --http3 multiple times has no extra effect.

Example:
curl --http3 https://example.com

See also --http1.1 and --http2. --http3 requires that the underlying libcurl was built to support HTTP/3. This option is mutually exclusive to --http1.1 and -0, --http1.0 and --http2 and --http2-prior-knowledge and --http3-only. Added in 7.66.0.

--ignore-content-length

(FTP HTTP) For HTTP, ignore the Content-Length header. This is particularly useful for servers running Apache 1.x, which will report incorrect Content-Length for files larger than 2 gigabytes.

For FTP (since 7.46.0), skip the RETR command to figure out the size before downloading a file.

This option does not work for HTTP if libcurl was built to use hyper.

Providing --ignore-content-length multiple times has no extra effect. Disable it again with --no-ignore-content-length.

Example:
curl --ignore-content-length https://example.com

See also --ftp-skip-pasv-ip.

-i, --include

Include the HTTP response headers in the output. The HTTP response headers can include things like server name, cookies, date of the document, HTTP version and more...

To view the request headers, consider the -v, --verbose option.

Providing -i, --include multiple times has no extra effect. Disable it again with --no-include.

Example:
curl -i https://example.com

See also -v, --verbose.

-k, --insecure

(TLS SFTP SCP) By default, every secure connection curl makes is verified to be secure before the transfer takes place. This option makes curl skip the verification step and proceed without checking.

When this option is not used for protocols using TLS, curl verifies the server's TLS certificate before it continues: that the certificate contains the right name which matches the host name used in the URL and that the certificate has been signed by a CA certificate present in the cert store. See this online resource for further details: https://curl.se/docs/sslcerts.html

For SFTP and SCP, this option makes curl skip the known_hosts verification. known_hosts is a file normally stored in the user's home directory in the ".ssh" subdirectory, which contains host names and their public keys.

WARNING: using this option makes the transfer insecure.

When curl uses secure protocols it trusts responses and allows for example HSTS and Alt-Svc information to be stored and used subsequently.
Using -k, --insecure can make curl trust and use such information from malicious servers.

Providing -k, --insecure multiple times has no extra effect. Disable it again with --no-insecure.

Example:
curl --insecure https://example.com

See also --proxy-insecure, --cacert and --capath.

--interface <name>

Perform an operation using a specified interface. You can enter interface name, IP address or host name. An example could look like:

curl --interface eth0:1 https://www.example.com/

On Linux it can be used to specify a VRF, but the binary needs to either have CAP_NET_RAW or to be run as root. More information about Linux VRF: https://www.kernel.org/doc/Documentation/networking/vrf.txt

If --interface is provided several times, the last set value will be used.

Example:
curl --interface eth0 https://example.com

See also --dns-interface.

-4, --ipv4

This option tells curl to use IPv4 addresses only when resolving host names, and not for example try IPv6.

Providing -4, --ipv4 multiple times has no extra effect.

Example:
curl --ipv4 https://example.com

See also --http1.1 and --http2. This option is mutually exclusive to -6, --ipv6.

-6, --ipv6

This option tells curl to use IPv6 addresses only when resolving host names, and not for example try IPv4.

Providing -6, --ipv6 multiple times has no extra effect.

Example:
curl --ipv6 https://example.com

See also --http1.1 and --http2. This option is mutually exclusive to -4, --ipv4.

--json <data>

(HTTP) Sends the specified JSON data in a POST request to the HTTP server. --json works as a shortcut for passing on these three options:

--data [arg]
--header "Content-Type: application/json"
--header "Accept: application/json"

There is no verification that the passed in data is actual JSON or that the syntax is correct.

If you start the data with the letter @, the rest should be a file name to read the data from, or a single dash (-) if you want curl to read the data from stdin. Posting data from a file named 'foobar' would thus be done with --json @foobar and to instead read the data from stdin, use --json @-.

If this option is used more than once on the same command line, the additional data pieces will be concatenated to the previous before sending.

The headers this option sets can be overridden with -H, --header as usual.

--json can be used several times in a command line

Examples:
curl --json '{ "drink": "coffee" }' https://example.com
curl --json '{ "drink":' --json ' "coffee" }' https://example.com
curl --json @prepared https://example.com
curl --json @- https://example.com < json.txt

See also --data-binary and --data-raw. This option is mutually exclusive to -F, --form and -I, --head and -T, --upload-file. Added in 7.82.0.

-j, --junk-session-cookies

(HTTP) When curl is told to read cookies from a given file, this option will make it discard all "session cookies". This will basically have the same effect as if a new session is started. Typical browsers always discard session cookies when they are closed down.

Providing -j, --junk-session-cookies multiple times has no extra effect. Disable it again with --no-junk-session-cookies.

Example:
curl --junk-session-cookies -b cookies.txt https://example.com

See also -b, --cookie and -c, --cookie-jar.

--keepalive-time <seconds>

This option sets the time a connection needs to remain idle before sending keepalive probes and the time between individual keepalive probes. It is currently effective on operating systems offering the TCP_KEEPIDLE and TCP_KEEPINTVL socket options (meaning Linux, recent AIX, HP-UX and more).
Keepalives are used by the TCP stack to detect broken networks on idle connections. The number of missed keepalive probes before declaring the connection down is OS dependent and is commonly 9 or 10. This option has no effect if --no-keepalive is used.

If unspecified, the option defaults to 60 seconds.

If --keepalive-time is provided several times, the last set value will be used.

Example:
curl --keepalive-time 20 https://example.com

See also --no-keepalive and -m, --max-time.

--key-type <type>

(TLS) Private key file type. Specify which type your --key provided private key is. DER, PEM, and ENG are supported. If not specified, PEM is assumed.

If --key-type is provided several times, the last set value will be used.

Example:
curl --key-type DER --key here https://example.com

See also --key.

--key <key>

(TLS SSH) Private key file name. Allows you to provide your private key in this separate file. For SSH, if not specified, curl tries the following candidates in order: '~/.ssh/id_rsa', '~/.ssh/id_dsa', './id_rsa', './id_dsa'.

If curl is built against the OpenSSL library, and the engine pkcs11 is available, then a PKCS#11 URI (RFC 7512) can be used to specify a private key located in a PKCS#11 device. A string beginning with "pkcs11:" will be interpreted as a PKCS#11 URI. If a PKCS#11 URI is provided, then the --engine option will be set as "pkcs11" if none was provided and the --key-type option will be set as "ENG" if none was provided.

If curl is built against Secure Transport or Schannel then this option is ignored for TLS protocols (HTTPS, etc). Those backends expect the private key to be already present in the keychain or PKCS#12 file containing the certificate.

If --key is provided several times, the last set value will be used.

Example:
curl --cert certificate --key here https://example.com

See also --key-type and -E, --cert.

--krb <level>

(FTP) Enable Kerberos authentication and use. The level must be entered and should be one of 'clear', 'safe', 'confidential', or 'private'. Should you use a level that is not one of these, 'private' will instead be used.

If --krb is provided several times, the last set value will be used.

Example:
curl --krb clear ftp://example.com/

See also --delegation and --ssl. --krb requires that the underlying libcurl was built to support Kerberos.

--libcurl <file>

Append this option to any ordinary curl command line, and you will get libcurl-using C source code written to the file that does the equivalent of what your command-line operation does!

This option is global and does not need to be specified for each use of --next.

If --libcurl is provided several times, the last set value will be used.

Example:
curl --libcurl client.c https://example.com

See also -v, --verbose.

--limit-rate <speed>

Specify the maximum transfer rate you want curl to use - for both downloads and uploads. This feature is useful if you have a limited pipe and you would like your transfer not to use your entire bandwidth - to make it slower than it otherwise would be.

The given speed is measured in bytes/second, unless a suffix is appended. Appending 'k' or 'K' will count the number as kilobytes, 'm' or 'M' makes it megabytes, while 'g' or 'G' makes it gigabytes. The suffixes (k, M, G, T, P) are 1024 based. For example 1k is 1024. Examples: 200K, 3m and 1G.

The rate limiting logic works on averaging the transfer speed to no more than the set threshold over a period of multiple seconds.
If you also use the -Y, --speed-limit option, that option will take precedence and might cripple the rate-limiting slightly, to help keep the speed-limit logic working.

If --limit-rate is provided several times, the last set value will be used.

Examples:
curl --limit-rate 100K https://example.com
curl --limit-rate 1000 https://example.com
curl --limit-rate 10M https://example.com

See also --rate, -Y, --speed-limit and -y, --speed-time.

-l, --list-only

(FTP POP3) (FTP) When listing an FTP directory, this switch forces a name-only view. This is especially useful if the user wants to machine-parse the contents of an FTP directory since the normal directory view does not use a standard look or format. When used like this, the option causes an NLST command to be sent to the server instead of LIST.

Note: Some FTP servers list only files in their response to NLST; they do not include sub-directories and symbolic links.

(POP3) When retrieving a specific email from POP3, this switch forces a LIST command to be performed instead of RETR. This is particularly useful if the user wants to see if a specific message-id exists on the server and what size it is.

Note: When combined with -X, --request, this option can be used to send a UIDL command instead, so the user may use the email's unique identifier rather than its message-id to make the request.

Providing -l, --list-only multiple times has no extra effect. Disable it again with --no-list-only.

Example:
curl --list-only ftp://example.com/dir/

See also -Q, --quote and -X, --request.

--local-port <num/range>

Set a preferred single number or range (FROM-TO) of local port numbers to use for the connection(s). Note that port numbers by nature are a scarce resource that will be busy at times so setting this range to something too narrow might cause unnecessary connection setup failures.

If --local-port is provided several times, the last set value will be used.

Example:
curl --local-port 1000-3000 https://example.com

See also -g, --globoff.

--location-trusted

(HTTP) Like -L, --location, but will allow sending the name + password to all hosts that the site may redirect to. This may or may not introduce a security breach if the site redirects you to a site to which you will send your authentication info (which is plaintext in the case of HTTP Basic authentication).

Providing --location-trusted multiple times has no extra effect. Disable it again with --no-location-trusted.

Example:
curl --location-trusted -u user:password https://example.com

See also -u, --user.

-L, --location

(HTTP) If the server reports that the requested page has moved to a different location (indicated with a Location: header and a 3XX response code), this option will make curl redo the request on the new place. If used together with -i, --include or -I, --head, headers from all requested pages will be shown.

When authentication is used, curl only sends its credentials to the initial host. If a redirect takes curl to a different host, it will not be able to intercept the user+password. See also --location-trusted on how to change this.

You can limit the number of redirects to follow by using the --max-redirs option.

When curl follows a redirect and if the request is a POST, it will send the following request with a GET if the HTTP response was 301, 302, or 303. If the response code was any other 3xx code, curl will re-send the following request using the same unmodified method.
You can tell curl to not change POST requests to GET after a 30x response by using the dedicated options for that: --post301, --post302 and --post303.

The method set with -X, --request overrides the method curl would otherwise select to use.

Providing -L, --location multiple times has no extra effect. Disable it again with --no-location.

Example:
curl -L https://example.com

See also --resolve and --alt-svc.

--login-options <options>

(IMAP LDAP POP3 SMTP) Specify the login options to use during server authentication.

You can use login options to specify protocol specific options that may be used during authentication. At present only IMAP, POP3 and SMTP support login options. For more information about login options please see RFC 2384, RFC 5092 and the IETF draft draft-earhart-url-smtp-00.txt

Since 8.2.0, IMAP supports the login option "AUTH=+LOGIN". With this option, curl uses the plain (not SASL) LOGIN IMAP command even if the server advertises SASL authentication. Care should be taken in using this option, as it will send out your password in plain text. This will not work if the IMAP server disables the plain LOGIN (e.g. to prevent password snooping).

If --login-options is provided several times, the last set value will be used.

Example:
curl --login-options 'AUTH=*' imap://example.com

See also -u, --user. Added in 7.34.0.

--mail-auth <address>

(SMTP) Specify a single address. This will be used to specify the authentication address (identity) of a submitted message that is being relayed to another server.

If --mail-auth is provided several times, the last set value will be used.

Example:
curl --mail-auth user@example.com -T mail smtp://example.com/

See also --mail-rcpt and --mail-from.

--mail-from <address>

(SMTP) Specify a single address that the given mail should get sent from.

If --mail-from is provided several times, the last set value will be used.

Example:
curl --mail-from user@example.com -T mail smtp://example.com/

See also --mail-rcpt and --mail-auth.

--mail-rcpt-allowfails

(SMTP) When sending data to multiple recipients, by default curl will abort the SMTP conversation if at least one of the recipients causes the RCPT TO command to return an error.

The default behavior can be changed by passing the --mail-rcpt-allowfails command-line option, which will make curl ignore errors and proceed with the remaining valid recipients.

If all recipients trigger RCPT TO failures and this flag is specified, curl will still abort the SMTP conversation and return the error received in response to the last RCPT TO command.

Providing --mail-rcpt-allowfails multiple times has no extra effect. Disable it again with --no-mail-rcpt-allowfails.

Example:
curl --mail-rcpt-allowfails --mail-rcpt dest@example.com smtp://example.com

See also --mail-rcpt. Added in 7.69.0.

--mail-rcpt <address>

(SMTP) Specify a single email address, user name or mailing list name. Repeat this option several times to send to multiple recipients.

When performing an address verification (VRFY command), the recipient should be specified as the user name or user name and domain (as per Section 3.5 of RFC 5321). (Added in 7.34.0)

When performing a mailing list expand (EXPN command), the recipient should be specified using the mailing list name, such as "Friends" or "London-Office". (Added in 7.34.0)

--mail-rcpt can be used several times in a command line

Example:
curl --mail-rcpt user@example.net smtp://example.com

See also --mail-rcpt-allowfails.

-M, --manual

Manual. Display the huge help text.
Example:
curl --manual

See also -v, --verbose, --libcurl and --trace.

--max-filesize <bytes>

(FTP HTTP MQTT) Specify the maximum size (in bytes) of a file to download. If the file requested is larger than this value, the transfer will not start and curl will return with exit code 63.

A size modifier may be used. For example, appending 'k' or 'K' will count the number as kilobytes, 'm' or 'M' makes it megabytes, while 'g' or 'G' makes it gigabytes. Examples: 200K, 3m and 1G. (Added in 7.58.0)

NOTE: The file size is not always known prior to download, and for such files this option has no effect even if the file transfer ends up being larger than this given limit.

If --max-filesize is provided several times, the last set value will be used.

Example:
curl --max-filesize 100K https://example.com

See also --limit-rate.

--max-redirs <num>

(HTTP) Set maximum number of redirections to follow. When -L, --location is used, this option can be used to prevent curl from following too many redirects. By default, the limit is set to 50 redirects. Set this option to -1 to make it unlimited.

If --max-redirs is provided several times, the last set value will be used.

Example:
curl --max-redirs 3 --location https://example.com

See also -L, --location.

-m, --max-time <fractional seconds>

Maximum time in seconds that you allow each transfer to take. This is useful for preventing your batch jobs from hanging for hours due to slow networks or links going down. Since 7.32.0, this option accepts decimal values, but the actual timeout will decrease in accuracy as the specified timeout increases in decimal precision.

If you enable retrying the transfer (--retry) then the maximum time counter is reset each time the transfer is retried. You can use --retry-max-time to limit the retry time.

The decimal value needs to be provided using a dot (.) as the decimal separator - not the local version even if it might use another separator.

If -m, --max-time is provided several times, the last set value will be used.

Examples:
curl --max-time 10 https://example.com
curl --max-time 2.92 https://example.com

See also --connect-timeout and --retry-max-time.

--metalink

This option was previously used to specify a metalink resource. Metalink support has been disabled in curl since 7.78.0 for security reasons.

If --metalink is provided several times, the last set value will be used.

Example:
curl --metalink file https://example.com

See also -Z, --parallel.

--negotiate

(HTTP) Enables Negotiate (SPNEGO) authentication.

This option requires a library built with GSS-API or SSPI support. Use -V, --version to see if your curl supports GSS-API/SSPI or SPNEGO.

When using this option, you must also provide a fake -u, --user option to activate the authentication code properly. Sending a '-u :' is enough as the user name and password from the -u, --user option are not actually used.

Providing --negotiate multiple times has no extra effect.

Example:
curl --negotiate -u : https://example.com

See also --basic, --ntlm, --anyauth and --proxy-negotiate.

--netrc-file <filename>

This option is similar to -n, --netrc, except that you provide the path (absolute or relative) to the netrc file that curl should use. You can only specify one netrc file per invocation. It will abide by --netrc-optional if specified.

If --netrc-file is provided several times, the last set value will be used.

Example:
curl --netrc-file netrc https://example.com

See also -n, --netrc, -u, --user and -K, --config.
This option is mutually exclusive to -n, --netrc.

--netrc-optional

Similar to -n, --netrc, but this option makes the .netrc usage optional and not mandatory as the -n, --netrc option does.

Providing --netrc-optional multiple times has no extra effect. Disable it again with --no-netrc-optional.

Example:
curl --netrc-optional https://example.com

See also --netrc-file. This option is mutually exclusive to -n, --netrc.

-n, --netrc

Makes curl scan the .netrc (_netrc on Windows) file in the user's home directory for login name and password. This is typically used for FTP on Unix. If used with HTTP, curl will enable user authentication. See netrc(5) and ftp(1) for details on the file format. Curl will not complain if that file does not have the right permissions (it should be neither world- nor group-readable). The environment variable "HOME" is used to find the home directory.

A quick and simple example of how to set up a .netrc to allow curl to FTP to the machine host.domain.com with user name 'myself' and password 'secret' could look similar to:

machine host.domain.com login myself password secret

Providing -n, --netrc multiple times has no extra effect. Disable it again with --no-netrc.

Example:
curl --netrc https://example.com

See also --netrc-file, -K, --config and -u, --user. This option is mutually exclusive to --netrc-file and --netrc-optional.

-:, --next

Tells curl to use a separate operation for the following URL and associated options. This allows you to send several URL requests, each with their own specific options, for example different user names or custom requests for each.

-:, --next will reset all local options and only global ones will have their values survive over to the operation following the -:, --next instruction. Global options include -v, --verbose, --trace, --trace-ascii and --fail-early.

For example, you can do both a GET and a POST in a single command line:

curl www1.example.com --next -d postthis www2.example.com

-:, --next can be used several times in a command line

Examples:
curl https://example.com --next -d postthis www2.example.com
curl -I https://example.com --next https://example.net/

See also -Z, --parallel and -K, --config. Added in 7.36.0.

--no-alpn

(HTTPS) Disable the ALPN TLS extension. ALPN is enabled by default if libcurl was built with an SSL library that supports ALPN. ALPN is used by a libcurl that supports HTTP/2 to negotiate HTTP/2 support with the server during https sessions.

Providing --no-alpn multiple times has no extra effect. Disable it again with --alpn.

Example:
curl --no-alpn https://example.com

See also --no-npn and --http2. --no-alpn requires that the underlying libcurl was built to support TLS. Added in 7.36.0.

-N, --no-buffer

Disables the buffering of the output stream. In normal work situations, curl will use a standard buffered output stream that will have the effect that it will output the data in chunks, not necessarily exactly when the data arrives. Using this option will disable that buffering.

Providing -N, --no-buffer multiple times has no extra effect. Disable it again with --buffer.

Example:
curl --no-buffer https://example.com

See also -#, --progress-bar.

--no-clobber

When used in conjunction with the -o, --output, -J, --remote-header-name, -O, --remote-name, or --remote-name-all options, curl avoids overwriting files that already exist. Instead, a dot and a number get appended to the name of the file that would be created, up to filename.100 after which it will not create any file.
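For instance (file names illustrative), if index.html already exists locally, a command like the following stores the new copy as index.html.1:

curl --no-clobber -O https://example.com/index.html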
--no-keepalive Disables the use of keepalive messages on the TCP connection. curl otherwise enables them by default. Note that this is the negated option name documented. You can thus use --keepalive to enforce keepalive. Providing --no-keepalive multiple times has no extra effect. Disable it again with --keepalive. Example: curl --no-keepalive https://example.com See also --keepalive-time.
--no-npn (HTTPS) In curl 7.86.0 and later, curl never uses NPN. Disable the NPN TLS extension. NPN is enabled by default if libcurl was built with an SSL library that supports NPN. NPN is used by a libcurl that supports HTTP/2 to negotiate HTTP/2 support with the server during https sessions. Providing --no-npn multiple times has no extra effect. Disable it again with --npn. Example: curl --no-npn https://example.com See also --no-alpn and --http2. --no-npn requires that the underlying libcurl was built to support TLS. Added in 7.36.0.
--no-progress-meter Option to switch off the progress meter output without muting or otherwise affecting warning and informational messages like -s, --silent does. Note that this is the negated option name documented. You can thus use --progress-meter to enable the progress meter again. Providing --no-progress-meter multiple times has no extra effect. Disable it again with --progress-meter. Example: curl --no-progress-meter -o store https://example.com See also -v, --verbose and -s, --silent. Added in 7.67.0.
--no-sessionid (TLS) Disable curl's use of SSL session-ID caching. By default all transfers are done using the cache. Note that while nothing should ever get hurt by attempting to reuse SSL session-IDs, there seem to be broken SSL implementations in the wild that may require you to disable this in order for you to succeed. Note that this is the negated option name documented. You can thus use --sessionid to enforce session-ID caching. Providing --no-sessionid multiple times has no extra effect. Disable it again with --sessionid. Example: curl --no-sessionid https://example.com See also -k, --insecure.
--noproxy <no-proxy-list> Comma-separated list of hosts for which not to use a proxy, if one is specified. The only wildcard is a single * character, which matches all hosts, and effectively disables the proxy. Each name in this list is matched as either a domain which contains the hostname, or the hostname itself. For example, local.com would match local.com, local.com:80, and www.local.com, but not www.notlocal.com. Since 7.53.0, this option overrides the environment variables that disable the proxy ('no_proxy' and 'NO_PROXY'). If there's an environment variable disabling a proxy, you can set the noproxy list to "" to override it. Since 7.86.0, IP addresses specified to this option can be provided using CIDR notation: an appended slash and number specifies the number of "network bits" out of the address to use in the comparison. For example "192.168.0.0/16" would match all addresses starting with "192.168". If --noproxy is provided several times, the last set value will be used. Example: curl --noproxy "www.example" https://example.com See also -x, --proxy.
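A sketch of the CIDR form described above (proxy and addresses hypothetical; requires curl 7.86.0 or later): bypass the proxy for an entire private network while still proxying everything else:
 curl --noproxy 192.168.0.0/16 -x http://proxy.example https://192.168.1.5/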
--ntlm-wb (HTTP) Enables NTLM much in the style --ntlm does, but hands over the authentication to the separate ntlmauth binary application that is executed when needed. Providing --ntlm-wb multiple times has no extra effect. Example: curl --ntlm-wb -u user:password https://example.com See also --ntlm and --proxy-ntlm.
--ntlm (HTTP) Enables NTLM authentication. The NTLM authentication method was designed by Microsoft and is used by IIS web servers. It is a proprietary protocol, reverse-engineered by clever people and implemented in curl based on their efforts. This kind of behavior should not be endorsed; you should encourage everyone who uses NTLM to switch to a public and documented authentication method instead, such as Digest. If you want to enable NTLM for your proxy authentication, then use --proxy-ntlm. Providing --ntlm multiple times has no extra effect. Example: curl --ntlm -u user:password https://example.com See also --proxy-ntlm. --ntlm requires that the underlying libcurl was built to support TLS. This option is mutually exclusive to --basic and --negotiate and --digest and --anyauth.
--oauth2-bearer <token> (IMAP LDAP POP3 SMTP HTTP) Specify the Bearer Token for OAUTH 2.0 server authentication. The Bearer Token is used in conjunction with the user name which can be specified as part of the --url or -u, --user options. The Bearer Token and user name are formatted according to RFC 6750. If --oauth2-bearer is provided several times, the last set value will be used. Example: curl --oauth2-bearer "mF_9.B5f-4.1JqM" https://example.com See also --basic, --ntlm and --digest. Added in 7.33.0.
--output-dir <dir> This option specifies the directory in which files should be stored, when -O, --remote-name or -o, --output are used. The given output directory is used for all URLs and output options on the command line, up until the first -:, --next. If the specified target directory does not exist, the operation will fail unless --create-dirs is also used. If --output-dir is provided several times, the last set value will be used. Example: curl --output-dir "tmp" -O https://example.com See also -O, --remote-name and -J, --remote-header-name. Added in 7.73.0.
-o, --output <file> Write output to <file> instead of stdout. If you are using {} or [] to fetch multiple documents, you should quote the URL and you can use '#' followed by a number in the <file> specifier. That variable will be replaced with the current string for the URL being fetched. Like in:
 curl "http://{one,two}.example.com" -o "file_#1.txt"
or use several variables like:
 curl "http://{site,host}.host[1-5].com" -o "#1_#2"
You may use this option as many times as the number of URLs you have. For example, if you specify two URLs on the same command line, you can use it like this:
 curl -o aa example.com -o bb example.net
and the order of the -o options and the URLs does not matter, just that the first -o is for the first URL and so on, so the above command line can also be written as
 curl example.com example.net -o aa -o bb
See also the --create-dirs option to create the local directories dynamically. Specifying the output as '-' (a single dash) will force the output to be done to stdout.
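As a sketch of combining -o with --create-dirs (paths hypothetical), curl creates the missing local directories before writing the file:
 curl --create-dirs -o local/dir/file https://example.com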
To suppress response bodies, you can redirect output to /dev/null:
 curl example.com -o /dev/null
Or for Windows, use nul:
 curl example.com -o nul
-o, --output can be used several times in a command line Examples: curl -o file https://example.com curl "http://{one,two}.example.com" -o "file_#1.txt" curl "http://{site,host}.host[1-5].com" -o "#1_#2" curl -o file https://example.com -o file2 https://example.net See also -O, --remote-name, --remote-name-all and -J, --remote-header-name.
--parallel-immediate When doing parallel transfers, this option instructs curl to prefer opening up more connections in parallel at once rather than waiting to see if new transfers can be added as multiplexed streams on another connection. This option is global and does not need to be specified for each use of -:, --next. Providing --parallel-immediate multiple times has no extra effect. Disable it again with --no-parallel-immediate. Example: curl --parallel-immediate -Z https://example.com -o file1 https://example.com -o file2 See also -Z, --parallel and --parallel-max. Added in 7.68.0.
--parallel-max <num> When asked to do parallel transfers, using -Z, --parallel, this option controls the maximum number of transfers to do simultaneously. This option is global and does not need to be specified for each use of -:, --next. The default is 50. If --parallel-max is provided several times, the last set value will be used. Example: curl --parallel-max 100 -Z https://example.com ftp://example.com/ See also -Z, --parallel. Added in 7.66.0.
-Z, --parallel Makes curl perform its transfers in parallel as compared to the regular serial manner. This option is global and does not need to be specified for each use of -:, --next. Providing -Z, --parallel multiple times has no extra effect. Disable it again with --no-parallel. Example: curl --parallel https://example.com -o file1 https://example.com -o file2 See also -:, --next and -v, --verbose. Added in 7.66.0.
--pass <phrase> (SSH TLS) Passphrase for the private key. If --pass is provided several times, the last set value will be used. Example: curl --pass secret --key file https://example.com See also --key and -u, --user.
--path-as-is Tell curl to not handle sequences of /../ or /./ in the given URL path. Normally curl will squash or merge them according to standards but with this option set you tell it not to do that. Providing --path-as-is multiple times has no extra effect. Disable it again with --no-path-as-is. Example: curl --path-as-is https://example.com/../../etc/passwd See also --request-target. Added in 7.42.0.
--pinnedpubkey <hashes> (TLS) Tells curl to use the specified public key file (or hashes) to verify the peer. This can be a path to a file which contains a single public key in PEM or DER format, or any number of base64 encoded sha256 hashes preceded by 'sha256//' and separated by ';'. When negotiating a TLS or SSL connection, the server sends a certificate indicating its identity. A public key is extracted from this certificate and if it does not exactly match the public key provided to this option, curl will abort the connection before sending or receiving any data.
 PEM/DER support:
  7.39.0: OpenSSL, GnuTLS and GSKit
  7.43.0: NSS and wolfSSL
  7.47.0: mbedtls
 sha256 support:
  7.44.0: OpenSSL, GnuTLS, NSS and wolfSSL
  7.47.0: mbedtls
 Other SSL backends not supported.
If --pinnedpubkey is provided several times, the last set value will be used. Examples: curl --pinnedpubkey keyfile https://example.com curl --pinnedpubkey 'sha256//ce118b51897f4452dc' https://example.com See also --hostpubsha256. Added in 7.39.0.
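A hedged sketch of producing a 'sha256//' pin from a PEM certificate using OpenSSL (file name hypothetical; this mirrors the commonly documented recipe and is not a curl feature itself):
 openssl x509 -in cert.pem -pubkey -noout | openssl pkey -pubin -outform der | openssl dgst -sha256 -binary | base64
The resulting base64 string is what you prefix with 'sha256//' and pass to --pinnedpubkey.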
--post301 (HTTP) Tells curl to respect RFC 7231/6.4.2 and not convert POST requests into GET requests when following a 301 redirection. The non-RFC behavior is ubiquitous in web browsers, so curl does the conversion by default to maintain consistency. However, a server may require a POST to remain a POST after such a redirection. This option is meaningful only when using -L, --location. Providing --post301 multiple times has no extra effect. Disable it again with --no-post301. Example: curl --post301 --location -d "data" https://example.com See also --post302, --post303 and -L, --location.
--post302 (HTTP) Tells curl to respect RFC 7231/6.4.3 and not convert POST requests into GET requests when following a 302 redirection. The non-RFC behavior is ubiquitous in web browsers, so curl does the conversion by default to maintain consistency. However, a server may require a POST to remain a POST after such a redirection. This option is meaningful only when using -L, --location. Providing --post302 multiple times has no extra effect. Disable it again with --no-post302. Example: curl --post302 --location -d "data" https://example.com See also --post301, --post303 and -L, --location.
--post303 (HTTP) Tells curl to violate RFC 7231/6.4.4 and not convert POST requests into GET requests when following 303 redirections. A server may require a POST to remain a POST after a 303 redirection. This option is meaningful only when using -L, --location. Providing --post303 multiple times has no extra effect. Disable it again with --no-post303. Example: curl --post303 --location -d "data" https://example.com See also --post302, --post301 and -L, --location.
--preproxy [protocol://]host[:port] Use the specified SOCKS proxy before connecting to an HTTP or HTTPS proxy given with -x, --proxy. In such a case curl first connects to the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS proxy. Hence pre proxy. The pre proxy string should be specified with a protocol:// prefix to specify alternative proxy protocols. Use socks4://, socks4a://, socks5:// or socks5h:// to request the specific SOCKS version to be used. If no protocol is specified, curl defaults to SOCKS4. If the port number is not specified in the proxy string, it is assumed to be 1080. User and password that might be provided in the proxy string are URL decoded by curl. This allows you to pass in special characters such as @ by using %40 or pass in a colon with %3a. If --preproxy is provided several times, the last set value will be used. Example: curl --preproxy socks5://proxy.example -x http://http.example https://example.com See also -x, --proxy and --socks5. Added in 7.52.0.
-#, --progress-bar Make curl display transfer progress as a simple progress bar instead of the standard, more informational, meter. This progress bar draws a single line of '#' characters across the screen and shows a percentage if the transfer size is known. For transfers without a known size, there will be a space ship (-=o=-) that moves back and forth but only while data is being transferred, with a set of flying hash sign symbols on top. This option is global and does not need to be specified for each use of -:, --next. Providing -#, --progress-bar multiple times has no extra effect. Disable it again with --no-progress-bar. Example: curl -# -O https://example.com See also --styled-output.
--proto-default <protocol> Tells curl to use the specified protocol for any URL missing a scheme name. An unknown or unsupported protocol causes error CURLE_UNSUPPORTED_PROTOCOL (1). This option does not change the default proxy protocol (http). Without this option set, curl guesses protocol based on the host name, see --url for details. If --proto-default is provided several times, the last set value will be used. Example: curl --proto-default https ftp.example.com See also --proto and --proto-redir. Added in 7.45.0.
--proto-redir <protocols> Tells curl to limit what protocols it may use on redirect. Protocols denied by --proto are not overridden by this option. See --proto for how protocols are represented. Example, allow only HTTP and HTTPS on redirect:
 curl --proto-redir -all,http,https http://example.com
By default curl will only allow HTTP, HTTPS, FTP and FTPS on redirect (since 7.65.2). Specifying all or +all enables all protocols on redirects, which is not good for security. If --proto-redir is provided several times, the last set value will be used. Example: curl --proto-redir =http,https https://example.com See also --proto.
--proto <protocols> Tells curl to limit what protocols it may use for transfers. Protocols are evaluated left to right, are comma separated, and are each a protocol name or 'all', optionally prefixed by zero or more modifiers. Available modifiers are:
 + Permit this protocol in addition to protocols already permitted (this is the default if no modifier is used).
 - Deny this protocol, removing it from the list of protocols already permitted.
 = Permit only this protocol (ignoring the list already permitted), though subject to later modification by subsequent entries in the comma separated list.
For example:
 --proto -ftps uses the default protocols, but disables ftps
 --proto -all,https,+http only enables http and https
 --proto =http,https also only enables http and https
Unknown and disabled protocols produce a warning. This allows scripts to safely rely on being able to disable potentially dangerous protocols, without relying upon support for that protocol being built into curl to avoid an error. This option can be used multiple times, in which case the effect is the same as concatenating the protocols into one instance of the option. Example: curl --proto =http,https,sftp https://example.com See also --proto-redir and --proto-default.
--proxy-anyauth Tells curl to pick a suitable authentication method when communicating with the given HTTP proxy. This might cause an extra request/response round-trip. Providing --proxy-anyauth multiple times has no extra effect. Example: curl --proxy-anyauth --proxy-user user:passwd -x proxy https://example.com See also -x, --proxy, --proxy-basic and --proxy-digest.
--proxy-basic Tells curl to use HTTP Basic authentication when communicating with the given proxy. Use --basic for enabling HTTP Basic with a remote host. Basic is the default authentication method curl uses with proxies. Providing --proxy-basic multiple times has no extra effect. Example: curl --proxy-basic --proxy-user user:passwd -x proxy https://example.com See also -x, --proxy, --proxy-anyauth and --proxy-digest.
--proxy-ca-native (TLS) Tells curl to use the CA store from the native operating system to verify the HTTPS proxy. By default, curl will otherwise use a CA store provided in a single file or directory, but when using this option it will interface the operating system's own vault.
This option only works for curl on Windows when built to use OpenSSL. When curl on Windows is built to use Schannel, this feature is implied and curl then only uses the native CA store. Providing --proxy-ca-native multiple times has no extra effect. Disable it again with --no-proxy-ca-native. Example: curl --proxy-ca-native -x https://proxy https://example.com See also --cacert, --capath and -k, --insecure. Added in 8.2.0.
--proxy-cacert <file> Same as --cacert but used in HTTPS proxy context. If --proxy-cacert is provided several times, the last set value will be used. Example: curl --proxy-cacert CA-file.txt -x https://proxy https://example.com See also --proxy-capath, --cacert, --capath and -x, --proxy. Added in 7.52.0.
--proxy-capath <dir> Same as --capath but used in HTTPS proxy context. If --proxy-capath is provided several times, the last set value will be used. Example: curl --proxy-capath /local/directory -x https://proxy https://example.com See also --proxy-cacert, -x, --proxy and --capath. Added in 7.52.0.
--proxy-cert-type <type> Same as --cert-type but used in HTTPS proxy context. If --proxy-cert-type is provided several times, the last set value will be used. Example: curl --proxy-cert-type PEM --proxy-cert file -x https://proxy https://example.com See also --proxy-cert. Added in 7.52.0.
--proxy-cert <cert[:passwd]> Same as -E, --cert but used in HTTPS proxy context. If --proxy-cert is provided several times, the last set value will be used. Example: curl --proxy-cert file -x https://proxy https://example.com See also --proxy-cert-type. Added in 7.52.0.
--proxy-ciphers <list> Same as --ciphers but used in HTTPS proxy context. If --proxy-ciphers is provided several times, the last set value will be used. Example: curl --proxy-ciphers ECDHE-ECDSA-AES256-CCM8 -x https://proxy https://example.com See also --ciphers, --curves and -x, --proxy. Added in 7.52.0.
--proxy-crlfile <file> Same as --crlfile but used in HTTPS proxy context. If --proxy-crlfile is provided several times, the last set value will be used. Example: curl --proxy-crlfile rejects.txt -x https://proxy https://example.com See also --crlfile and -x, --proxy. Added in 7.52.0.
--proxy-digest Tells curl to use HTTP Digest authentication when communicating with the given proxy. Use --digest for enabling HTTP Digest with a remote host. Providing --proxy-digest multiple times has no extra effect. Example: curl --proxy-digest --proxy-user user:passwd -x proxy https://example.com See also -x, --proxy, --proxy-anyauth and --proxy-basic.
--proxy-header <header/@file> (HTTP) Extra header to include in the request when sending HTTP to a proxy. You may specify any number of extra headers. This is the equivalent option to -H, --header but is for proxy communication only, like in CONNECT requests when you want a separate header sent to the proxy from what is sent to the actual remote host. curl will make sure that each header you add/replace is sent with the proper end-of-line marker. You should thus not add that as a part of the header content: do not add newlines or carriage returns, they will only mess things up for you. Headers specified with this option will not be included in requests that curl knows will not be sent to a proxy. Starting in 7.55.0, this option can take an argument in @filename style, which then adds a header for each line in the input file. Using @- will make curl read the header file from stdin. This option can be used multiple times to add/replace/remove multiple headers. --proxy-header can be used several times in a command line Examples: curl --proxy-header "X-First-Name: Joe" -x http://proxy https://example.com curl --proxy-header "User-Agent: surprise" -x http://proxy https://example.com curl --proxy-header "Host:" -x http://proxy https://example.com See also -x, --proxy. Added in 7.37.0.
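A sketch of the @filename form (file name hypothetical): each line of the file becomes one header sent to the proxy:
 curl --proxy-header @proxy-headers.txt -x http://proxy https://example.com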
--proxy-http2 (HTTP) Tells curl to try to negotiate HTTP version 2 with an HTTPS proxy. The proxy might still only offer HTTP/1 and then curl will stick to using that version. This has no effect for any other kinds of proxies. Providing --proxy-http2 multiple times has no extra effect. Disable it again with --no-proxy-http2. Example: curl --proxy-http2 -x proxy https://example.com See also -x, --proxy. --proxy-http2 requires that the underlying libcurl was built to support HTTP/2. Added in 8.1.0.
--proxy-insecure Same as -k, --insecure but used in HTTPS proxy context. Providing --proxy-insecure multiple times has no extra effect. Disable it again with --no-proxy-insecure. Example: curl --proxy-insecure -x https://proxy https://example.com See also -x, --proxy and -k, --insecure. Added in 7.52.0.
--proxy-key-type <type> Same as --key-type but used in HTTPS proxy context. If --proxy-key-type is provided several times, the last set value will be used. Example: curl --proxy-key-type DER --proxy-key here -x https://proxy https://example.com See also --proxy-key and -x, --proxy. Added in 7.52.0.
--proxy-key <key> Same as --key but used in HTTPS proxy context. If --proxy-key is provided several times, the last set value will be used. Example: curl --proxy-key here -x https://proxy https://example.com See also --proxy-key-type and -x, --proxy. Added in 7.52.0.
--proxy-negotiate Tells curl to use HTTP Negotiate (SPNEGO) authentication when communicating with the given proxy. Use --negotiate for enabling HTTP Negotiate (SPNEGO) with a remote host. Providing --proxy-negotiate multiple times has no extra effect. Example: curl --proxy-negotiate --proxy-user user:passwd -x proxy https://example.com See also --proxy-anyauth and --proxy-basic.
--proxy-ntlm Tells curl to use HTTP NTLM authentication when communicating with the given proxy. Use --ntlm for enabling NTLM with a remote host. Providing --proxy-ntlm multiple times has no extra effect. Example: curl --proxy-ntlm --proxy-user user:passwd -x http://proxy https://example.com See also --proxy-negotiate and --proxy-anyauth.
--proxy-pass <phrase> Same as --pass but used in HTTPS proxy context. If --proxy-pass is provided several times, the last set value will be used. Example: curl --proxy-pass secret --proxy-key here -x https://proxy https://example.com See also -x, --proxy and --proxy-key. Added in 7.52.0.
--proxy-pinnedpubkey <hashes> (TLS) Tells curl to use the specified public key file (or hashes) to verify the proxy. This can be a path to a file which contains a single public key in PEM or DER format, or any number of base64 encoded sha256 hashes preceded by 'sha256//' and separated by ';'. When negotiating a TLS or SSL connection, the server sends a certificate indicating its identity. A public key is extracted from this certificate and if it does not exactly match the public key provided to this option, curl will abort the connection before sending or receiving any data. If --proxy-pinnedpubkey is provided several times, the last set value will be used.
Examples: curl --proxy-pinnedpubkey keyfile https://example.com curl --proxy-pinnedpubkey 'sha256//ce118b51897f4452dc' https://example.com See also --pinnedpubkey and -x, --proxy. Added in 7.59.0.
--proxy-service-name <name> This option allows you to change the service name for proxy negotiation. If --proxy-service-name is provided several times, the last set value will be used. Example: curl --proxy-service-name "shrubbery" -x proxy https://example.com See also --service-name and -x, --proxy. Added in 7.43.0.
--proxy-ssl-allow-beast Same as --ssl-allow-beast but used in HTTPS proxy context. Providing --proxy-ssl-allow-beast multiple times has no extra effect. Disable it again with --no-proxy-ssl-allow-beast. Example: curl --proxy-ssl-allow-beast -x https://proxy https://example.com See also --ssl-allow-beast and -x, --proxy. Added in 7.52.0.
--proxy-ssl-auto-client-cert Same as --ssl-auto-client-cert but used in HTTPS proxy context. Providing --proxy-ssl-auto-client-cert multiple times has no extra effect. Disable it again with --no-proxy-ssl-auto-client-cert. Example: curl --proxy-ssl-auto-client-cert -x https://proxy https://example.com See also --ssl-auto-client-cert and -x, --proxy. Added in 7.77.0.
--proxy-tls13-ciphers <ciphersuite list> (TLS) Specifies which cipher suites to use in the connection to your HTTPS proxy when it negotiates TLS 1.3. The list of cipher suites must specify valid ciphers. Read up on TLS 1.3 cipher suite details on this URL: https://curl.se/docs/ssl-ciphers.html This option is currently used only when curl is built to use OpenSSL 1.1.1 or later. If you are using a different SSL backend you can try setting TLS 1.3 cipher suites by using the --proxy-ciphers option. If --proxy-tls13-ciphers is provided several times, the last set value will be used. Example: curl --proxy-tls13-ciphers TLS_AES_128_GCM_SHA256 -x proxy https://example.com See also --tls13-ciphers and --curves. Added in 7.61.0.
--proxy-tlsauthtype <type> Same as --tlsauthtype but used in HTTPS proxy context. If --proxy-tlsauthtype is provided several times, the last set value will be used. Example: curl --proxy-tlsauthtype SRP -x https://proxy https://example.com See also -x, --proxy and --proxy-tlsuser. Added in 7.52.0.
--proxy-tlspassword <string> Same as --tlspassword but used in HTTPS proxy context. If --proxy-tlspassword is provided several times, the last set value will be used. Example: curl --proxy-tlspassword passwd -x https://proxy https://example.com See also -x, --proxy and --proxy-tlsuser. Added in 7.52.0.
--proxy-tlsuser <name> Same as --tlsuser but used in HTTPS proxy context. If --proxy-tlsuser is provided several times, the last set value will be used. Example: curl --proxy-tlsuser smith -x https://proxy https://example.com See also -x, --proxy and --proxy-tlspassword. Added in 7.52.0.
--proxy-tlsv1 Same as -1, --tlsv1 but used in HTTPS proxy context. Providing --proxy-tlsv1 multiple times has no extra effect. Example: curl --proxy-tlsv1 -x https://proxy https://example.com See also -x, --proxy. Added in 7.52.0.
-U, --proxy-user <user:password> Specify the user name and password to use for proxy authentication. If you use a Windows SSPI-enabled curl binary and do either Negotiate or NTLM authentication then you can tell curl to select the user name and password from your environment by specifying a single colon with this option: "-U :". On systems where it works, curl will hide the given option argument from process listings.
This is not enough to protect credentials from possibly getting seen by other users on the same system, as they will still be visible for a moment before being cleared. Such sensitive data should be retrieved from a file or similar instead and never used in clear text on a command line. If -U, --proxy-user is provided several times, the last set value will be used. Example: curl --proxy-user name:pwd -x proxy https://example.com See also --proxy-pass.
-x, --proxy [protocol://]host[:port] Use the specified proxy. The proxy string can be specified with a protocol:// prefix. No protocol specified or http:// will be treated as HTTP proxy. Use socks4://, socks4a://, socks5:// or socks5h:// to request a specific SOCKS version to be used. Unix domain sockets are supported for SOCKS proxies; set localhost for the host part, e.g. socks5h://localhost/path/to/socket.sock HTTPS proxy support via https:// protocol prefix was added in 7.52.0 for OpenSSL, GnuTLS and NSS. Since 7.87.0, it also works for BearSSL, mbedTLS, rustls, Schannel, Secure Transport and wolfSSL. Unrecognized and unsupported proxy protocols cause an error since 7.52.0. Prior versions may ignore the protocol and use http:// instead. If the port number is not specified in the proxy string, it is assumed to be 1080. This option overrides existing environment variables that set the proxy to use. If there's an environment variable setting a proxy, you can set proxy to "" to override it. All operations that are performed over an HTTP proxy will transparently be converted to HTTP. It means that certain protocol specific operations might not be available. This is not the case if you can tunnel through the proxy, as done with the -p, --proxytunnel option. User and password that might be provided in the proxy string are URL decoded by curl. This allows you to pass in special characters such as @ by using %40 or pass in a colon with %3a. The proxy host can be specified the same way as the proxy environment variables, including the protocol prefix (http://) and the embedded user + password. When a proxy is used, the active FTP mode as set with -P, --ftp-port, cannot be used. If -x, --proxy is provided several times, the last set value will be used. Example: curl --proxy http://proxy.example https://example.com See also --socks5 and --proxy-basic.
--proxy1.0 <host[:port]> Use the specified HTTP 1.0 proxy. If the port number is not specified, it is assumed to be 1080. The only difference between this and the HTTP proxy option -x, --proxy, is that attempts to use CONNECT through the proxy will specify an HTTP 1.0 protocol instead of the default HTTP 1.1. Providing --proxy1.0 multiple times has no extra effect. Example: curl --proxy1.0 -x http://proxy https://example.com See also -x, --proxy, --socks5 and --preproxy.
-p, --proxytunnel When an HTTP proxy is used -x, --proxy, this option will make curl tunnel through the proxy. The tunnel approach is made with the HTTP proxy CONNECT request and requires that the proxy allows direct connect to the remote port number curl wants to tunnel through to. To suppress proxy CONNECT response headers when curl is set to output headers use --suppress-connect-headers. Providing -p, --proxytunnel multiple times has no extra effect. Disable it again with --no-proxytunnel. Example: curl --proxytunnel -x http://proxy https://example.com See also -x, --proxy.
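A hedged sketch of the tunnel approach (host names and port hypothetical): without -p, --proxytunnel the FTP transfer would be converted to HTTP by the proxy, whereas with it curl speaks FTP through the CONNECT tunnel:
 curl -p -x http://proxy.example:3128 ftp://ftp.example.com/file.txt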
--pubkey <key> (SFTP SCP) Public key file name. Allows you to provide your public key in this separate file. (As of 7.39.0, curl attempts to automatically extract the public key from the private key file, so passing this option is generally not required. Note that this public key extraction requires libcurl to be linked against a copy of libssh2 1.2.8 or higher that is itself linked against OpenSSL.) If --pubkey is provided several times, the last set value will be used. Example: curl --pubkey file.pub sftp://example.com/ See also --pass.
-Q, --quote <command> (FTP SFTP) Send an arbitrary command to the remote FTP or SFTP server. Quote commands are sent BEFORE the transfer takes place (just after the initial PWD command in an FTP transfer, to be exact). To make commands take place after a successful transfer, prefix them with a dash '-'. (FTP only) To make commands be sent after curl has changed the working directory, just before the file transfer command(s), prefix the command with a '+'. This is not performed when a directory listing is performed. You may specify any number of commands. By default curl will stop at first failure. To make curl continue even if the command fails, prefix the command with an asterisk (*). Otherwise, if the server returns failure for one of the commands, the entire operation will be aborted. You must send syntactically correct FTP commands as RFC 959 defines to FTP servers, or one of the commands listed below to SFTP servers. SFTP is a binary protocol. Unlike for FTP, curl interprets SFTP quote commands itself before sending them to the server. File names may be quoted shell-style to embed spaces or special characters. Following is the list of all supported SFTP quote commands:
 atime date file
  The atime command sets the last access time of the file named by the file operand. The <date expression> can be all sorts of date strings, see the curl_getdate(3) man page for date expression details. (Added in 7.73.0)
 chgrp group file
  The chgrp command sets the group ID of the file named by the file operand to the group ID specified by the group operand. The group operand is a decimal integer group ID.
 chmod mode file
  The chmod command modifies the file mode bits of the specified file. The mode operand is an octal integer mode number.
 chown user file
  The chown command sets the owner of the file named by the file operand to the user ID specified by the user operand. The user operand is a decimal integer user ID.
 ln source_file target_file
  The ln and symlink commands create a symbolic link at the target_file location pointing to the source_file location.
 mkdir directory_name
  The mkdir command creates the directory named by the directory_name operand.
 mtime date file
  The mtime command sets the last modification time of the file named by the file operand. The <date expression> can be all sorts of date strings, see the curl_getdate(3) man page for date expression details. (Added in 7.73.0)
 pwd
  The pwd command returns the absolute pathname of the current working directory.
 rename source target
  The rename command renames the file or directory named by the source operand to the destination path named by the target operand.
 rm file
  The rm command removes the file specified by the file operand.
 rmdir directory
  The rmdir command removes the directory entry specified by the directory operand, provided it is empty.
 symlink source_file target_file
  See ln.
-Q, --quote can be used several times in a command line Example: curl --quote "DELE file" ftp://example.com/foo See also -X, --request.
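A sketch of the dash prefix described above (paths hypothetical): upload a file over SFTP and rename it only after the transfer has succeeded:
 curl -T local.txt -Q "-rename /upload/local.txt /upload/local.dat" sftp://example.com/upload/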
--random-file <file> Deprecated option. This option is ignored by curl since 7.84.0. Prior to that it only had an effect on curl if built to use old versions of OpenSSL. Specify the path name to a file containing what will be considered as random data. The data may be used to seed the random engine for SSL connections. If --random-file is provided several times, the last set value will be used. Example: curl --random-file rubbish https://example.com See also --egd-file.
-r, --range <range> (HTTP FTP SFTP FILE) Retrieve a byte range (i.e. a partial document) from an HTTP/1.1, FTP or SFTP server or a local FILE. Ranges can be specified in a number of ways.
 0-499 specifies the first 500 bytes
 500-999 specifies the second 500 bytes
 -500 specifies the last 500 bytes
 9500- specifies the bytes from offset 9500 and forward
 0-0,-1 specifies the first and last byte only(*)(HTTP)
 100-199,500-599 specifies two separate 100-byte ranges(*) (HTTP)
(*) = NOTE that this will cause the server to reply with a multipart response, which will be returned as-is by curl! Parsing or otherwise transforming this response is the responsibility of the caller. Only digit characters (0-9) are valid in the 'start' and 'stop' fields of the 'start-stop' range syntax. If a non-digit character is given in the range, the server's response will be unspecified, depending on the server's configuration. You should also be aware that many HTTP/1.1 servers do not have this feature enabled, so that when you attempt to get a range, you will instead get the whole document. FTP and SFTP range downloads only support the simple 'start-stop' syntax (optionally with one of the numbers omitted). FTP use depends on the extended FTP command SIZE. If -r, --range is provided several times, the last set value will be used. Example: curl --range 22-44 https://example.com See also -C, --continue-at and -a, --append.
--rate <max request rate> Specify the maximum transfer frequency you allow curl to use - in number of transfer starts per time unit (sometimes called request rate). Without this option, curl will start the next transfer as fast as possible. If given several URLs and a transfer completes faster than the allowed rate, curl will wait before starting the next transfer in order to maintain the requested rate. This option has no effect when -Z, --parallel is used. The request rate is provided as "N/U" where N is an integer number and U is a time unit. Supported units are 's' (second), 'm' (minute), 'h' (hour) and 'd' (day, as in a 24 hour unit). The default time unit, if no "/U" is provided, is number of transfers per hour. If curl is told to allow 10 requests per minute, it will not start the next request until 6 seconds have elapsed since the previous transfer was started. This function uses millisecond resolution. If the allowed frequency is set to more than 1000 per second, it will instead run unrestricted. When retrying transfers, enabled with --retry, the separate retry delay logic is used and not this setting. This option is global and does not need to be specified for each use of -:, --next. If --rate is provided several times, the last set value will be used. Examples: curl --rate 2/s https://example.com ... curl --rate 3/h https://example.com ... curl --rate 14/m https://example.com ... See also --limit-rate and --retry-delay. Added in 7.84.0.
--raw (HTTP) When used, it disables all internal HTTP decoding of content or transfer encodings and instead passes them on unaltered, raw. Providing --raw multiple times has no extra effect. Disable it again with --no-raw.
Example: curl --raw https://example.com See also --tr-encoding.
-e, --referer <URL> (HTTP) Sends the "Referrer Page" information to the HTTP server. This can also be set with the -H, --header flag of course. When used with -L, --location you can append ";auto" to the -e, --referer URL to make curl automatically set the previous URL when it follows a Location: header. The ";auto" string can be used alone, even if you do not set an initial -e, --referer. If -e, --referer is provided several times, the last set value will be used. Examples: curl --referer "https://fake.example" https://example.com curl --referer "https://fake.example;auto" -L https://example.com curl --referer ";auto" -L https://example.com See also -A, --user-agent and -H, --header.
-J, --remote-header-name (HTTP) This option tells the -O, --remote-name option to use the server-specified Content-Disposition filename instead of extracting a filename from the URL. If the server-provided file name contains a path, that will be stripped off before the file name is used. The file is saved in the current directory, or in the directory specified with --output-dir. If the server specifies a file name and a file with that name already exists in the destination directory, it will not be overwritten and an error will occur - unless you allow it by using the --clobber option. If the server does not specify a file name then this option has no effect. There's no attempt to decode %-sequences (yet) in the provided file name, so this option may provide you with rather unexpected file names. This feature uses the name from the "filename" field, it does not yet support the "filename*" field (filenames with explicit character sets). WARNING: Exercise judicious use of this option, especially on Windows. A rogue server could send you the name of a DLL or other file that could be loaded automatically by Windows or some third party software. Providing -J, --remote-header-name multiple times has no extra effect. Disable it again with --no-remote-header-name. Example: curl -OJ https://example.com/file See also -O, --remote-name.
--remote-name-all This option changes the default action for all given URLs to be dealt with as if -O, --remote-name were used for each one. So if you want to disable that for a specific URL after --remote-name-all has been used, you must use "-o -" or --no-remote-name. Providing --remote-name-all multiple times has no extra effect. Disable it again with --no-remote-name-all. Example: curl --remote-name-all ftp://example.com/file1 ftp://example.com/file2 See also -O, --remote-name.
-O, --remote-name Write output to a local file named like the remote file we get. (Only the file part of the remote file is used, the path is cut off.) The file will be saved in the current working directory. If you want the file saved in a different directory, make sure you change the current working directory before invoking curl with this option or use --output-dir. The remote file name to use for saving is extracted from the given URL, nothing else, and if it already exists it will be overwritten. If you want the server to be able to choose the file name refer to -J, --remote-header-name which can be used in addition to this option. If the server chooses a file name and that name already exists it will not be overwritten. There is no URL decoding done on the file name. If it has %20 or other URL encoded parts of the name, they will end up as-is as file name. You may use this option as many times as the number of URLs you have. -O, --remote-name can be used several times in a command line Example: curl -O https://example.com/filename See also --remote-name-all, --output-dir and -J, --remote-header-name.
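A hedged combination sketch (URL and directory hypothetical): save the download under the server-chosen Content-Disposition name, inside a chosen directory:
 curl -OJ --output-dir downloads https://example.com/file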
-R, --remote-time When used, this will make curl attempt to figure out the timestamp of the remote file, and if that is available make the local file get that same timestamp. Providing -R, --remote-time multiple times has no extra effect. Disable it again with --no-remote-time. Example: curl --remote-time -o foo https://example.com See also -O, --remote-name and -z, --time-cond.
--remove-on-error When curl returns an error when told to save output in a local file, this option removes that saved file before exiting. This prevents curl from leaving a partial file in the case of an error during transfer. If the output is not a file, this option has no effect. Providing --remove-on-error multiple times has no extra effect. Disable it again with --no-remove-on-error. Example: curl --remove-on-error -o output https://example.com See also -f, --fail. Added in 7.83.0.
--request-target <path> (HTTP) Tells curl to use an alternative "target" (path) instead of using the path as provided in the URL. Particularly useful when wanting to issue HTTP requests without leading slash or other data that does not follow the regular URL pattern, like "OPTIONS *". If --request-target is provided several times, the last set value will be used. Example: curl --request-target "*" -X OPTIONS https://example.com See also -X, --request. Added in 7.55.0.
-X, --request <method> (HTTP) Specifies a custom request method to use when communicating with the HTTP server. The specified request method will be used instead of the method otherwise used (which defaults to GET). Read the HTTP 1.1 specification for details and explanations. Common additional HTTP requests include PUT and DELETE, but related technologies like WebDAV offer PROPFIND, COPY, MOVE and more. Normally you do not need this option. All sorts of GET, HEAD, POST and PUT requests are rather invoked by using dedicated command line options. This option only changes the actual word used in the HTTP request, it does not alter the way curl behaves. So for example if you want to make a proper HEAD request, using -X HEAD will not suffice. You need to use the -I, --head option. The method string you set with -X, --request will be used for all requests, which, if you for example use -L, --location, may cause unintended side-effects when curl does not change request method according to the HTTP 30x response codes - and similar. (FTP) Specifies a custom FTP command to use instead of LIST when doing file lists with FTP. (POP3) Specifies a custom POP3 command to use instead of LIST or RETR. (IMAP) Specifies a custom IMAP command to use instead of LIST. (Added in 7.30.0) (SMTP) Specifies a custom SMTP command to use instead of HELP or VRFY. (Added in 7.34.0) If -X, --request is provided several times, the last set value will be used. Examples: curl -X "DELETE" https://example.com curl -X NLST ftp://example.com/ See also --request-target.
--resolve <[+]host:port:addr[,addr]...> Provide a custom address for a specific host and port pair. Using this, you can make the curl request(s) use a specified address and prevent the otherwise normally resolved address from being used. Consider it a sort of /etc/hosts alternative provided on the command line. The port number should be the number used for the specific protocol the host will be used for. This means you need several entries if you want to provide addresses for the same host but different ports.
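As a sketch of the per-port rule just mentioned (addresses hypothetical), pinning the same name for both HTTP and HTTPS takes one entry per port:
 curl --resolve example.com:80:127.0.0.1 --resolve example.com:443:127.0.0.1 https://example.com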
By specifying '*' as host you can tell curl to resolve any host and specific port pair to the specified address. The wildcard is resolved last, so any --resolve with a specific host and port will be used first. The provided address set by this option will be used even if -4, --ipv4 or -6, --ipv6 is set to make curl use another IP version. By prefixing the host with a '+' you can make the entry time out after curl's default timeout (1 minute). Note that this will only make sense for long running parallel transfers with a lot of files. In such cases, if this option is used curl will try to resolve the host as it normally would once the timeout has expired. Support for providing the IP address within [brackets] was added in 7.57.0. Support for providing multiple IP addresses per entry was added in 7.59.0. Support for resolving with wildcard was added in 7.64.0. Support for the '+' prefix was added in 7.75.0. This option can be used many times to add many host names to resolve. --resolve can be used several times in a command line Example: curl --resolve example.com:443:127.0.0.1 https://example.com See also --connect-to and --alt-svc.
--retry-all-errors Retry on any error. This option is used together with --retry. This option is the "sledgehammer" of retrying. Do not use this option by default (e.g. in curlrc), there may be unintended consequences such as sending or receiving duplicate data. Do not use with redirected input or output. You'd be much better off handling your unique problems in a shell script. Please read the example below. WARNING: For server compatibility curl attempts to retry failed flaky transfers as close as possible to how they were started, but this is not possible with redirected input or output. For example, before retrying it removes output data from a failed partial transfer that was written to an output file. However this is not true of data redirected to a | pipe or > file, which are not reset. We strongly suggest you do not parse or record output via redirect in combination with this option, since you may receive duplicate data. By default curl will not error on an HTTP response code that indicates an HTTP error, if the transfer was successful. For example, if a server replies 404 Not Found and the reply is fully received then that is not an error. When --retry is used then curl will retry on some HTTP response codes that indicate transient HTTP errors, but that does not include most 4xx response codes such as 404. If you want to retry on all response codes that indicate HTTP errors (4xx and 5xx) then combine with -f, --fail. Providing --retry-all-errors multiple times has no extra effect. Disable it again with --no-retry-all-errors. Example: curl --retry 5 --retry-all-errors https://example.com See also --retry. Added in 7.71.0.
--retry-connrefused In addition to the other conditions, consider ECONNREFUSED as a transient error too for --retry. This option is used together with --retry. Providing --retry-connrefused multiple times has no extra effect. Disable it again with --no-retry-connrefused. Example: curl --retry-connrefused --retry 7 https://example.com See also --retry and --retry-all-errors. Added in 7.52.0.
--retry-delay <seconds> Make curl sleep this amount of time before each retry when a transfer has failed with a transient error (it changes the default backoff time algorithm between retries). This option is only interesting if --retry is also used.
Setting this delay to zero will make curl use the default backoff time. If --retry-delay is provided several times, the last set value will be used. Example: curl --retry-delay 5 --retry 7 https://example.com See also --retry.
--retry-max-time <seconds> The retry timer is reset before the first transfer attempt. Retries will be done as usual (see --retry) as long as the timer has not reached this given limit. Notice that if the timer has not reached the limit, the request will be made and while performing, it may take longer than this given time period. To limit a single request's maximum time, use -m, --max-time. Set this option to zero to not timeout retries. If --retry-max-time is provided several times, the last set value will be used. Example: curl --retry-max-time 30 --retry 10 https://example.com See also --retry.
--retry <num> If a transient error is returned when curl tries to perform a transfer, it will retry this number of times before giving up. Setting the number to 0 makes curl do no retries (which is the default). Transient error means either: a timeout, an FTP 4xx response code or an HTTP 408, 429, 500, 502, 503 or 504 response code. When curl is about to retry a transfer, it will first wait one second and then for all forthcoming retries it will double the waiting time until it reaches 10 minutes which then will be the delay between the rest of the retries. By using --retry-delay you disable this exponential backoff algorithm. See also --retry-max-time to limit the total time allowed for retries. Since curl 7.66.0, curl will comply with the Retry-After: response header if one is present, to know when to issue the next retry. If --retry is provided several times, the last set value will be used. Example: curl --retry 7 https://example.com See also --retry-max-time.
--sasl-authzid <identity> Use this authorization identity (authzid), during SASL PLAIN authentication, in addition to the authentication identity (authcid) as specified by -u, --user. If the option is not specified, the server will derive the authzid from the authcid, but if specified, and depending on the server implementation, it may be used to access another user's inbox, that the user has been granted access to, or a shared mailbox for example. If --sasl-authzid is provided several times, the last set value will be used. Example: curl --sasl-authzid zid imap://example.com/ See also --login-options. Added in 7.66.0.
--sasl-ir Enable initial response in SASL authentication. Providing --sasl-ir multiple times has no extra effect. Disable it again with --no-sasl-ir. Example: curl --sasl-ir imap://example.com/ See also --sasl-authzid. Added in 7.31.0.
--service-name <name> This option allows you to change the service name for SPNEGO. Examples: --negotiate --service-name sockd would use sockd/server-name. If --service-name is provided several times, the last set value will be used. Example: curl --service-name sockd/server https://example.com See also --negotiate and --proxy-service-name. Added in 7.43.0.
-S, --show-error When used with -s, --silent, it makes curl show an error message if it fails. This option is global and does not need to be specified for each use of -:, --next. Providing -S, --show-error multiple times has no extra effect. Disable it again with --no-show-error. Example: curl --show-error --silent https://example.com See also --no-progress-meter.
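A sketch of the common pairing of -S with -s in scripts (URL hypothetical): stay quiet while piping the body onward, but still report failures on stderr:
 curl -sS https://example.com/data.csv | wc -l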
-s, --silent Silent or quiet mode. Do not show progress meter or error messages. Makes curl mute. It will still output the data you ask for, potentially even to the terminal/stdout unless you redirect it. Use -S, --show-error in addition to this option to disable the progress meter but still show error messages. Providing -s, --silent multiple times has no extra effect. Disable it again with --no-silent. Example: curl -s https://example.com See also -v, --verbose, --stderr and --no-progress-meter.
--socks4 <host[:port]> Use the specified SOCKS4 proxy. If the port number is not specified, it is assumed to be 1080. Using this socket type makes curl resolve the host name and pass the address on to the proxy. To specify proxy on a unix domain socket, use localhost for host, e.g. socks4://localhost/path/to/socket.sock This option overrides any previous use of -x, --proxy, as they are mutually exclusive. This option is superfluous since you can specify a socks4 proxy with -x, --proxy using a socks4:// protocol prefix. Since 7.52.0, --preproxy can be used to specify a SOCKS proxy at the same time -x, --proxy is used with an HTTP/HTTPS proxy. In such a case curl first connects to the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS proxy. If --socks4 is provided several times, the last set value will be used. Example: curl --socks4 hostname:4096 https://example.com See also --socks4a, --socks5 and --socks5-hostname.
--socks4a <host[:port]> Use the specified SOCKS4a proxy. If the port number is not specified, it is assumed to be 1080. This asks the proxy to resolve the host name. To specify proxy on a unix domain socket, use localhost for host, e.g. socks4a://localhost/path/to/socket.sock This option overrides any previous use of -x, --proxy, as they are mutually exclusive. This option is superfluous since you can specify a socks4a proxy with -x, --proxy using a socks4a:// protocol prefix. Since 7.52.0, --preproxy can be used to specify a SOCKS proxy at the same time -x, --proxy is used with an HTTP/HTTPS proxy. In such a case curl first connects to the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS proxy. If --socks4a is provided several times, the last set value will be used. Example: curl --socks4a hostname:4096 https://example.com See also --socks4, --socks5 and --socks5-hostname.
--socks5-basic Tells curl to use username/password authentication when connecting to a SOCKS5 proxy. The username/password authentication is enabled by default. Use --socks5-gssapi to force GSS-API authentication to SOCKS5 proxies. Providing --socks5-basic multiple times has no extra effect. Example: curl --socks5-basic --socks5 hostname:4096 https://example.com See also --socks5. Added in 7.55.0.
--socks5-gssapi-nec As part of the GSS-API negotiation a protection mode is negotiated. RFC 1961 says in section 4.3/4.4 it should be protected, but the NEC reference implementation does not. The option --socks5-gssapi-nec allows the unprotected exchange of the protection mode negotiation. Providing --socks5-gssapi-nec multiple times has no extra effect. Disable it again with --no-socks5-gssapi-nec. Example: curl --socks5-gssapi-nec --socks5 hostname:4096 https://example.com See also --socks5.
--socks5-gssapi-service <name> The default service name for a socks server is rcmd/server-fqdn. This option allows you to change it. Examples: --socks5 proxy-name --socks5-gssapi-service sockd would use sockd/proxy-name --socks5 proxy-name --socks5-gssapi-service sockd/real-name would use sockd/real-name for cases where the proxy-name does not match the principal name.
If --socks5-gssapi-service is provided several times, the last set value will be used. Example: curl --socks5-gssapi-service sockd --socks5 hostname:4096 https://example.com See also --socks5.
--socks5-gssapi Tells curl to use GSS-API authentication when connecting to a SOCKS5 proxy. The GSS-API authentication is enabled by default (if curl is compiled with GSS-API support). Use --socks5-basic to force username/password authentication to SOCKS5 proxies. Providing --socks5-gssapi multiple times has no extra effect. Disable it again with --no-socks5-gssapi. Example: curl --socks5-gssapi --socks5 hostname:4096 https://example.com See also --socks5. Added in 7.55.0.
--socks5-hostname <host[:port]> Use the specified SOCKS5 proxy (and let the proxy resolve the host name). If the port number is not specified, it is assumed to be 1080. To specify proxy on a unix domain socket, use localhost for host, e.g. socks5h://localhost/path/to/socket.sock This option overrides any previous use of -x, --proxy, as they are mutually exclusive. This option is superfluous since you can specify a socks5 hostname proxy with -x, --proxy using a socks5h:// protocol prefix. Since 7.52.0, --preproxy can be used to specify a SOCKS proxy at the same time -x, --proxy is used with an HTTP/HTTPS proxy. In such a case curl first connects to the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS proxy. If --socks5-hostname is provided several times, the last set value will be used. Example: curl --socks5-hostname proxy.example:7000 https://example.com See also --socks5 and --socks4a.
--socks5 <host[:port]> Use the specified SOCKS5 proxy - but resolve the host name locally. If the port number is not specified, it is assumed to be 1080. To specify proxy on a unix domain socket, use localhost for host, e.g. socks5://localhost/path/to/socket.sock This option overrides any previous use of -x, --proxy, as they are mutually exclusive. This option is superfluous since you can specify a socks5 proxy with -x, --proxy using a socks5:// protocol prefix. Since 7.52.0, --preproxy can be used to specify a SOCKS proxy at the same time -x, --proxy is used with an HTTP/HTTPS proxy. In such a case curl first connects to the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS proxy. This option (as well as --socks4) does not work with IPv6, FTPS or LDAP. If --socks5 is provided several times, the last set value will be used. Example: curl --socks5 proxy.example:7000 https://example.com See also --socks5-hostname and --socks4a.
-Y, --speed-limit <speed> If a transfer is slower than this given speed (in bytes per second) for speed-time seconds it gets aborted. speed-time is set with -y, --speed-time and is 30 if not set. If -Y, --speed-limit is provided several times, the last set value will be used. Example: curl --speed-limit 300 --speed-time 10 https://example.com See also -y, --speed-time, --limit-rate and -m, --max-time.
-y, --speed-time <seconds> If a transfer runs slower than speed-limit bytes per second during a speed-time period, the transfer is aborted. If speed-time is used, the default speed-limit will be 1 unless set with -Y, --speed-limit. This option controls transfers (in both directions) but will not affect slow connects etc. If this is a concern for you, try the --connect-timeout option. If -y, --speed-time is provided several times, the last set value will be used. Example: curl --speed-limit 300 --speed-time 10 https://example.com See also -Y, --speed-limit and --limit-rate.
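A hedged stall-guard sketch (URL and numbers arbitrary): abort if the transfer rate stays below 1000 bytes per second for 20 seconds:
 curl --speed-limit 1000 --speed-time 20 -O https://example.com/big.iso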
--ssl-allow-beast This option tells curl to not work around a security flaw in the SSL3 and TLS1.0 protocols known as BEAST. If this option is not used, the SSL layer may use workarounds known to cause interoperability problems with some older SSL implementations. WARNING: this option loosens the SSL security, and by using this flag you ask for exactly that. Providing --ssl-allow-beast multiple times has no extra effect. Disable it again with --no-ssl-allow-beast. Example: curl --ssl-allow-beast https://example.com See also --proxy-ssl-allow-beast and -k, --insecure. --ssl-auto-client-cert Tell libcurl to automatically locate and use a client certificate for authentication when requested by the server. This option is only supported for Schannel (the native Windows SSL library). Prior to 7.77.0 this was the default behavior in libcurl with Schannel. Since the server can request any certificate that supports client authentication in the OS certificate store, it could be a privacy violation and unexpected. Providing --ssl-auto-client-cert multiple times has no extra effect. Disable it again with --no-ssl-auto-client-cert. Example: curl --ssl-auto-client-cert https://example.com See also --proxy-ssl-auto-client-cert. Added in 7.77.0. --ssl-no-revoke (Schannel) This option tells curl to disable certificate revocation checks. WARNING: this option loosens the SSL security, and by using this flag you ask for exactly that. Providing --ssl-no-revoke multiple times has no extra effect. Disable it again with --no-ssl-no-revoke. Example: curl --ssl-no-revoke https://example.com See also --crlfile. Added in 7.44.0. --ssl-reqd (FTP IMAP POP3 SMTP LDAP) Require SSL/TLS for the connection. Terminates the connection if the transfer cannot be upgraded to use SSL/TLS. This option is handled in LDAP since version 7.81.0. It is fully supported by the OpenLDAP backend and rejected by the generic ldap backend if explicit TLS is required. This option is unnecessary if you use a URL scheme that in itself implies immediate and implicit use of TLS, like for FTPS, IMAPS, POP3S, SMTPS and LDAPS. Such transfers will always fail if the TLS handshake does not work. This option was formerly known as --ftp-ssl-reqd. Providing --ssl-reqd multiple times has no extra effect. Disable it again with --no-ssl-reqd. Example: curl --ssl-reqd ftp://example.com See also --ssl and -k, --insecure. --ssl-revoke-best-effort (Schannel) This option tells curl to ignore certificate revocation checks when they fail due to missing or offline distribution points for the revocation check lists. Providing --ssl-revoke-best-effort multiple times has no extra effect. Disable it again with --no-ssl-revoke-best-effort. Example: curl --ssl-revoke-best-effort https://example.com See also --crlfile and -k, --insecure. Added in 7.70.0. --ssl (FTP IMAP POP3 SMTP LDAP) Warning: this is considered an insecure option. Consider using --ssl-reqd instead to be sure curl upgrades to a secure connection. Try to use SSL/TLS for the connection. Reverts to a non-secure connection if the server does not support SSL/TLS. See also --ftp-ssl-control and --ssl-reqd for different levels of encryption required. This option is handled in LDAP since version 7.81.0. It is fully supported by the OpenLDAP backend and ignored by the generic ldap backend. Please note that a server may close the connection if the negotiation does not succeed. This option was formerly known as --ftp-ssl. That option name can still be used but will be removed in a future version.
Providing --ssl multiple times has no extra effect. Disable it again with --no-ssl. Example: curl --ssl pop3://example.com/ See also --ssl-reqd, -k, --insecure and --ciphers. -2, --sslv2 (SSL) This option previously asked curl to use SSLv2, but starting in curl 7.77.0 this instruction is ignored. SSLv2 is widely considered insecure (see RFC 6176). Providing -2, --sslv2 multiple times has no extra effect. Example: curl --sslv2 https://example.com See also --http1.1 and --http2. -2, --sslv2 requires that the underlying libcurl was built to support TLS. This option is mutually exclusive to -3, --sslv3 and -1, --tlsv1 and --tlsv1.1 and --tlsv1.2. -3, --sslv3 (SSL) This option previously asked curl to use SSLv3, but starting in curl 7.77.0 this instruction is ignored. SSLv3 is widely considered insecure (see RFC 7568). Providing -3, --sslv3 multiple times has no extra effect. Example: curl --sslv3 https://example.com See also --http1.1 and --http2. -3, --sslv3 requires that the underlying libcurl was built to support TLS. This option is mutually exclusive to -2, --sslv2 and -1, --tlsv1 and --tlsv1.1 and --tlsv1.2. --stderr <file> Redirect all writes to stderr to the specified file instead. If the file name is a plain '-', it is instead written to stdout. This option is global and does not need to be specified for each use of --next. If --stderr is provided several times, the last set value will be used. Example: curl --stderr output.txt https://example.com See also -v, --verbose and -s, --silent. --styled-output Enables the automatic use of bold font styles when writing HTTP headers to the terminal. Use --no-styled-output to switch them off. Styled output requires a terminal that supports bold fonts. This feature is not present on curl for Windows due to the lack of this capability. This option is global and does not need to be specified for each use of --next. Providing --styled-output multiple times has no extra effect. Disable it again with --no-styled-output. Example: curl --styled-output -I https://example.com See also -I, --head and -v, --verbose. Added in 7.61.0. --suppress-connect-headers When -p, --proxytunnel is used and a CONNECT request is made, do not output proxy CONNECT response headers. This option is meant to be used with -D, --dump-header or -i, --include which are used to show protocol headers in the output. It has no effect on debug options such as -v, --verbose or --trace, or any statistics. Providing --suppress-connect-headers multiple times has no extra effect. Disable it again with --no-suppress-connect-headers. Example: curl --suppress-connect-headers --include -x proxy https://example.com See also -D, --dump-header, -i, --include and -p, --proxytunnel. Added in 7.54.0. --tcp-fastopen Enable use of TCP Fast Open (RFC 7413). Providing --tcp-fastopen multiple times has no extra effect. Disable it again with --no-tcp-fastopen. Example: curl --tcp-fastopen https://example.com See also --false-start. Added in 7.49.0. --tcp-nodelay Turn on the TCP_NODELAY option. See the curl_easy_setopt(3) man page for details about this option. Since 7.50.2, curl sets this option by default and you need to explicitly switch it off if you do not want it on. Providing --tcp-nodelay multiple times has no extra effect. Disable it again with --no-tcp-nodelay. Example: curl --tcp-nodelay https://example.com See also -N, --no-buffer. -t, --telnet-option <opt=val> Pass options to the telnet protocol. Supported options are: TTYPE=<term> Sets the terminal type.
XDISPLOC=<X display> Sets the X display location. NEW_ENV=<var,val> Sets an environment variable. -t, --telnet-option can be used several times in a command line. Example: curl -t TTYPE=vt100 telnet://example.com/ See also -K, --config. --tftp-blksize <value> (TFTP) Set the TFTP BLKSIZE option (must be >512). This is the block size that curl will try to use when transferring data to or from a TFTP server. By default 512 bytes will be used. If --tftp-blksize is provided several times, the last set value will be used. Example: curl --tftp-blksize 1024 tftp://example.com/file See also --tftp-no-options. --tftp-no-options (TFTP) Tells curl not to send TFTP option requests. This option improves interop with some legacy servers that do not acknowledge or properly implement TFTP options. When this option is used --tftp-blksize is ignored. Providing --tftp-no-options multiple times has no extra effect. Disable it again with --no-tftp-no-options. Example: curl --tftp-no-options tftp://192.168.0.1/ See also --tftp-blksize. Added in 7.48.0. -z, --time-cond <time> (HTTP FTP) Request a file that has been modified later than the given time and date, or one that has been modified before that time. The <time> argument can be all sorts of date strings, or, if it does not match any internal ones, it is taken as a filename and curl tries to get the modification date (mtime) from that file instead. See the curl_getdate(3) man pages for date expression details. Start the date expression with a dash (-) to request a document that is older than the given date/time; the default is a document that is newer than the specified date/time. If -z, --time-cond is provided several times, the last set value will be used. Examples: curl -z "Wed 01 Sep 2021 12:18:00" https://example.com curl -z "-Wed 01 Sep 2021 12:18:00" https://example.com curl -z file https://example.com See also --etag-compare and -R, --remote-time. --tls-max <VERSION> (SSL) VERSION defines the maximum supported TLS version. The minimum acceptable version is set by tlsv1.0, tlsv1.1, tlsv1.2 or tlsv1.3. If the connection is done without TLS, this option has no effect. This includes QUIC-using (HTTP/3) transfers. default Use up to recommended TLS version. 1.0 Use up to TLSv1.0. 1.1 Use up to TLSv1.1. 1.2 Use up to TLSv1.2. 1.3 Use up to TLSv1.3. If --tls-max is provided several times, the last set value will be used. Examples: curl --tls-max 1.2 https://example.com curl --tls-max 1.3 --tlsv1.2 https://example.com See also --tlsv1.0, --tlsv1.1, --tlsv1.2 and --tlsv1.3. --tls-max requires that the underlying libcurl was built to support TLS. Added in 7.54.0. --tls13-ciphers <ciphersuite list> (TLS) Specifies which cipher suites to use in the connection if it negotiates TLS 1.3. The list of cipher suites must specify valid ciphers. Read up on TLS 1.3 cipher suite details on this URL: https://curl.se/docs/ssl-ciphers.html This option is currently used only when curl is built to use OpenSSL 1.1.1 or later or Schannel. If you are using a different SSL backend you can try setting TLS 1.3 cipher suites by using the --ciphers option. If --tls13-ciphers is provided several times, the last set value will be used. Example: curl --tls13-ciphers TLS_AES_128_GCM_SHA256 https://example.com See also --ciphers and --curves. Added in 7.61.0. --tlsauthtype <type> Set TLS authentication type. Currently, the only supported option is "SRP", for TLS-SRP (RFC 5054). If --tlsuser and --tlspassword are specified but --tlsauthtype is not, then this option defaults to "SRP".
This option works only if the underlying libcurl is built with TLS-SRP support, which requires OpenSSL or GnuTLS with TLS-SRP support. If --tlsauthtype is provided several times, the last set value will be used. Example: curl --tlsauthtype SRP https://example.com See also --tlsuser. --tlspassword <string> Set password for use with the TLS authentication method specified with --tlsauthtype. Requires that --tlsuser also be set. This option does not work with TLS 1.3. If --tlspassword is provided several times, the last set value will be used. Example: curl --tlspassword pwd --tlsuser user https://example.com See also --tlsuser. --tlsuser <name> Set username for use with the TLS authentication method specified with --tlsauthtype. Requires that --tlspassword also be set. This option does not work with TLS 1.3. If --tlsuser is provided several times, the last set value will be used. Example: curl --tlspassword pwd --tlsuser user https://example.com See also --tlspassword. --tlsv1.0 (TLS) Forces curl to use TLS version 1.0 or later when connecting to a remote TLS server. In old versions of curl this option was documented to allow _only_ TLS 1.0. That behavior was inconsistent depending on the TLS library. Use --tls-max if you want to set a maximum TLS version. Providing --tlsv1.0 multiple times has no extra effect. Example: curl --tlsv1.0 https://example.com See also --tlsv1.3. Added in 7.34.0. --tlsv1.1 (TLS) Forces curl to use TLS version 1.1 or later when connecting to a remote TLS server. In old versions of curl this option was documented to allow _only_ TLS 1.1. That behavior was inconsistent depending on the TLS library. Use --tls-max if you want to set a maximum TLS version. Providing --tlsv1.1 multiple times has no extra effect. Example: curl --tlsv1.1 https://example.com See also --tlsv1.3 and --tls-max. Added in 7.34.0. --tlsv1.2 (TLS) Forces curl to use TLS version 1.2 or later when connecting to a remote TLS server. In old versions of curl this option was documented to allow _only_ TLS 1.2. That behavior was inconsistent depending on the TLS library. Use --tls-max if you want to set a maximum TLS version. Providing --tlsv1.2 multiple times has no extra effect. Example: curl --tlsv1.2 https://example.com See also --tlsv1.3 and --tls-max. Added in 7.34.0. --tlsv1.3 (TLS) Forces curl to use TLS version 1.3 or later when connecting to a remote TLS server. If the connection is done without TLS, this option has no effect. This includes QUIC-using (HTTP/3) transfers. Note that TLS 1.3 is not supported by all TLS backends. Providing --tlsv1.3 multiple times has no extra effect. Example: curl --tlsv1.3 https://example.com See also --tlsv1.2 and --tls-max. Added in 7.52.0. -1, --tlsv1 (SSL) Tells curl to use at least TLS version 1.x when negotiating with a remote TLS server. That means TLS version 1.0 or higher. Providing -1, --tlsv1 multiple times has no extra effect. Example: curl --tlsv1 https://example.com See also --http1.1 and --http2. -1, --tlsv1 requires that the underlying libcurl was built to support TLS. This option is mutually exclusive to --tlsv1.1 and --tlsv1.2 and --tlsv1.3. --tr-encoding (HTTP) Request a compressed Transfer-Encoding response using one of the algorithms curl supports, and uncompress the data while receiving it. Providing --tr-encoding multiple times has no extra effect. Disable it again with --no-tr-encoding. Example: curl --tr-encoding https://example.com See also --compressed.
--trace-ascii <file> Enables a full trace dump of all incoming and outgoing data, including descriptive information, to the given output file. Use "-" as filename to have the output sent to stdout. This is similar to --trace, but leaves out the hex part and only shows the ASCII part of the dump. It makes smaller output that might be easier to read for untrained humans. This option is global and does not need to be specified for each use of --next. If --trace-ascii is provided several times, the last set value will be used. Example: curl --trace-ascii log.txt https://example.com See also -v, --verbose and --trace. This option is mutually exclusive to --trace and -v, --verbose. --trace-ids Prepends the transfer and connection identifiers to each trace or verbose line that curl displays. This option is global and does not need to be specified for each use of --next. Providing --trace-ids multiple times has no extra effect. Disable it again with --no-trace-ids. Example: curl --trace-ids --trace-ascii output https://example.com See also --trace and -v, --verbose. Added in 8.2.0. --trace-time Prepends a time stamp to each trace or verbose line that curl displays. This option is global and does not need to be specified for each use of --next. Providing --trace-time multiple times has no extra effect. Disable it again with --no-trace-time. Example: curl --trace-time --trace-ascii output https://example.com See also --trace and -v, --verbose. --trace <file> Enables a full trace dump of all incoming and outgoing data, including descriptive information, to the given output file. Use "-" as filename to have the output sent to stdout. Use "%" as filename to have the output sent to stderr. This option is global and does not need to be specified for each use of --next. If --trace is provided several times, the last set value will be used. Example: curl --trace log.txt https://example.com See also --trace-ascii, --trace-ids and --trace-time. This option is mutually exclusive to -v, --verbose and --trace-ascii. --unix-socket <path> (HTTP) Connect through this Unix domain socket, instead of using the network. If --unix-socket is provided several times, the last set value will be used. Example: curl --unix-socket socket-path https://example.com See also --abstract-unix-socket. Added in 7.40.0. -T, --upload-file <file> This transfers the specified local file to the remote URL. If there is no file part in the specified URL, curl will append the local file name. Note that you must use a trailing / on the last directory to tell curl that there is no file name, or curl will treat your last directory name as the remote file name to use. That will most likely cause the upload operation to fail. If this is used on an HTTP(S) server, the PUT method will be used. Use the file name "-" (a single dash) to use stdin instead of a given file. Alternatively, the file name "." (a single period) may be specified instead of "-" to use stdin in non-blocking mode to allow reading server output while stdin is being uploaded. You can specify one -T, --upload-file for each URL on the command line. Each -T, --upload-file + URL pair specifies what to upload and to where. curl also supports "globbing" of the -T, --upload-file argument, meaning that you can upload multiple files to a single URL by using the same URL globbing style supported in the URL. When uploading to an SMTP server, the uploaded data is assumed to be RFC 5322 formatted.
It has to contain the necessary headers and a correctly formatted mail body, as curl will not transcode or encode it further in any way. -T, --upload-file can be used several times in a command line. Examples: curl -T file https://example.com curl -T "img[1-1000].png" ftp://ftp.example.com/ curl --upload-file "{file1,file2}" https://example.com See also -G, --get and -I, --head. --url-query <data> (all) This option adds a piece of data, usually a name + value pair, to the end of the URL query part. The syntax is identical to that used for --data-urlencode with one extension: If the argument starts with a '+' (plus), the rest of the string is provided as-is unencoded. The query part of a URL is the one following the question mark on the right end. --url-query can be used several times in a command line. Examples: curl --url-query name=val https://example.com curl --url-query =encodethis http://example.net/foo curl --url-query name@file https://example.com curl --url-query @fileonly https://example.com curl --url-query "+name=%20foo" https://example.com See also --data-urlencode and -G, --get. Added in 7.87.0. --url <url> Specify a URL to fetch. This option is mostly handy when you want to specify URL(s) in a config file. If the given URL is missing a scheme name (such as "http://" or "ftp://" etc) then curl will make a guess based on the host. If the outermost sub-domain name matches DICT, FTP, IMAP, LDAP, POP3 or SMTP then that protocol will be used, otherwise HTTP will be used. Since 7.45.0 guessing can be disabled by setting a default protocol, see --proto-default for details. To control where this URL is written, use the -o, --output or the -O, --remote-name options. WARNING: On Windows, particular file:// accesses can be converted to network accesses by the operating system. Beware! --url can be used several times in a command line. Example: curl --url https://example.com See also -:, --next and -K, --config. -B, --use-ascii (FTP LDAP) Enable ASCII transfer. For FTP, this can also be enforced by using a URL that ends with ";type=A". This option causes data sent to stdout to be in text mode for win32 systems. Providing -B, --use-ascii multiple times has no extra effect. Disable it again with --no-use-ascii. Example: curl -B ftp://example.com/README See also --crlf and --data-ascii. -A, --user-agent <name> (HTTP) Specify the User-Agent string to send to the HTTP server. To encode blanks in the string, surround the string with single quote marks. This header can also be set with the -H, --header or the --proxy-header options. If you give an empty argument to -A, --user-agent (""), it will remove the header completely from the request. If you prefer a blank header, you can set it to a single space (" "). If -A, --user-agent is provided several times, the last set value will be used. Example: curl -A "Agent 007" https://example.com See also -H, --header and --proxy-header. -u, --user <user:password> Specify the user name and password to use for server authentication. Overrides -n, --netrc and --netrc-optional. If you simply specify the user name, curl will prompt for a password. The user name and password are split on the first colon, which makes it impossible to use a colon in the user name with this option; the password, however, can still contain one. On systems where it works, curl will hide the given option argument from process listings.
This is not enough to protect credentials from possibly getting seen by other users on the same system, as they will still be visible for a moment before being cleared. Such sensitive data should instead be retrieved from a file or similar and never used in clear text on a command line. When using Kerberos V5 with a Windows based server you should include the Windows domain name in the user name, in order for the server to successfully obtain a Kerberos Ticket. If you do not, then the initial authentication handshake may fail. When using NTLM, the user name can be specified simply as the user name, without the domain, if, for example, there is a single domain and forest in your setup. To specify the domain name use either Down-Level Logon Name or UPN (User Principal Name) formats. For example, EXAMPLE\user and user@example.com respectively. If you use a Windows SSPI-enabled curl binary and perform Kerberos V5, Negotiate, NTLM or Digest authentication then you can tell curl to select the user name and password from your environment by specifying a single colon with this option: "-u :". If -u, --user is provided several times, the last set value will be used. Example: curl -u user:secret https://example.com See also -n, --netrc and -K, --config. -v, --verbose Makes curl verbose during the operation. Useful for debugging and seeing what's going on "under the hood". A line starting with '>' means "header data" sent by curl, '<' means "header data" received by curl that is hidden in normal cases, and a line starting with '*' means additional info provided by curl. If you only want HTTP headers in the output, -i, --include or -D, --dump-header might be more suitable options. If you think this option still does not give you enough details, consider using --trace or --trace-ascii instead. This option is global and does not need to be specified for each use of --next. Providing -v, --verbose multiple times has no extra effect. Disable it again with --no-verbose. Example: curl --verbose https://example.com See also -i, --include, -s, --silent, --trace and --trace-ascii. This option is mutually exclusive to --trace and --trace-ascii. -V, --version Displays information about curl and the libcurl version it uses. The first line includes the full version of curl, libcurl and other 3rd party libraries linked with the executable. The second line (starts with "Protocols:") shows all protocols that libcurl reports to support. The third line (starts with "Features:") shows specific features libcurl reports to offer. Available features include: alt-svc Support for the Alt-Svc: header is provided. AsynchDNS This curl uses asynchronous name resolves. Asynchronous name resolves can be done using either the c-ares or the threaded resolver backends. brotli Support for automatic brotli compression over HTTP(S). CharConv curl was built with support for character set conversions (like EBCDIC). Debug This curl uses a libcurl built with Debug. This enables more error-tracking and memory debugging etc. For curl developers only! gsasl The built-in SASL authentication includes extensions to support SCRAM because libcurl was built with libgsasl. GSS-API GSS-API is supported. HSTS HSTS support is present. HTTP2 HTTP/2 support has been built-in. HTTP3 HTTP/3 support has been built-in. HTTPS-proxy This curl is built to support HTTPS proxy. IDN This curl supports IDN (international domain names). IPv6 You can use IPv6 with this. Kerberos Kerberos V5 authentication is supported.
Largefile This curl supports transfers of large files, files larger than 2GB. libz Automatic decompression (via gzip, deflate) of compressed files over HTTP is supported. MultiSSL This curl supports multiple TLS backends. NTLM NTLM authentication is supported. NTLM_WB NTLM delegation to winbind helper is supported. PSL PSL is short for Public Suffix List and means that this curl has been built with knowledge about "public suffixes". SPNEGO SPNEGO authentication is supported. SSL SSL versions of various protocols are supported, such as HTTPS, FTPS, POP3S and so on. SSPI SSPI is supported. TLS-SRP SRP (Secure Remote Password) authentication is supported for TLS. TrackMemory Debug memory tracking is supported. Unicode Unicode support on Windows. UnixSockets Unix sockets support is provided. zstd Automatic decompression (via zstd) of compressed files over HTTP is supported. Example: curl --version See also -h, --help and -M, --manual. -w, --write-out <format> Make curl display information on stdout after a completed transfer. The format is a string that may contain plain text mixed with any number of variables. The format can be specified as a literal "string", or you can have curl read the format from a file with "@filename"; to tell curl to read the format from stdin you write "@-". The variables present in the output format will be substituted by the value or text that curl thinks fit, as described below. All variables are specified as %{variable_name} and to output a normal % you just write them as %%. You can output a newline by using \n, a carriage return with \r and a tab with \t. The output will be written to standard output, but this can be switched to standard error by using %{stderr}. Output HTTP headers from the most recent request by using %header{name} where name is the case insensitive name of the header (without the trailing colon). The header contents are exactly as sent over the network, with leading and trailing whitespace trimmed. Added in curl 7.84.0. NOTE: In Windows the %-symbol is a special symbol used to expand environment variables. In batch files all occurrences of % must be doubled when using this option, to properly escape them. If this option is used at the command prompt then the % cannot be escaped and unintended expansion is possible. The variables available are: certs Output the certificate chain with details. Supported only by the OpenSSL, GnuTLS, Schannel, NSS, GSKit and Secure Transport backends. (Added in 7.88.0) content_type The Content-Type of the requested document, if there was any. errormsg The error message. (Added in 7.75.0) exitcode The numerical exitcode of the transfer. (Added in 7.75.0) filename_effective The ultimate filename that curl writes out to. This is only meaningful if curl is told to write to a file with the -O, --remote-name or -o, --output option. It's most useful in combination with the -J, --remote-header-name option. ftp_entry_path The initial path curl ended up in when logging on to the remote FTP server. header_json A JSON object with all HTTP response headers from the recent transfer. Values are provided as arrays, since in the case of multiple headers there can be multiple values. (Added in 7.83.0) The header names are provided in lowercase, listed in order of appearance over the wire, except for duplicated headers, which are grouped at the first occurrence of that header, with each value presented in the JSON array. http_code The numerical response code that was found in the last retrieved HTTP(S) or FTP(S) transfer.
http_connect The numerical code that was found in the last response (from a proxy) to a curl CONNECT request. http_version The HTTP version that was effectively used. (Added in 7.50.0) json A JSON object with all available keys. local_ip The IP address of the local end of the most recently done connection; can be either IPv4 or IPv6. local_port The local port number of the most recently done connection. method The HTTP method used in the most recent HTTP request. (Added in 7.72.0) num_certs Number of server certificates received in the TLS handshake. Supported only by the OpenSSL, GnuTLS, Schannel, NSS, GSKit and Secure Transport backends. (Added in 7.88.0) num_connects Number of new connects made in the recent transfer. num_headers The number of response headers in the most recent request (restarted at each redirect). Note that the status line IS NOT a header. (Added in 7.73.0) num_redirects Number of redirects that were followed in the request. onerror The rest of the output is only shown if the transfer returned a non-zero error. (Added in 7.75.0) proxy_ssl_verify_result The result of the HTTPS proxy's SSL peer certificate verification that was requested. 0 means the verification was successful. (Added in 7.52.0) redirect_url When an HTTP request was made without -L, --location to follow redirects (or when --max-redirs is met), this variable will show the actual URL a redirect would have gone to. referer The Referer: header, if there was any. (Added in 7.76.0) remote_ip The remote IP address of the most recently done connection; can be either IPv4 or IPv6. remote_port The remote port number of the most recently done connection. response_code The numerical response code that was found in the last transfer (formerly known as "http_code"). scheme The URL scheme (sometimes called protocol) that was effectively used. (Added in 7.52.0) size_download The total amount of bytes that were downloaded. This is the size of the body/data that was transferred, excluding headers. size_header The total amount of bytes of the downloaded headers. size_request The total amount of bytes that were sent in the HTTP request. size_upload The total amount of bytes that were uploaded. This is the size of the body/data that was transferred, excluding headers. speed_download The average download speed that curl measured for the complete download. Bytes per second. speed_upload The average upload speed that curl measured for the complete upload. Bytes per second. ssl_verify_result The result of the SSL peer certificate verification that was requested. 0 means the verification was successful. stderr From this point on, the -w, --write-out output will be written to standard error. (Added in 7.63.0) stdout From this point on, the -w, --write-out output will be written to standard output. This is the default, but can be used to switch back after switching to stderr. (Added in 7.63.0) time_appconnect The time, in seconds, it took from the start until the SSL/SSH/etc connect/handshake to the remote host was completed. time_connect The time, in seconds, it took from the start until the TCP connect to the remote host (or proxy) was completed. time_namelookup The time, in seconds, it took from the start until the name resolving was completed. time_pretransfer The time, in seconds, it took from the start until the file transfer was just about to begin. This includes all pre-transfer commands and negotiations that are specific to the particular protocol(s) involved.
time_redirect The time, in seconds, it took for all redirection steps including name lookup, connect, pretransfer and transfer before the final transaction was started. time_redirect shows the complete execution time for multiple redirections. time_starttransfer The time, in seconds, it took from the start until the first byte was just about to be transferred. This includes time_pretransfer and also the time the server needed to calculate the result. time_total The total time, in seconds, that the full operation lasted. url The URL that was fetched. (Added in 7.75.0) url.scheme The scheme part of the URL that was fetched. (Added in 8.1.0) url.user The user part of the URL that was fetched. (Added in 8.1.0) url.password The password part of the URL that was fetched. (Added in 8.1.0) url.options The options part of the URL that was fetched. (Added in 8.1.0) url.host The host part of the URL that was fetched. (Added in 8.1.0) url.port The port number of the URL that was fetched. If no port number was specified, but the URL scheme is known, that scheme's default port number is shown. (Added in 8.1.0) url.path The path part of the URL that was fetched. (Added in 8.1.0) url.query The query part of the URL that was fetched. (Added in 8.1.0) url.fragment The fragment part of the URL that was fetched. (Added in 8.1.0) url.zoneid The zoneid part of the URL that was fetched. (Added in 8.1.0) urle.scheme The scheme part of the effective (last) URL that was fetched. (Added in 8.1.0) urle.user The user part of the effective (last) URL that was fetched. (Added in 8.1.0) urle.password The password part of the effective (last) URL that was fetched. (Added in 8.1.0) urle.options The options part of the effective (last) URL that was fetched. (Added in 8.1.0) urle.host The host part of the effective (last) URL that was fetched. (Added in 8.1.0) urle.port The port number of the effective (last) URL that was fetched. If no port number was specified, but the URL scheme is known, that scheme's default port number is shown. (Added in 8.1.0) urle.path The path part of the effective (last) URL that was fetched. (Added in 8.1.0) urle.query The query part of the effective (last) URL that was fetched. (Added in 8.1.0) urle.fragment The fragment part of the effective (last) URL that was fetched. (Added in 8.1.0) urle.zoneid The zoneid part of the effective (last) URL that was fetched. (Added in 8.1.0) urlnum The URL index number of this transfer, 0-indexed. De-globbed URLs share the same index number as the origin globbed URL. (Added in 7.75.0) url_effective The URL that was fetched last. This is most meaningful if you have told curl to follow location: headers. If -w, --write-out is provided several times, the last set value will be used. Example: curl -w '%{response_code}\n' https://example.com See also -v, --verbose and -I, --head. --xattr When saving output to a file, this option tells curl to store certain file metadata in extended file attributes. Currently, the URL is stored in the xdg.origin.url attribute and, for HTTP, the content type is stored in the mime_type attribute. If the file system does not support extended attributes, a warning is issued. Providing --xattr multiple times has no extra effect. Disable it again with --no-xattr. Example: curl --xattr -o storage https://example.com See also -R, --remote-time, -w, --write-out and -v, --verbose.
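As a hedged worked example of the -w, --write-out variables listed above (the format string is illustrative): curl -s -o /dev/null -w 'code=%{response_code} ip=%{remote_ip} total=%{time_total}s\n' https://example.com suppresses the body and progress meter and prints a single summary line after the transfer, substituting the response code, the remote IP address and the total transfer time.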
# curl > Transfers data from or to a server. Supports most protocols, including HTTP, > FTP, and POP3. More information: https://curl.se/docs/manpage.html. * Download the contents of a URL to a file: `curl {{http://example.com}} --output {{path/to/file}}` * Download a file, saving the output under the filename indicated by the URL: `curl --remote-name {{http://example.com/filename}}` * Download a file, following location redirects and automatically continuing (resuming) a previous file transfer, returning an error on server error: `curl --fail --remote-name --location --continue-at - {{http://example.com/filename}}` * Send form-encoded data (POST request of type `application/x-www-form-urlencoded`). Use `--data @file_name` or `--data @'-'` to read from STDIN: `curl --data {{'name=bob'}} {{http://example.com/form}}` * Send a request with an extra header, using a custom HTTP method: `curl --header {{'X-My-Header: 123'}} --request {{PUT}} {{http://example.com}}` * Send data in JSON format, specifying the appropriate content-type header: `curl --data {{'{"name":"bob"}'}} --header {{'Content-Type: application/json'}} {{http://example.com/users/1234}}` * Pass a username and password for server authentication: `curl --user myusername:mypassword {{http://example.com}}` * Pass client certificate and key for a resource, skipping certificate validation: `curl --cert {{client.pem}} --key {{key.pem}} --insecure {{https://example.com}}`
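* Download via a SOCKS5 proxy, letting the proxy resolve the host name (a hedged sketch per `--socks5-hostname` above; proxy address is a placeholder): `curl --socks5-hostname {{proxy.example:1080}} {{https://example.com}}`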
git-verify-commit
Validates the GPG signature created by git commit -S. --raw Print the raw gpg status output to standard error instead of the normal human-readable output. -v, --verbose Print the contents of the commit object before validating it. <commit>... SHA-1 identifiers of Git commit objects.
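Example (a hedged sketch; assumes the commit at HEAD is signed): git verify-commit -v HEAD prints the commit object followed by the GPG verification result.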
# git verify-commit > Check for GPG verification of commits. If no commits are verified, nothing > will be printed, regardless of options specified. More information: > https://git-scm.com/docs/git-verify-commit. * Check commits for a GPG signature: `git verify-commit {{commit_hash1 optional_commit_hash2 ...}}` * Check commits for a GPG signature and show details of each commit: `git verify-commit {{commit_hash1 optional_commit_hash2 ...}} --verbose` * Check commits for a GPG signature and print the raw details: `git verify-commit {{commit_hash1 optional_commit_hash2 ...}} --raw`
rmdir
Remove the DIRECTORY(ies), if they are empty. --ignore-fail-on-non-empty ignore each failure to remove a non-empty directory -p, --parents remove DIRECTORY and its ancestors; e.g., 'rmdir -p a/b' is similar to 'rmdir a/b a' -v, --verbose output a diagnostic for every directory processed --help display this help and exit --version output version information and exit
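A hedged illustration combining the options above: rmdir -pv a/b/c removes c, then b, then a, printing a diagnostic for each directory it processes, and stops with an error at the first non-empty ancestor unless --ignore-fail-on-non-empty is also given.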
# rmdir > Remove empty directories. See also: `rm`. More information: > https://www.gnu.org/software/coreutils/rmdir. * Remove specific directories: `rmdir {{path/to/directory1 path/to/directory2 ...}}` * Remove specific nested directories recursively: `rmdir -p {{path/to/directory1 path/to/directory2 ...}}`
getfacl
For each file, getfacl displays the file name, owner, the group, and the Access Control List (ACL). If a directory has a default ACL, getfacl also displays the default ACL. Non-directories cannot have default ACLs. If getfacl is used on a file system that does not support ACLs, getfacl displays the access permissions defined by the traditional file mode permission bits. The output format of getfacl is as follows:
1: # file: somedir/
2: # owner: lisa
3: # group: staff
4: # flags: -s-
5: user::rwx
6: user:joe:rwx #effective:r-x
7: group::rwx #effective:r-x
8: group:cool:r-x
9: mask::r-x
10: other::r-x
11: default:user::rwx
12: default:user:joe:rwx #effective:r-x
13: default:group::r-x
14: default:mask::r-x
15: default:other::---
Lines 1-3 indicate the file name, owner, and owning group. Line 4 indicates the setuid (s), setgid (s), and sticky (t) bits: either the letter representing the bit, or else a dash (-). This line is included if any of those bits is set and left out otherwise, so it will not be shown for most files. (See CONFORMANCE TO POSIX 1003.1e DRAFT STANDARD 17 below.) Lines 5, 7 and 10 correspond to the user, group and other fields of the file mode permission bits. These three are called the base ACL entries. Lines 6 and 8 are named user and named group entries. Line 9 is the effective rights mask. This entry limits the effective rights granted to all groups and to named users. (The file owner and others permissions are not affected by the effective rights mask; all other entries are.) Lines 11-15 display the default ACL associated with this directory. Directories may have a default ACL. Regular files never have a default ACL. The default behavior for getfacl is to display both the ACL and the default ACL, and to include an effective rights comment for lines where the rights of the entry differ from the effective rights. If output is to a terminal, the effective rights comment is aligned to column 40. Otherwise, a single tab character separates the ACL entry and the effective rights comment. The ACL listings of multiple files are separated by blank lines. The output of getfacl can also be used as input to setfacl. PERMISSIONS Processes with search access to a file (i.e., processes with read access to the containing directory of a file) are also granted read access to the file's ACLs. This is analogous to the permissions required for accessing the file mode. -a, --access Display the file access control list. -d, --default Display the default access control list. -c, --omit-header Do not display the comment header (the first three lines of each file's output). -e, --all-effective Print all effective rights comments, even if identical to the rights defined by the ACL entry. -E, --no-effective Do not print effective rights comments. -s, --skip-base Skip files that only have the base ACL entries (owner, group, others). -R, --recursive List the ACLs of all files and directories recursively. -L, --logical Logical walk, follow symbolic links to directories. The default behavior is to follow symbolic link arguments, and skip symbolic links encountered in subdirectories. Only effective in combination with -R. -P, --physical Physical walk, do not follow symbolic links to directories. This also skips symbolic link arguments. Only effective in combination with -R. -t, --tabular Use an alternative tabular output format. The ACL and the default ACL are displayed side by side. Permissions that are ineffective due to the ACL mask entry are displayed capitalized.
The entry tag names for the ACL_USER_OBJ and ACL_GROUP_OBJ entries are also displayed in capital letters, which helps in spotting those entries. -p, --absolute-names Do not strip leading slash characters ('/'). The default behavior is to strip leading slash characters. -n, --numeric List numeric user and group IDs. -v, --version Print the version of getfacl and exit. -h, --help Print help explaining the command line options. -- End of command line options. All remaining parameters are interpreted as file names, even if they start with a dash character. - If the file name parameter is a single dash character, getfacl reads a list of files from standard input.
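Since the output of getfacl is valid input for setfacl, a hedged backup-and-restore sketch (paths are placeholders): getfacl -R somedir > acl.backup records all ACLs under somedir, and setfacl --restore=acl.backup reapplies them later.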
# getfacl > Get file access control lists. More information: https://manned.org/getfacl. * Display the file access control list: `getfacl {{path/to/file_or_directory}}` * Display the file access control list with numeric user and group IDs: `getfacl -n {{path/to/file_or_directory}}` * Display the file access control list with tabular output format: `getfacl -t {{path/to/file_or_directory}}`
nsenter
The nsenter command executes a program in the namespace(s) that are specified in the command-line options (described below). If no program is given, then "${SHELL}" is run (default: /bin/sh). Enterable namespaces are: mount namespace Mounting and unmounting filesystems will not affect the rest of the system, except for filesystems which are explicitly marked as shared (with mount --make-shared; see /proc/self/mountinfo for the shared flag). For further details, see mount_namespaces(7) and the discussion of the CLONE_NEWNS flag in clone(2). UTS namespace Setting hostname or domainname will not affect the rest of the system. For further details, see uts_namespaces(7). IPC namespace The process will have an independent namespace for POSIX message queues as well as System V message queues, semaphore sets and shared memory segments. For further details, see ipc_namespaces(7). network namespace The process will have independent IPv4 and IPv6 stacks, IP routing tables, firewall rules, the /proc/net and /sys/class/net directory trees, sockets, etc. For further details, see network_namespaces(7). PID namespace Children will have a set of PID to process mappings separate from the nsenter process. nsenter will fork by default if changing the PID namespace, so that the new program and its children share the same PID namespace and are visible to each other. If --no-fork is used, the new program will be exec'ed without forking. For further details, see pid_namespaces(7). user namespace The process will have a distinct set of UIDs, GIDs and capabilities. For further details, see user_namespaces(7). cgroup namespace The process will have a virtualized view of /proc/self/cgroup, and new cgroup mounts will be rooted at the namespace cgroup root. For further details, see cgroup_namespaces(7). time namespace The process can have a distinct view of CLOCK_MONOTONIC and/or CLOCK_BOOTTIME which can be changed using /proc/self/timens_offsets. For further details, see time_namespaces(7). Several of the options below that relate to namespaces take an optional file argument. This should be one of the /proc/[pid]/ns/* files described in namespaces(7), or the pathname of a bind mount that was created on one of those files. -a, --all Enter all namespaces of the target process by the default /proc/[pid]/ns/* namespace paths. The default paths to the target process namespaces may be overwritten by namespace specific options (e.g., --all --mount=[path]). The user namespace will be ignored if it is the same as the caller's current user namespace. This prevents a caller that has dropped capabilities from regaining those capabilities via a call to setns(). See setns(2) for more details. -t, --target PID Specify a target process to get contexts from. The paths to the contexts specified by pid are: /proc/pid/ns/mnt the mount namespace /proc/pid/ns/uts the UTS namespace /proc/pid/ns/ipc the IPC namespace /proc/pid/ns/net the network namespace /proc/pid/ns/pid the PID namespace /proc/pid/ns/user the user namespace /proc/pid/ns/cgroup the cgroup namespace /proc/pid/ns/time the time namespace /proc/pid/root the root directory /proc/pid/cwd the working directory respectively -m, --mount[=file] Enter the mount namespace. If no file is specified, enter the mount namespace of the target process. If file is specified, enter the mount namespace specified by file. -u, --uts[=file] Enter the UTS namespace. If no file is specified, enter the UTS namespace of the target process. If file is specified, enter the UTS namespace specified by file.
-i, --ipc[=file] Enter the IPC namespace. If no file is specified, enter the IPC namespace of the target process. If file is specified, enter the IPC namespace specified by file. -n, --net[=file] Enter the network namespace. If no file is specified, enter the network namespace of the target process. If file is specified, enter the network namespace specified by file. -p, --pid[=file] Enter the PID namespace. If no file is specified, enter the PID namespace of the target process. If file is specified, enter the PID namespace specified by file. -U, --user[=file] Enter the user namespace. If no file is specified, enter the user namespace of the target process. If file is specified, enter the user namespace specified by file. See also the --setuid and --setgid options. -C, --cgroup[=file] Enter the cgroup namespace. If no file is specified, enter the cgroup namespace of the target process. If file is specified, enter the cgroup namespace specified by file. -T, --time[=file] Enter the time namespace. If no file is specified, enter the time namespace of the target process. If file is specified, enter the time namespace specified by file. -G, --setgid gid Set the group ID which will be used in the entered namespace and drop supplementary groups. nsenter always sets GID for user namespaces; the default is 0. If the argument "follow" is specified, the GID of the target process is used. -S, --setuid uid Set the user ID which will be used in the entered namespace. nsenter always sets UID for user namespaces; the default is 0. If the argument "follow" is specified, the UID of the target process is used. --keep-caps When the --user option is given, ensure that capabilities granted in the user namespace are preserved in the child process. --preserve-credentials Don't modify UID and GID when entering the user namespace. The default is to drop supplementary groups and set GID and UID to 0. -r, --root[=directory] Set the root directory. If no directory is specified, set the root directory to the root directory of the target process. If directory is specified, set the root directory to the specified directory. The specified directory is opened before nsenter switches to the requested namespaces. -w, --wd[=directory] Set the working directory. If no directory is specified, set the working directory to the working directory of the target process. If directory is specified, set the working directory to the specified directory. The specified directory is opened before nsenter switches to the requested namespaces, which means the specified directory works as a "tunnel" to the current namespace. See also --wdns. -W, --wdns[=directory] Set the working directory. The directory is opened after the switch to the requested namespaces and after the chroot(2) call. The options --wd and --wdns are mutually exclusive. -e, --env Pass environment variables from the target process to the new process being created. If this option is not provided, the environment variables will remain the same as in the current namespace. -F, --no-fork Do not fork before exec'ing the specified program. By default, when entering a PID namespace, nsenter calls fork before calling exec so that any children will also be in the newly entered PID namespace. -Z, --follow-context Set the SELinux security context used for executing a new process according to an already running process specified by --target PID. (util-linux has to be compiled with SELinux support, otherwise the option is unavailable.) -h, --help Display help text and exit. -V, --version Print version and exit.
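Example (a hedged sketch; 1234 is a placeholder PID, and entering another process's namespaces typically requires root): nsenter --target 1234 --mount --uts --ipc --net --pid /bin/sh starts a shell that sees the target process's mounts, hostname, IPC objects, network stack and PID mappings.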
# nsenter > Run a new command in a running process' namespace. Particularly useful for > docker images or chroot jails. More information: https://manned.org/nsenter. * Run a specific command using the same namespaces as an existing process: `nsenter --target {{pid}} --all {{command}} {{command_arguments}}` * Run a specific command in an existing process's network namespace: `nsenter --target {{pid}} --net {{command}} {{command_arguments}}` * Run a specific command in an existing process's PID namespace: `nsenter --target {{pid}} --pid {{command}} {{command_arguments}}` * Run a specific command in an existing process's IPC namespace: `nsenter --target {{pid}} --ipc {{command}} {{command_arguments}}` * Run a specific command in an existing process's UTS, time, and IPC namespaces: `nsenter --target {{pid}} --uts --time --ipc -- {{command}} {{command_arguments}}` * Run a specific command in an existing process's network namespace by referencing procfs: `nsenter --net=/proc/{{pid}}/ns/net -- {{command}} {{command_arguments}}`
rsync
Rsync is a fast and extraordinarily versatile file copying tool. It can copy locally, to/from another host over any remote shell, or to/from a remote rsync daemon. It offers a large number of options that control every aspect of its behavior and permit very flexible specification of the set of files to be copied. It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination. Rsync is widely used for backups and mirroring and as an improved copy command for everyday use. Rsync finds files that need to be transferred using a "quick check" algorithm (by default) that looks for files that have changed in size or in last-modified time. Any changes in the other preserved attributes (as requested by options) are made on the destination file directly when the quick check indicates that the file's data does not need to be updated. Some of the additional features of rsync are: o support for copying links, devices, owners, groups, and permissions o exclude and exclude-from options similar to GNU tar o a CVS exclude mode for ignoring the same files that CVS would ignore o can use any transparent remote shell, including ssh or rsh o does not require super-user privileges o pipelining of file transfers to minimize latency costs o support for anonymous or authenticated rsync daemons (ideal for mirroring) Rsync accepts both long (double-dash + word) and short (single-dash + letter) options. The full list of the available options is described below. If an option can be specified in more than one way, the choices are comma-separated. Some options only have a long variant, not a short. If the option takes a parameter, the parameter is only listed after the long variant, even though it must also be specified for the short. When specifying a parameter, you can either use the form --option=param, --option param, -o=param, -o param, or -oparam (the latter choices assume that your option has a short variant). The parameter may need to be quoted in some manner for it to survive the shell's command-line parsing. Also keep in mind that a leading tilde (~) in a pathname is substituted by your shell, so make sure that you separate the option name from the pathname using a space if you want the local shell to expand it. --help Print a short help page describing the options available in rsync and exit. You can also use -h for --help when it is used without any other options (since it normally means --human-readable). --version, -V Print the rsync version plus other info and exit. When repeated, the information is output in a JSON format that is still fairly readable (client side only). The output includes a list of compiled-in capabilities, a list of optimizations, the default list of checksum algorithms, the default list of compression algorithms, the default list of daemon auth digests, a link to the rsync web site, and a few other items. --verbose, -v This option increases the amount of information you are given during the transfer. By default, rsync works silently. A single -v will give you information about what files are being transferred and a brief summary at the end. Two -v options will give you information on what files are being skipped and slightly more information at the end. More than two -v options should only be used if you are debugging rsync.
The end-of-run summary tells you the number of bytes sent to the remote rsync (which is the receiving side on a local copy), the number of bytes received from the remote host, and the average bytes per second of the transferred data computed over the entire length of the rsync run. The second line shows the total size (in bytes), which is the sum of all the file sizes that rsync considered transferring. It also shows a "speedup" value, which is a ratio of the total file size divided by the sum of the sent and received bytes (which is really just a feel-good bigger-is-better number). Note that these byte values can be made more (or less) human-readable by using the --human-readable (or --no-human-readable) options. In a modern rsync, the -v option is equivalent to the setting of groups of --info and --debug options. You can choose to use these newer options in addition to, or in place of using --verbose, as any fine-grained settings override the implied settings of -v. Both --info and --debug have a way to ask for help that tells you exactly what flags are set for each increase in verbosity. However, do keep in mind that a daemon's "max verbosity" setting will limit how high of a level the various individual flags can be set on the daemon side. For instance, if the max is 2, then any info and/or debug flag that is set to a higher value than what would be set by -vv will be downgraded to the -vv level in the daemon's logging. --info=FLAGS This option lets you have fine-grained control over the information output you want to see. An individual flag name may be followed by a level number, with 0 meaning to silence that output, 1 being the default output level, and higher numbers increasing the output of that flag (for those that support higher levels). Use --info=help to see all the available flag names, what they output, and what flag names are added for each increase in the verbose level. Some examples: rsync -a --info=progress2 src/ dest/ rsync -avv --info=stats2,misc1,flist0 src/ dest/ Note that --info=name's output is affected by the --out-format and --itemize-changes (-i) options. See those options for more information on what is output and when. This option was added in 3.1.0, so an older rsync on the server side might reject your attempts at fine-grained control (if one or more flags needed to be sent to the server and the server was too old to understand them). See also the "max verbosity" caveat above when dealing with a daemon. --debug=FLAGS This option lets you have fine-grained control over the debug output you want to see. An individual flag name may be followed by a level number, with 0 meaning to silence that output, 1 being the default output level, and higher numbers increasing the output of that flag (for those that support higher levels). Use --debug=help to see all the available flag names, what they output, and what flag names are added for each increase in the verbose level. Some examples: rsync -avvv --debug=none src/ dest/ rsync -avA --del --debug=del2,acl src/ dest/ Note that some debug messages will only be output when the --stderr=all option is specified, especially those pertaining to I/O and buffer debugging. Beginning in 3.2.0, this option is no longer auto-forwarded to the server side in order to allow you to specify different debug values for each side of the transfer, as well as to specify a new debug option that is only present in one of the rsync versions.
If you want to duplicate the same option on both sides, using brace expansion is an easy way to save you some typing. This works in zsh and bash:

    rsync -aiv {-M,}--debug=del2 src/ dest/

--stderr=errors|all|client
This option controls which processes output to stderr and if info messages are also changed to stderr. The mode strings can be abbreviated, so feel free to use a single letter value. The 3 possible choices are:

o  errors - (the default) causes all the rsync processes to send an error directly to stderr, even if the process is on the remote side of the transfer. Info messages are sent to the client side via the protocol stream. If stderr is not available (i.e. when directly connecting with a daemon via a socket) errors fall back to being sent via the protocol stream.

o  all - causes all rsync messages (info and error) to get written directly to stderr from all (possible) processes. This causes stderr to become line-buffered (instead of raw) and eliminates the ability to divide up the info and error messages by file handle. For those doing debugging or using several levels of verbosity, this option can help to avoid clogging up the transfer stream (which should prevent any chance of a deadlock bug hanging things up). It also allows --debug to enable some extra I/O related messages.

o  client - causes all rsync messages to be sent to the client side via the protocol stream. One client process outputs all messages, with errors on stderr and info messages on stdout. This was the default in older rsync versions, but can cause error delays when a lot of transfer data is ahead of the messages. If you're pushing files to an older rsync, you may want to use --stderr=all since that idiom has been around for several releases.

This option was added in rsync 3.2.3. This version also began the forwarding of a non-default setting to the remote side, though rsync uses the backward-compatible options --msgs2stderr and --no-msgs2stderr to represent the all and client settings, respectively. A newer rsync will continue to accept these older option names to maintain compatibility.

--quiet, -q
This option decreases the amount of information you are given during the transfer, notably suppressing information messages from the remote server. This option is useful when invoking rsync from cron.

--no-motd
This option affects the information that is output by the client at the start of a daemon transfer. This suppresses the message-of-the-day (MOTD) text, but it also affects the list of modules that the daemon sends in response to the "rsync host::" request (due to a limitation in the rsync protocol), so omit this option if you want to request the list of modules from the daemon.

--ignore-times, -I
Normally rsync will skip any files that are already the same size and have the same modification timestamp. This option turns off this "quick check" behavior, causing all files to be updated. This option can be confusing compared to --ignore-existing and --ignore-non-existing in that they cause rsync to transfer fewer files, while this option causes rsync to transfer more files.

--size-only
This modifies rsync's "quick check" algorithm for finding files that need to be transferred, changing it from the default of transferring files with either a changed size or a changed last-modified time to just looking for files that have changed in size. This is useful when starting to use rsync after using another mirroring system which may not preserve timestamps exactly.
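As a sketch of how these quick-check modifiers differ (the paths here are hypothetical): the first command updates only files whose sizes differ, which is useful right after migrating from a timestamp-mangling mirroring tool, while the second re-examines every file regardless of size and timestamp:

    rsync -a --size-only /mirror/ /backup/
    rsync -aI /mirror/ /backup/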
--modify-window=NUM, -@
When comparing two timestamps, rsync treats the timestamps as being equal if they differ by no more than the modify-window value. The default is 0, which matches just integer seconds. If you specify a negative value (and the receiver is at least version 3.1.3) then nanoseconds will also be taken into account. Specifying 1 is useful for copies to/from MS Windows FAT filesystems, because FAT represents times with a 2-second resolution (allowing times to differ from the original by up to 1 second).

If you want all your transfers to default to comparing nanoseconds, you can create a ~/.popt file and put these lines in it:

    rsync alias -a -a@-1
    rsync alias -t -t@-1

With that as the default, you'd need to specify --modify-window=0 (aka -@0) to override it and ignore nanoseconds, e.g. if you're copying between ext3 and ext4, or if the receiving rsync is older than 3.1.3.

--checksum, -c
This changes the way rsync checks if the files have been changed and are in need of a transfer. Without this option, rsync uses a "quick check" that (by default) checks if each file's size and time of last modification match between the sender and receiver. This option changes this to compare a 128-bit checksum for each file that has a matching size. Generating the checksums means that both sides will expend a lot of disk I/O reading all the data in the files in the transfer, so this can slow things down significantly (and this is prior to any reading that will be done to transfer changed files).

The sending side generates its checksums while it is doing the file-system scan that builds the list of the available files. The receiver generates its checksums when it is scanning for changed files, and will checksum any file that has the same size as the corresponding sender's file: files with either a changed size or a changed checksum are selected for transfer.

Note that rsync always verifies that each transferred file was correctly reconstructed on the receiving side by checking a whole-file checksum that is generated as the file is transferred, but that automatic after-the-transfer verification has nothing to do with this option's before-the-transfer "Does this file need to be updated?" check.

The checksum used is auto-negotiated between the client and the server, but can be overridden using either the --checksum-choice (--cc) option or an environment variable that is discussed in that option's section.

--archive, -a
This is equivalent to -rlptgoD. It is a quick way of saying you want recursion and want to preserve almost everything. Be aware that it does not include preserving ACLs (-A), xattrs (-X), atimes (-U), crtimes (-N), nor the finding and preserving of hardlinks (-H).

The only exception to the above equivalence is when --files-from is specified, in which case -r is not implied.

--no-OPTION
You may turn off one or more implied options by prefixing the option name with "no-". Not all positive options have a negated opposite, but a lot do, including those that can be used to disable an implied option (e.g. --no-D, --no-perms) or have different defaults in various circumstances (e.g. --no-whole-file, --no-blocking-io, --no-dirs). Every valid negated option accepts both the short and the long option name after the "no-" prefix (e.g. --no-R is the same as --no-relative).

As an example, if you want to use --archive (-a) but don't want --owner (-o), instead of converting -a into -rlptgD, you can specify -a --no-o (aka --archive --no-owner).
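For instance (hypothetical paths), this copies an entire tree but leaves both ownership and group to be set by the receiving side's defaults:

    rsync -a --no-o --no-g /src/ /dest/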
The order of the options is important: if you specify --no-r -a, the -r option would end up being turned on, the opposite of -a --no-r. Note also that the side-effects of the --files-from option are NOT positional, as it affects the default state of several options and slightly changes the meaning of -a (see the --files-from option for more details).

--recursive, -r
This tells rsync to copy directories recursively. See also --dirs (-d) for an option that allows the scanning of a single directory.

See the --inc-recursive option for a discussion of the incremental recursion for creating the list of files to transfer.

--inc-recursive, --i-r
This option explicitly enables incremental recursion when scanning for files, which is enabled by default when using the --recursive option and both sides of the transfer are running rsync 3.0.0 or newer.

Incremental recursion uses much less memory than non-incremental, while also beginning the transfer more quickly (since it doesn't need to scan the entire transfer hierarchy before it starts transferring files). If no recursion is enabled in the source files, this option has no effect.

Some options require rsync to know the full file list, so these options disable the incremental recursion mode. These include:

o  --delete-before (the old default of --delete)
o  --delete-after
o  --prune-empty-dirs
o  --delay-updates

In order to make --delete compatible with incremental recursion, rsync 3.0.0 made --delete-during the default delete mode (which was first added in 2.6.4).

One side-effect of incremental recursion is that any missing sub-directories inside a recursively-scanned directory are (by default) created prior to recursing into the sub-dirs. This earlier creation point (compared to a non-incremental recursion) allows rsync to then set the modify time of the finished directory right away (without having to delay that until a bunch of recursive copying has finished). However, these early directories don't yet have their completed mode, mtime, or ownership set -- they have more restrictive rights until the subdirectory's copying actually begins. This early-creation idiom can be avoided by using the --omit-dir-times option.

Incremental recursion can be disabled using the --no-inc-recursive (--no-i-r) option.

--no-inc-recursive, --no-i-r
Disables the new incremental recursion algorithm of the --recursive option. This makes rsync scan the full file list before it begins to transfer files. See --inc-recursive for more info.

--relative, -R
Use relative paths. This means that the full path names specified on the command line are sent to the server rather than just the last parts of the filenames. This is particularly useful when you want to send several different directories at the same time. For example, if you used this command:

    rsync -av /foo/bar/baz.c remote:/tmp/

this would create a file named baz.c in /tmp/ on the remote machine. If instead you used

    rsync -avR /foo/bar/baz.c remote:/tmp/

then a file named /tmp/foo/bar/baz.c would be created on the remote machine, preserving its full path. These extra path elements are called "implied directories" (i.e. the "foo" and the "foo/bar" directories in the above example).

Beginning with rsync 3.0.0, rsync always sends these implied directories as real directories in the file list, even if a path element is really a symlink on the sending side. This prevents some really unexpected behaviors when copying the full path of a file that you didn't realize had a symlink in its path.
If you want to duplicate a server-side symlink, include both the symlink via its path, and the referent directory via its real path. If you're dealing with an older rsync on the sending side, you may need to use the --no-implied-dirs option.

It is also possible to limit the amount of path information that is sent as implied directories for each path you specify. With a modern rsync on the sending side (beginning with 2.6.7), you can insert a dot and a slash into the source path, like this:

    rsync -avR /foo/./bar/baz.c remote:/tmp/

That would create /tmp/bar/baz.c on the remote machine. (Note that the dot must be followed by a slash, so "/foo/." would not be abbreviated.) For older rsync versions, you would need to use a chdir to limit the source path. For example, when pushing files:

    (cd /foo; rsync -avR bar/baz.c remote:/tmp/)

(Note that the parens put the two commands into a sub-shell, so that the "cd" command doesn't remain in effect for future commands.)

If you're pulling files from an older rsync, use this idiom (but only for a non-daemon transfer):

    rsync -avR --rsync-path="cd /foo; rsync" \
        remote:bar/baz.c /tmp/

--no-implied-dirs
This option affects the default behavior of the --relative option. When it is specified, the attributes of the implied directories from the source names are not included in the transfer. This means that the corresponding path elements on the destination system are left unchanged if they exist, and any missing implied directories are created with default attributes. This even allows these implied path elements to have big differences, such as being a symlink to a directory on the receiving side.

For instance, if a command-line arg or a files-from entry told rsync to transfer the file "path/foo/file", the directories "path" and "path/foo" are implied when --relative is used. If "path/foo" is a symlink to "bar" on the destination system, the receiving rsync would ordinarily delete "path/foo", recreate it as a directory, and receive the file into the new directory. With --no-implied-dirs, the receiving rsync updates "path/foo/file" using the existing path elements, which means that the file ends up being created in "path/bar". Another way to accomplish this link preservation is to use the --keep-dirlinks option (which will also affect symlinks to directories in the rest of the transfer).

When pulling files from an rsync older than 3.0.0, you may need to use this option if the sending side has a symlink in the path you request and you wish the implied directories to be transferred as normal directories.

--backup, -b
With this option, preexisting destination files are renamed as each file is transferred or deleted. You can control where the backup file goes and what (if any) suffix gets appended using the --backup-dir and --suffix options.

If you don't specify --backup-dir:

1. the --omit-dir-times option will be forced on

2. the use of --delete (without --delete-excluded), causes rsync to add a "protect" filter-rule for the backup suffix to the end of all your existing filters that looks like this: -f "P *~". This rule prevents previously backed-up files from being deleted.

Note that if you are supplying your own filter rules, you may need to manually insert your own exclude/protect rule somewhere higher up in the list so that it has a high enough priority to be effective (e.g. if your rules specify a trailing inclusion/exclusion of *, the auto-added rule would never be reached).
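As a minimal sketch (the paths and suffix are hypothetical), the following keeps the previous version of any file that the transfer replaces, renamed with a .old suffix:

    rsync -ab --suffix=.old /src/ /dest/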
--backup-dir=DIR
This implies the --backup option, and tells rsync to store all backups in the specified directory on the receiving side. This can be used for incremental backups. You can additionally specify a backup suffix using the --suffix option (otherwise the files backed up in the specified directory will keep their original filenames).

Note that if you specify a relative path, the backup directory will be relative to the destination directory, so you probably want to specify either an absolute path or a path that starts with "../". If an rsync daemon is the receiver, the backup dir cannot go outside the module's path hierarchy, so take extra care not to delete it or copy into it.

--suffix=SUFFIX
This option allows you to override the default backup suffix used with the --backup (-b) option. The default suffix is a ~ if no --backup-dir was specified, otherwise it is an empty string.

--update, -u
This forces rsync to skip any files which exist on the destination and have a modified time that is newer than the source file. (If an existing destination file has a modification time equal to the source file's, it will be updated if the sizes are different.)

Note that this does not affect the copying of dirs, symlinks, or other special files. Also, a difference of file format between the sender and receiver is always considered to be important enough for an update, no matter what date is on the objects. In other words, if the source has a directory where the destination has a file, the transfer would occur regardless of the timestamps.

This option is a TRANSFER RULE, so don't expect any exclude side effects.

A caution for those that choose to combine --inplace with --update: an interrupted transfer will leave behind a partial file on the receiving side that has a very recent modified time, so re-running the transfer will probably not continue the interrupted file. As such, it is usually best to avoid combining this with --inplace unless you have implemented manual steps to handle any interrupted in-progress files.

--inplace
This option changes how rsync transfers a file when its data needs to be updated: instead of the default method of creating a new copy of the file and moving it into place when it is complete, rsync instead writes the updated data directly to the destination file. This has several effects:

o  Hard links are not broken. This means the new data will be visible through other hard links to the destination file. Moreover, attempts to copy differing source files onto a multiply-linked destination file will result in a "tug of war" with the destination data changing back and forth.

o  In-use binaries cannot be updated (either the OS will prevent this from happening, or binaries that attempt to swap-in their data will misbehave or crash).

o  The file's data will be in an inconsistent state during the transfer and will be left that way if the transfer is interrupted or if an update fails.

o  A file that rsync cannot write to cannot be updated. While a super user can update any file, a normal user needs to be granted write permission for the open of the file for writing to be successful.

o  The efficiency of rsync's delta-transfer algorithm may be reduced if some data in the destination file is overwritten before it can be copied to a position later in the file. This does not apply if you use --backup, since rsync is smart enough to use the backup file as the basis file for the transfer.
WARNING: you should not use this option to update files that are being accessed by others, so be careful when choosing to use this for a copy.

This option is useful for transferring large files with block-based changes or appended data, and also on systems that are disk bound, not network bound. It can also help keep a copy-on-write filesystem snapshot from diverging the entire contents of a file that only has minor changes.

The option implies --partial (since an interrupted transfer does not delete the file), but conflicts with --partial-dir and --delay-updates. Prior to rsync 2.6.4 --inplace was also incompatible with --compare-dest and --link-dest.

--append
This special copy mode only works to efficiently update files that are known to be growing larger where any existing content on the receiving side is also known to be the same as the content on the sender. The use of --append can be dangerous if you aren't 100% sure that all the files in the transfer are shared, growing files. You should thus use filter rules to ensure that you weed out any files that do not fit this criterion.

Rsync updates these growing files in-place without verifying any of the existing content in the file (it only verifies the content that it is appending). Rsync skips any files that exist on the receiving side that are not shorter than the associated file on the sending side (which means that new files are transferred). It also skips any files whose size on the sending side gets shorter during the send negotiations (rsync warns about a "diminished" file when this happens).

This does not interfere with the updating of a file's non-content attributes (e.g. permissions, ownership, etc.) when the file does not need to be transferred, nor does it affect the updating of any directories or non-regular files.

--append-verify
This special copy mode works like --append except that all the data in the file is included in the checksum verification (making it less efficient but also potentially safer). This option can be dangerous if you aren't 100% sure that all the files in the transfer are shared, growing files. See the --append option for more details.

Note: prior to rsync 3.0.0, the --append option worked like --append-verify, so if you are interacting with an older rsync (or the transfer is using a protocol prior to 30), specifying either append option will initiate an --append-verify transfer.

--dirs, -d
Tell the sending side to include any directories that are encountered. Unlike --recursive, a directory's contents are not copied unless the directory name specified is "." or ends with a trailing slash (e.g. ".", "dir/.", "dir/", etc.). Without this option or the --recursive option, rsync will skip all directories it encounters (and output a message to that effect for each one). If you specify both --dirs and --recursive, --recursive takes precedence.

The --dirs option is implied by the --files-from option or the --list-only option (including an implied --list-only usage) if --recursive wasn't specified (so that directories are seen in the listing). Specify --no-dirs (or --no-d) if you want to turn this off.

There is also a backward-compatibility helper option, --old-dirs (--old-d) that tells rsync to use a hack of -r --exclude='/*/*' to get an older rsync to list a single directory without recursing.

--mkpath
Create all missing path components of the destination path.
By default, rsync allows only the final component of the destination path to not exist, which is an attempt to help you to validate your destination path. With this option, rsync creates all the missing destination-path components, just as if mkdir -p $DEST_PATH had been run on the receiving side.

When specifying a destination path, including a trailing slash ensures that the whole path is treated as directory names to be created, even when the file list has a single item. See the COPYING TO A DIFFERENT NAME section for full details on how rsync decides if a final destination-path component should be created as a directory or not.

If you would like the newly-created destination dirs to match the dirs on the sending side, you should be using --relative (-R) instead of --mkpath. For instance, the following two commands result in the same destination tree, but only the second command ensures that the "some/extra/path" components match the dirs on the sending side:

    rsync -ai --mkpath host:some/extra/path/*.c some/extra/path/
    rsync -aiR host:some/extra/path/*.c ./

--links, -l
Add symlinks to the transferred files instead of noisily ignoring them with a "non-regular file" warning for each symlink encountered. You can alternately silence the warning by specifying --info=nonreg0.

The default handling of symlinks is to recreate each symlink's unchanged value on the receiving side.

See the SYMBOLIC LINKS section for multi-option info.

--copy-links, -L
The sender transforms each symlink encountered in the transfer into the referent item, following the symlink chain to the file or directory that it references. If a symlink chain is broken, an error is output and the file is dropped from the transfer.

This option supersedes any other options that affect symlinks in the transfer, since there are no symlinks left in the transfer.

This option does not change the handling of existing symlinks on the receiving side, unlike versions of rsync prior to 2.6.3 which had the side-effect of telling the receiving side to also follow symlinks. A modern rsync won't forward this option to a remote receiver (since only the sender needs to know about it), so this caveat should only affect someone using an rsync client older than 2.6.7 (which is when -L stopped being forwarded to the receiver).

See the --keep-dirlinks (-K) option if you need a symlink to a directory to be treated as a real directory on the receiving side.

See the SYMBOLIC LINKS section for multi-option info.

--copy-unsafe-links
This tells rsync to copy the referent of symbolic links that point outside the copied tree. Absolute symlinks are also treated like ordinary files, and so are any symlinks in the source path itself when --relative is used.

Note that the cut-off point is the top of the transfer, which is the part of the path that rsync isn't mentioning in the verbose output. If you copy "/src/subdir" to "/dest/" then the "subdir" directory is a name inside the transfer tree, not the top of the transfer (which is /src) so it is legal for created relative symlinks to refer to other names inside the /src and /dest directories. If you instead copy "/src/subdir/" (with a trailing slash) to "/dest/subdir" that would not allow symlinks to any files outside of "subdir".

Note that safe symlinks are only copied if --links was also specified or implied. The --copy-unsafe-links option has no extra effect when combined with --copy-links.

See the SYMBOLIC LINKS section for multi-option info.
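For example (hypothetical paths), the following preserves symlinks that stay inside the copied tree (via the -l implied by -a) while turning any that point outside it into real copies of their referents:

    rsync -a --copy-unsafe-links /src/tree/ /dest/tree/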
--safe-links
This tells the receiving rsync to ignore any symbolic links in the transfer which point outside the copied tree. All absolute symlinks are also ignored.

Since this ignoring is happening on the receiving side, it will still be effective even when the sending side has munged symlinks (when it is using --munge-links). It also affects deletions, since the file being present in the transfer prevents any matching file on the receiver from being deleted when the symlink is deemed to be unsafe and is skipped.

This option must be combined with --links (or --archive) to have any symlinks in the transfer to conditionally ignore. Its effect is superseded by --copy-unsafe-links.

Using this option in conjunction with --relative may give unexpected results.

See the SYMBOLIC LINKS section for multi-option info.

--munge-links
This option affects just one side of the transfer and tells rsync to munge symlink values when it is receiving files or unmunge symlink values when it is sending files. The munged values make the symlinks unusable on disk but allow the original contents of the symlinks to be recovered.

The server-side rsync often enables this option without the client's knowledge, such as in an rsync daemon's configuration file or by an option given to the rrsync (restricted rsync) script. When specified on the client side, specify the option normally if it is the client side that has/needs the munged symlinks, or use -M--munge-links to give the option to the server when it has/needs the munged symlinks. Note that on a local transfer, the client is the sender, so specifying the option directly unmunges symlinks while specifying it as a remote option munges symlinks.

This option has no effect when sent to a daemon via --remote-option because the daemon configures whether it wants munged symlinks via its "munge symlinks" parameter.

The symlink value is munged/unmunged once it is in the transfer, so any option that transforms symlinks into non-symlinks occurs prior to the munging/unmunging except for --safe-links, which is a choice that the receiver makes, so it bases its decision on the munged/unmunged value. This does mean that if a receiver has munging enabled, using --safe-links will cause all symlinks to be ignored (since they are all absolute).

The method that rsync uses to munge the symlinks is to prefix each one's value with the string "/rsyncd-munged/". This prevents the links from being used as long as the directory does not exist. When this option is enabled, rsync will refuse to run if that path is a directory or a symlink to a directory (though it only checks at startup). See also the "munge-symlinks" python script in the support directory of the source code for a way to munge/unmunge one or more symlinks in-place.

--copy-dirlinks, -k
This option causes the sending side to treat a symlink to a directory as though it were a real directory. This is useful if you don't want symlinks to non-directories to be affected, as they would be using --copy-links.

Without this option, if the sending side has replaced a directory with a symlink to a directory, the receiving side will delete anything that is in the way of the new symlink, including a directory hierarchy (as long as --force or --delete is in effect).

See also --keep-dirlinks for an analogous option for the receiving side.

--copy-dirlinks applies to all symlinks to directories in the source.
If you want to follow only a few specified symlinks, a trick you can use is to pass them as additional source args with a trailing slash, using --relative to make the paths match up right. For example:

    rsync -r --relative src/./ src/./follow-me/ dest/

This works because rsync calls lstat(2) on the source arg as given, and the trailing slash makes lstat(2) follow the symlink, giving rise to a directory in the file-list which overrides the symlink found during the scan of "src/./".

See the SYMBOLIC LINKS section for multi-option info.

--keep-dirlinks, -K
This option causes the receiving side to treat a symlink to a directory as though it were a real directory, but only if it matches a real directory from the sender. Without this option, the receiver's symlink would be deleted and replaced with a real directory.

For example, suppose you transfer a directory "foo" that contains a file "file", but "foo" is a symlink to directory "bar" on the receiver. Without --keep-dirlinks, the receiver deletes symlink "foo", recreates it as a directory, and receives the file into the new directory. With --keep-dirlinks, the receiver keeps the symlink and "file" ends up in "bar".

One note of caution: if you use --keep-dirlinks, you must trust all the symlinks in the copy or enable the --munge-links option on the receiving side! If it is possible for an untrusted user to create their own symlink to any real directory, the user could then (on a subsequent copy) replace the symlink with a real directory and affect the content of whatever directory the symlink references. For backup copies, you are better off using something like a bind mount instead of a symlink to modify your receiving hierarchy.

See also --copy-dirlinks for an analogous option for the sending side.

See the SYMBOLIC LINKS section for multi-option info.

--hard-links, -H
This tells rsync to look for hard-linked files in the source and link together the corresponding files on the destination. Without this option, hard-linked files in the source are treated as though they were separate files.

This option does NOT necessarily ensure that the pattern of hard links on the destination exactly matches that on the source. Cases in which the destination may end up with extra hard links include the following:

o  If the destination contains extraneous hard-links (more linking than what is present in the source file list), the copying algorithm will not break them explicitly. However, if one or more of the paths have content differences, the normal file-update process will break those extra links (unless you are using the --inplace option).

o  If you specify a --link-dest directory that contains hard links, the linking of the destination files against the --link-dest files can cause some paths in the destination to become linked together due to the --link-dest associations.

Note that rsync can only detect hard links between files that are inside the transfer set. If rsync updates a file that has extra hard-link connections to files outside the transfer, that linkage will be broken. If you are tempted to use the --inplace option to avoid this breakage, be very careful that you know how your files are being updated so that you are certain that no unintended changes happen due to lingering hard links (and see the --inplace option for more caveats).

If incremental recursion is active (see --inc-recursive), rsync may transfer a missing hard-linked file before it finds that another link for that contents exists elsewhere in the hierarchy.
This does not affect the accuracy of the transfer (i.e. which files are hard-linked together), just its efficiency (i.e. copying the data for a new, early copy of a hard-linked file that could have been found later in the transfer in another member of the hard-linked set of files). One way to avoid this inefficiency is to disable incremental recursion using the --no-inc-recursive option.

--perms, -p
This option causes the receiving rsync to set the destination permissions to be the same as the source permissions. (See also the --chmod option for a way to modify what rsync considers to be the source permissions.)

When this option is off, permissions are set as follows:

o  Existing files (including updated files) retain their existing permissions, though the --executability option might change just the execute permission for the file.

o  New files get their "normal" permission bits set to the source file's permissions masked with the receiving directory's default permissions (either the receiving process's umask, or the permissions specified via the destination directory's default ACL), and their special permission bits disabled except in the case where a new directory inherits a setgid bit from its parent directory.

Thus, when --perms and --executability are both disabled, rsync's behavior is the same as that of other file-copy utilities, such as cp(1) and tar(1).

In summary: to give destination files (both old and new) the source permissions, use --perms. To give new files the destination-default permissions (while leaving existing files unchanged), make sure that the --perms option is off and use --chmod=ugo=rwX (which ensures that all non-masked bits get enabled). If you'd care to make this latter behavior easier to type, you could define a popt alias for it, such as putting this line in the file ~/.popt (the following defines the -Z option, and includes --no-g to use the default group of the destination dir):

    rsync alias -Z --no-p --no-g --chmod=ugo=rwX

You could then use this new option in a command such as this one:

    rsync -avZ src/ dest/

(Caveat: make sure that -a does not follow -Z, or it will re-enable the two --no-* options mentioned above.)

The preservation of the destination's setgid bit on newly-created directories when --perms is off was added in rsync 2.6.7. Older rsync versions erroneously preserved the three special permission bits for newly-created files when --perms was off, while overriding the destination's setgid bit setting on a newly-created directory. Default ACL observance was added to the ACL patch for rsync 2.6.7, so older (or non-ACL-enabled) rsyncs use the umask even if default ACLs are present. (Keep in mind that it is the version of the receiving rsync that affects these behaviors.)

--executability, -E
This option causes rsync to preserve the executability (or non-executability) of regular files when --perms is not enabled. A regular file is considered to be executable if at least one 'x' is turned on in its permissions.

When an existing destination file's executability differs from that of the corresponding source file, rsync modifies the destination file's permissions as follows:

o  To make a file non-executable, rsync turns off all its 'x' permissions.

o  To make a file executable, rsync turns on each 'x' permission that has a corresponding 'r' permission enabled.

If --perms is enabled, this option is ignored.

--acls, -A
This option causes rsync to update the destination ACLs to be the same as the source ACLs. The option also implies --perms.
The source and destination systems must have compatible ACL entries for this option to work properly. See the --fake-super option for a way to backup and restore ACLs that are not compatible.

--xattrs, -X
This option causes rsync to update the destination extended attributes to be the same as the source ones.

For systems that support extended-attribute namespaces, a copy being done by a super-user copies all namespaces except system.*. A normal user only copies the user.* namespace. To be able to backup and restore non-user namespaces as a normal user, see the --fake-super option.

The above name filtering can be overridden by using one or more filter options with the x modifier. When you specify an xattr-affecting filter rule, rsync requires that you do your own system/user filtering, as well as any additional filtering for what xattr names are copied and what names are allowed to be deleted. For example, to skip the system namespace, you could specify:

    --filter='-x system.*'

To skip all namespaces except the user namespace, you could specify a negated-user match:

    --filter='-x! user.*'

To prevent any attributes from being deleted, you could specify a receiver-only rule that excludes all names:

    --filter='-xr *'

Note that the -X option does not copy rsync's special xattr values (e.g. those used by --fake-super) unless you repeat the option (e.g. -XX). This "copy all xattrs" mode cannot be used with --fake-super.

--chmod=CHMOD
This option tells rsync to apply one or more comma-separated "chmod" modes to the permission of the files in the transfer. The resulting value is treated as though it were the permissions that the sending side supplied for the file, which means that this option can seem to have no effect on existing files if --perms is not enabled.

In addition to the normal parsing rules specified in the chmod(1) manpage, you can specify an item that should only apply to a directory by prefixing it with a 'D', or specify an item that should only apply to a file by prefixing it with a 'F'. For example, the following will ensure that all directories get marked set-gid, that no files are other-writable, that both are user-writable and group-writable, and that both have consistent executability across all bits:

    --chmod=Dg+s,ug+w,Fo-w,+X

Using octal mode numbers is also allowed:

    --chmod=D2775,F664

It is also legal to specify multiple --chmod options, as each additional option is just appended to the list of changes to make.

See the --perms and --executability options for how the resulting permission value can be applied to the files in the transfer.

--owner, -o
This option causes rsync to set the owner of the destination file to be the same as the source file, but only if the receiving rsync is being run as the super-user (see also the --super and --fake-super options). Without this option, the owner of new and/or transferred files is set to the invoking user on the receiving side.

The preservation of ownership will associate matching names by default, but may fall back to using the ID number in some circumstances (see also the --numeric-ids option for a full discussion).

--group, -g
This option causes rsync to set the group of the destination file to be the same as the source file. If the receiving program is not running as the super-user (or if --no-super was specified), only groups that the invoking user on the receiving side is a member of will be preserved. Without this option, the group is set to the default group of the invoking user on the receiving side.
The preservation of group information will associate matching names by default, but may fall back to using the ID number in some circumstances (see also the --numeric-ids option for a full discussion).

--devices
This option causes rsync to transfer character and block device files to the remote system to recreate these devices. If the receiving rsync is not being run as the super-user, rsync silently skips creating the device files (see also the --super and --fake-super options).

By default, rsync generates a "non-regular file" warning for each device file encountered when this option is not set. You can silence the warning by specifying --info=nonreg0.

--specials
This option causes rsync to transfer special files, such as named sockets and fifos. If the receiving rsync is not being run as the super-user, rsync silently skips creating the special files (see also the --super and --fake-super options).

By default, rsync generates a "non-regular file" warning for each special file encountered when this option is not set. You can silence the warning by specifying --info=nonreg0.

-D
The -D option is equivalent to "--devices --specials".

--copy-devices
This tells rsync to treat a device on the sending side as a regular file, allowing it to be copied to a normal destination file (or another device if --write-devices was also specified).

This option is refused by default by an rsync daemon.

--write-devices
This tells rsync to treat a device on the receiving side as a regular file, allowing the writing of file data into a device.

This option implies the --inplace option.

Be careful using this, as you should know what devices are present on the receiving side of the transfer, especially when running rsync as root.

This option is refused by default by an rsync daemon.

--times, -t
This tells rsync to transfer modification times along with the files and update them on the remote system. Note that if this option is not used, the optimization that excludes files that have not been modified cannot be effective; in other words, a missing -t (or -a) will cause the next transfer to behave as if it used --ignore-times (-I), causing all files to be updated (though rsync's delta-transfer algorithm will make the update fairly efficient if the files haven't actually changed, you're much better off using -t).

A modern rsync that is using transfer protocol 30 or 31 conveys a modify time using up to 8-bytes. If rsync is forced to speak an older protocol (perhaps due to the remote rsync being older than 3.0.0) a modify time is conveyed using 4-bytes. Prior to 3.2.7, these shorter values could convey a date range of 13-Dec-1901 to 19-Jan-2038. Beginning with 3.2.7, these 4-byte values now convey a date range of 1-Jan-1970 to 7-Feb-2106. If you have files dated older than 1970, make sure your rsync executables are upgraded so that the full range of dates can be conveyed.

--atimes, -U
This tells rsync to set the access (use) times of the destination files to the same value as the source files.

If repeated, it also sets the --open-noatime option, which can help you to make the sending and receiving systems have the same access times on the transferred files without needing to run rsync an extra time after a file is transferred.

Note that some older rsync versions (prior to 3.2.0) may have been built with a pre-release --atimes patch that does not imply --open-noatime when this option is repeated.
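As an illustration (the paths are hypothetical, and the command assumes both sides are new enough to support -U), this preserves access times in addition to what -a already keeps, while also avoiding disturbing the source's atimes as files are read (see --open-noatime, next):

    rsync -aU --open-noatime /src/ /dest/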
--open-noatime
This tells rsync to open files with the O_NOATIME flag (on systems that support it) to avoid changing the access time of the files that are being transferred. If your OS does not support the O_NOATIME flag then rsync will silently ignore this option. Note also that some filesystems are mounted to avoid updating the atime on read access even without the O_NOATIME flag being set.

--crtimes, -N
This tells rsync to set the create times (newness) of the destination files to the same value as the source files.

--omit-dir-times, -O
This tells rsync to omit directories when it is preserving modification, access, and create times. If NFS is sharing the directories on the receiving side, it is a good idea to use -O. This option is inferred if you use --backup without --backup-dir.

This option also has the side-effect of avoiding early creation of missing sub-directories when incremental recursion is enabled, as discussed in the --inc-recursive section.

--omit-link-times, -J
This tells rsync to omit symlinks when it is preserving modification, access, and create times.

--super
This tells the receiving side to attempt super-user activities even if the receiving rsync wasn't run by the super-user. These activities include: preserving users via the --owner option, preserving all groups (not just the current user's groups) via the --group option, and copying devices via the --devices option. This is useful for systems that allow such activities without being the super-user, and also for ensuring that you will get errors if the receiving side isn't being run as the super-user. To turn off super-user activities, the super-user can use --no-super.

--fake-super
When this option is enabled, rsync simulates super-user activities by saving/restoring the privileged attributes via special extended attributes that are attached to each file (as needed). This includes the file's owner and group (if it is not the default), the file's device info (device & special files are created as empty text files), and any permission bits that we won't allow to be set on the real file (e.g. the real file gets u-s,g-s,o-t for safety) or that would limit the owner's access (since the real super-user can always access/change a file, the files we create can always be accessed/changed by the creating user).

This option also handles ACLs (if --acls was specified) and non-user extended attributes (if --xattrs was specified).

This is a good way to backup data without using a super-user, and to store ACLs from incompatible systems.

The --fake-super option only affects the side where the option is used. To affect the remote side of a remote-shell connection, use the --remote-option (-M) option:

    rsync -av -M--fake-super /src/ host:/dest/

For a local copy, this option affects both the source and the destination. If you wish a local copy to enable this option just for the destination files, specify -M--fake-super. If you wish a local copy to enable this option just for the source files, combine --fake-super with -M--super.

This option is overridden by both --super and --no-super.

See also the fake super setting in the daemon's rsyncd.conf file.

--sparse, -S
Try to handle sparse files efficiently so they take up less space on the destination. If combined with --inplace the file created might not end up with sparse blocks with some combinations of kernel version and/or filesystem type. If --whole-file is in effect (e.g.
for a local copy) then it will always work because rsync truncates the file prior to writing out the updated version.

Note that versions of rsync older than 3.1.3 will reject the combination of --sparse and --inplace.

--preallocate
This tells the receiver to allocate each destination file to its eventual size before writing data to the file. Rsync will only use the real filesystem-level preallocation support provided by Linux's fallocate(2) system call or Cygwin's posix_fallocate(3), not the slow glibc implementation that writes a null byte into each block.

Without this option, larger files may not be entirely contiguous on the filesystem, but with this option rsync will probably copy more slowly. If the destination is not an extent-supporting filesystem (such as ext4, xfs, NTFS, etc.), this option may have no positive effect at all.

If combined with --sparse, the file will only have sparse blocks (as opposed to allocated sequences of null bytes) if the kernel version and filesystem type support creating holes in the allocated data.

--dry-run, -n
This makes rsync perform a trial run that doesn't make any changes (and produces mostly the same output as a real run). It is most commonly used in combination with the --verbose (-v) and/or --itemize-changes (-i) options to see what an rsync command is going to do before one actually runs it.

The output of --itemize-changes is supposed to be exactly the same on a dry run and a subsequent real run (barring intentional trickery and system call failures); if it isn't, that's a bug. Other output should be mostly unchanged, but may differ in some areas. Notably, a dry run does not send the actual data for file transfers, so --progress has no effect, the "bytes sent", "bytes received", "literal data", and "matched data" statistics are too small, and the "speedup" value is equivalent to a run where no file transfers were needed.

--whole-file, -W
This option disables rsync's delta-transfer algorithm, which causes all transferred files to be sent whole. The transfer may be faster if this option is used when the bandwidth between the source and destination machines is higher than the bandwidth to disk (especially when the "disk" is actually a networked filesystem). This is the default when both the source and destination are specified as local paths, but only if no batch-writing option is in effect.

--no-whole-file, --no-W
Disable whole-file updating when it is enabled by default for a local transfer. This usually slows rsync down, but it can be useful if you are trying to minimize the writes to the destination file (if combined with --inplace) or for testing the checksum-based update algorithm.

See also the --whole-file option.

--checksum-choice=STR, --cc=STR
This option overrides the checksum algorithms. If one algorithm name is specified, it is used for both the transfer checksums and (assuming --checksum is specified) the pre-transfer checksums. If two comma-separated names are supplied, the first name affects the transfer checksums, and the second name affects the pre-transfer checksums (-c).

The checksum options that you may be able to use are:

o  auto (the default automatic choice)
o  xxh128
o  xxh3
o  xxh64 (aka xxhash)
o  md5
o  md4
o  sha1
o  none

Run rsync --version to see the default checksum list compiled into your version (which may differ from the list above).

If "none" is specified for the first (or only) name, the --whole-file option is forced on and no checksum verification is performed on the transferred data.
If "none" is specified for the second (or only) name, the --checksum option cannot be used. The "auto" option is the default, where rsync bases its algorithm choice on a negotiation between the client and the server as follows: When both sides of the transfer are at least 3.2.0, rsync chooses the first algorithm in the client's list of choices that is also in the server's list of choices. If no common checksum choice is found, rsync exits with an error. If the remote rsync is too old to support checksum negotiation, a value is chosen based on the protocol version (which chooses between MD5 and various flavors of MD4 based on protocol age). The default order can be customized by setting the environment variable RSYNC_CHECKSUM_LIST to a space- separated list of acceptable checksum names. If the string contains a "&" character, it is separated into the "client string & server string", otherwise the same string applies to both. If the string (or string portion) contains no non-whitespace characters, the default checksum list is used. This method does not allow you to specify the transfer checksum separately from the pre- transfer checksum, and it discards "auto" and all unknown checksum names. A list with only invalid names results in a failed negotiation. The use of the --checksum-choice option overrides this environment list. --one-file-system, -x This tells rsync to avoid crossing a filesystem boundary when recursing. This does not limit the user's ability to specify items to copy from multiple filesystems, just rsync's recursion through the hierarchy of each directory that the user specified, and also the analogous recursion on the receiving side during deletion. Also keep in mind that rsync treats a "bind" mount to the same device as being on the same filesystem. If this option is repeated, rsync omits all mount-point directories from the copy. Otherwise, it includes an empty directory at each mount-point it encounters (using the attributes of the mounted directory because those of the underlying mount-point directory are inaccessible). If rsync has been told to collapse symlinks (via --copy- links or --copy-unsafe-links), a symlink to a directory on another device is treated like a mount-point. Symlinks to non-directories are unaffected by this option. --ignore-non-existing, --existing This tells rsync to skip creating files (including directories) that do not exist yet on the destination. If this option is combined with the --ignore-existing option, no files will be updated (which can be useful if all you want to do is delete extraneous files). This option is a TRANSFER RULE, so don't expect any exclude side effects. --ignore-existing This tells rsync to skip updating files that already exist on the destination (this does not ignore existing directories, or nothing would get done). See also --ignore-non-existing. This option is a TRANSFER RULE, so don't expect any exclude side effects. This option can be useful for those doing backups using the --link-dest option when they need to continue a backup run that got interrupted. Since a --link-dest run is copied into a new directory hierarchy (when it is used properly), using [--ignore-existing will ensure that the already-handled files don't get tweaked (which avoids a change in permissions on the hard-linked files). This does mean that this option is only looking at the existing files in the destination hierarchy itself. 
When --info=skip2 is used rsync will output "FILENAME exists (INFO)" messages where the INFO indicates one of "type change", "sum change" (requires -c), "file change" (based on the quick check), "attr change", or "uptodate". Using --info=skip1 (which is also implied by 2 -v options) outputs the exists message without the INFO suffix.

--remove-source-files
This tells rsync to remove from the sending side the files (meaning non-directories) that are a part of the transfer and have been successfully duplicated on the receiving side.

Note that you should only use this option on source files that are quiescent. If you are using this to move files that show up in a particular directory over to another host, make sure that the finished files get renamed into the source directory, not directly written into it, so that rsync can't possibly transfer a file that is not yet fully written. If you can't first write the files into a different directory, you should use a naming idiom that lets rsync avoid transferring files that are not yet finished (e.g. name the file "foo.new" when it is written, rename it to "foo" when it is done, and then use the option --exclude='*.new' for the rsync transfer).

Starting with 3.1.0, rsync will skip the sender-side removal (and output an error) if the file's size or modify time has not stayed unchanged.

Starting with 3.2.6, a local rsync copy will ensure that the sender does not remove a file the receiver just verified, such as when the user accidentally makes the source and destination directory the same path.

--delete
This tells rsync to delete extraneous files from the receiving side (ones that aren't on the sending side), but only for the directories that are being synchronized. You must have asked rsync to send the whole directory (e.g. "dir" or "dir/") without using a wildcard for the directory's contents (e.g. "dir/*") since the wildcard is expanded by the shell and rsync thus gets a request to transfer individual files, not the files' parent directory. Files that are excluded from the transfer are also excluded from being deleted unless you use the --delete-excluded option or mark the rules as only matching on the sending side (see the include/exclude modifiers in the FILTER RULES section).

Prior to rsync 2.6.7, this option would have no effect unless --recursive was enabled. Beginning with 2.6.7, deletions will also occur when --dirs (-d) is enabled, but only for directories whose contents are being copied.

This option can be dangerous if used incorrectly! It is a very good idea to first try a run using the --dry-run (-n) option to see what files are going to be deleted.

If the sending side detects any I/O errors, then the deletion of any files at the destination will be automatically disabled. This is to prevent temporary filesystem failures (such as NFS errors) on the sending side from causing a massive deletion of files on the destination. You can override this with the --ignore-errors option.

The --delete option may be combined with one of the --delete-WHEN options without conflict, as well as --delete-excluded. However, if none of the --delete-WHEN options are specified, rsync will choose the --delete-during algorithm when talking to rsync 3.0.0 or newer, or the --delete-before algorithm when talking to an older rsync. See also --delete-delay and --delete-after.

--delete-before
Request that the file-deletions on the receiving side be done before the transfer starts. See --delete (which is implied) for more details on file-deletion.
Deleting before the transfer is helpful if the filesystem is tight for space and removing extraneous files would help to make the transfer possible. However, it does introduce a delay before the start of the transfer, and this delay might cause the transfer to timeout (if --timeout was specified). It also forces rsync to use the old, non-incremental recursion algorithm that requires rsync to scan all the files in the transfer into memory at once (see --recursive).

--delete-during, --del
Request that the file-deletions on the receiving side be done incrementally as the transfer happens. The per-directory delete scan is done right before each directory is checked for updates, so it behaves like a more efficient --delete-before, including doing the deletions prior to any per-directory filter files being updated. This option was first added in rsync version 2.6.4. See --delete (which is implied) for more details on file-deletion.

--delete-delay
Request that the file-deletions on the receiving side be computed during the transfer (like --delete-during), and then removed after the transfer completes. This is useful when combined with --delay-updates and/or --fuzzy, and is more efficient than using --delete-after (but can behave differently, since --delete-after computes the deletions in a separate pass after all updates are done). If the number of removed files overflows an internal buffer, a temporary file will be created on the receiving side to hold the names (it is removed while open, so you shouldn't see it during the transfer). If the creation of the temporary file fails, rsync will try to fall back to using --delete-after (which it cannot do if --recursive is doing an incremental scan). See --delete (which is implied) for more details on file-deletion.

--delete-after
Request that the file-deletions on the receiving side be done after the transfer has completed. This is useful if you are sending new per-directory merge files as a part of the transfer and you want their exclusions to take effect for the delete phase of the current transfer. It also forces rsync to use the old, non-incremental recursion algorithm that requires rsync to scan all the files in the transfer into memory at once (see --recursive). See --delete (which is implied) for more details on file-deletion.

See also the --delete-delay option that might be a faster choice for those that just want the deletions to occur at the end of the transfer.

--delete-excluded
This option turns any unqualified exclude/include rules into server-side rules that do not affect the receiver's deletions.

By default, an exclude or include has both a server-side effect (to "hide" and "show" files when building the server's file list) and a receiver-side effect (to "protect" and "risk" files when deletions are occurring). Any rule that has no modifier to specify what sides it is executed on will be instead treated as if it were a server-side rule only, avoiding any "protect" effects of the rules.

A rule can still apply to both sides even with this option specified if the rule is given both the sender & receiver modifier letters (e.g., -f'-sr foo'). Receiver-side protect/risk rules can also be explicitly specified to limit the deletions. This saves you from having to edit a bunch of -f'- foo' rules into -f'-s foo' (aka -f'H foo') rules (not to mention the corresponding includes).

See the FILTER RULES section for more information. See --delete (which is implied) for more details on deletion.
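For example (hypothetical paths), a cautious mirroring workflow might preview the deletions with a dry run first, then do the real transfer with the deletions deferred to the end:

    rsync -ani --delete /src/ /mirror/
    rsync -a --delete-delay /src/ /mirror/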
--ignore-missing-args When rsync is first processing the explicitly requested source files (e.g. command-line arguments or --files-from entries), it is normally an error if the file cannot be found. This option suppresses that error, and does not try to transfer the file. This does not affect subsequent vanished-file errors if a file was initially found to be present and later is no longer there.

--delete-missing-args This option takes the behavior of the (implied) --ignore-missing-args option a step farther: each missing arg will become a deletion request of the corresponding destination file on the receiving side (should it exist). If the destination file is a non-empty directory, it will only be successfully deleted if --force or --delete are in effect. Other than that, this option is independent of any other type of delete processing. The missing source files are represented by special file-list entries which display as a "*missing" entry in the --list-only output.

--ignore-errors Tells --delete to go ahead and delete files even when there are I/O errors.

--force This option tells rsync to delete a non-empty directory when it is to be replaced by a non-directory. This is only relevant if deletions are not active (see --delete for details). Note for older rsync versions: --force used to still be required when using --delete-after, and it used to be non-functional unless the --recursive option was also enabled.

--max-delete=NUM This tells rsync not to delete more than NUM files or directories. If that limit is exceeded, all further deletions are skipped through the end of the transfer. At the end, rsync outputs a warning (including a count of the skipped deletions) and exits with an error code of 25 (unless some more important error condition also occurred). Beginning with version 3.0.0, you may specify --max-delete=0 to be warned about any extraneous files in the destination without removing any of them. Older clients interpreted this as "unlimited", so if you don't know what version the client is, you can use the less obvious --max-delete=-1 as a backward-compatible way to specify that no deletions be allowed (though really old versions didn't warn when the limit was exceeded).

--max-size=SIZE This tells rsync to avoid transferring any file that is larger than the specified SIZE. A numeric value can be suffixed with a string to indicate the numeric units or left unqualified to specify bytes. Feel free to use a fractional value along with the units, such as --max-size=1.5m. This option is a TRANSFER RULE, so don't expect any exclude side effects. The first letter of a units string can be B (bytes), K (kilo), M (mega), G (giga), T (tera), or P (peta). If the string is a single char or has "ib" added to it (e.g. "G" or "GiB") then the units are multiples of 1024. If you use a two-letter suffix that ends with a "B" (e.g. "kb") then you get units that are multiples of 1000. The string's letters can be any mix of upper and lower-case that you want to use. Finally, if the string ends with either "+1" or "-1", it is offset by one byte in the indicated direction. The largest possible value is usually 8192P-1. Examples: --max-size=1.5mb-1 is 1499999 bytes, and --max-size=2g+1 is 2147483649 bytes. Note that rsync versions prior to 3.1.0 did not allow --max-size=0.

--min-size=SIZE This tells rsync to avoid transferring any file that is smaller than the specified SIZE, which can help in not transferring small, junk files. See the --max-size option for a description of SIZE and other info.
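For example, a hypothetical run that bounds both the file sizes transferred and the number of deletions might look like this:

    rsync -av --delete --max-delete=50 --min-size=1 --max-size=100m src/ dest/

This skips empty files and any file larger than 100 MiB, and stops deleting (with a warning) once 50 items have been removed.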
Note that rsync versions prior to 3.1.0 did not allow --min-size=0.

--max-alloc=SIZE By default rsync limits an individual malloc/realloc to about 1GB in size. For most people this limit works just fine and prevents a protocol error causing rsync to request massive amounts of memory. However, if you have many millions of files in a transfer, a large amount of server memory, and you don't want to split up your transfer into multiple parts, you can increase the per-allocation limit to something larger and rsync will consume more memory. Keep in mind that this is not a limit on the total size of allocated memory. It is a sanity-check value for each individual allocation. See the --max-size option for a description of how SIZE can be specified. The default suffix if none is given is bytes. Beginning in 3.2.3, a value of 0 specifies no limit. You can set a default value using the environment variable RSYNC_MAX_ALLOC using the same SIZE values as supported by this option. If the remote rsync doesn't understand the --max-alloc option, you can override an environmental value by specifying --max-alloc=1g, which will make rsync avoid sending the option to the remote side (because "1G" is the default).

--block-size=SIZE, -B This forces the block size used in rsync's delta-transfer algorithm to a fixed value. It is normally selected based on the size of each file being updated. See the technical report for details. Beginning in 3.2.3 the SIZE can be specified with a suffix as detailed in the --max-size option. Older versions only accepted a byte count.

--rsh=COMMAND, -e This option allows you to choose an alternative remote shell program to use for communication between the local and remote copies of rsync. Typically, rsync is configured to use ssh by default, but you may prefer to use rsh on a local network. If this option is used with [user@]host::module/path, then the remote shell COMMAND will be used to run an rsync daemon on the remote host, and all data will be transmitted through that remote shell connection, rather than through a direct socket connection to a running rsync daemon on the remote host. See the USING RSYNC-DAEMON FEATURES VIA A REMOTE-SHELL CONNECTION section above. Beginning with rsync 3.2.0, the RSYNC_PORT environment variable will be set when a daemon connection is being made via a remote-shell connection. It is set to 0 if the default daemon port is being assumed, or it is set to the value of the rsync port that was specified via either the --port option or a non-empty port value in an rsync:// URL. This allows the script to discern if a non-default port is being requested, allowing for things such as an SSL or stunnel helper script to connect to a default or alternate port. Command-line arguments are permitted in COMMAND provided that COMMAND is presented to rsync as a single argument. You must use spaces (not tabs or other whitespace) to separate the command and args from each other, and you can use single- and/or double-quotes to preserve spaces in an argument (but not backslashes). Note that doubling a single-quote inside a single-quoted string gives you a single-quote; likewise for double-quotes (though you need to pay attention to which quotes your shell is parsing and which quotes rsync is parsing). Some examples: -e 'ssh -p 2234' -e 'ssh -o "ProxyCommand nohup ssh firewall nc -w1 %h %p"' (Note that ssh users can alternately customize site-specific connect options in their .ssh/config file.)
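Along the same lines, a hypothetical -e value that selects a non-default port and a dedicated identity file might look like this (the port number and key path are illustrative only):

    -e 'ssh -p 2222 -i /home/backup/.ssh/backup_key'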
You can also choose the remote shell program using the RSYNC_RSH environment variable, which accepts the same range of values as -e. See also the --blocking-io option which is affected by this option.

--rsync-path=PROGRAM Use this to specify what program is to be run on the remote machine to start-up rsync. Often used when rsync is not in the default remote-shell's path (e.g. --rsync-path=/usr/local/bin/rsync). Note that PROGRAM is run with the help of a shell, so it can be any program, script, or command sequence you'd care to run, so long as it does not corrupt the standard-in & standard-out that rsync is using to communicate. One tricky example is to set a different default directory on the remote machine for use with the --relative option. For instance: rsync -avR --rsync-path="cd /a/b && rsync" host:c/d /e/

--remote-option=OPTION, -M This option is used for more advanced situations where you want certain effects to be limited to one side of the transfer only. For instance, if you want to pass --log-file=FILE and --fake-super to the remote system, specify it like this: rsync -av -M --log-file=foo -M--fake-super src/ dest/ If you want to have an option affect only the local side of a transfer when it normally affects both sides, send its negation to the remote side. Like this: rsync -av -x -M--no-x src/ dest/ Be cautious using this, as it is possible to toggle an option that will cause rsync to have a different idea about what data to expect next over the socket, and that will make it fail in a cryptic fashion. Note that you should use a separate -M option for each remote option you want to pass. On older rsync versions, the presence of any spaces in the remote-option arg could cause it to be split into separate remote args, but this requires the use of --old-args in a modern rsync. When performing a local transfer, the "local" side is the sender and the "remote" side is the receiver. Note some versions of the popt option-parsing library have a bug in them that prevents you from using an adjacent arg with an equal in it next to a short option letter (e.g. -M--log-file=/tmp/foo). If this bug affects your version of popt, you can use the version of popt that is included with rsync.

--cvs-exclude, -C This is a useful shorthand for excluding a broad range of files that you often don't want to transfer between systems. It uses a similar algorithm to CVS to determine if a file should be ignored. The exclude list is initialized to exclude the following items (these initial items are marked as perishable -- see the FILTER RULES section): RCS SCCS CVS CVS.adm RCSLOG cvslog.* tags TAGS .make.state .nse_depinfo *~ #* .#* ,* _$* *$ *.old *.bak *.BAK *.orig *.rej .del-* *.a *.olb *.o *.obj *.so *.exe *.Z *.elc *.ln core .svn/ .git/ .hg/ .bzr/ then, files listed in a $HOME/.cvsignore are added to the list and any files listed in the CVSIGNORE environment variable (all cvsignore names are delimited by whitespace). Finally, any file is ignored if it is in the same directory as a .cvsignore file and matches one of the patterns listed therein. Unlike rsync's filter/exclude files, these patterns are split on whitespace. See the cvs(1) manual for more information. If you're combining -C with your own --filter rules, you should note that these CVS excludes are appended at the end of your own rules, regardless of where the -C was placed on the command-line. This makes them a lower priority than any rules you specified explicitly.
If you want to control where these CVS excludes get inserted into your filter rules, you should omit the -C as a command-line option and use a combination of --filter=:C and --filter=-C (either on your command-line or by putting the ":C" and "-C" rules into a filter file with your other rules). The first option turns on the per-directory scanning for the .cvsignore file. The second option does a one-time import of the CVS excludes mentioned above.

--filter=RULE, -f This option allows you to add rules to selectively exclude certain files from the list of files to be transferred. This is most useful in combination with a recursive transfer. You may use as many --filter options on the command line as you like to build up the list of files to exclude. If the filter contains whitespace, be sure to quote it so that the shell gives the rule to rsync as a single argument. The text below also mentions that you can use an underscore to replace the space that separates a rule from its arg. See the FILTER RULES section for detailed information on this option.

-F The -F option is a shorthand for adding two --filter rules to your command. The first time it is used is a shorthand for this rule: --filter='dir-merge /.rsync-filter' This tells rsync to look for per-directory .rsync-filter files that have been sprinkled through the hierarchy and use their rules to filter the files in the transfer. If -F is repeated, it is a shorthand for this rule: --filter='exclude .rsync-filter' This filters out the .rsync-filter files themselves from the transfer. See the FILTER RULES section for detailed information on how these options work.

--exclude=PATTERN This option is a simplified form of the --filter option that specifies an exclude rule and does not allow the full rule-parsing syntax of normal filter rules. This is equivalent to specifying -f'- PATTERN'. See the FILTER RULES section for detailed information on this option.

--exclude-from=FILE This option is related to the --exclude option, but it specifies a FILE that contains exclude patterns (one per line). Blank lines in the file are ignored, as are whole-line comments that start with ';' or '#' (filename rules that contain those characters are unaffected). If a line begins with "- " (dash, space) or "+ " (plus, space), then the type of rule is being explicitly specified as an exclude or an include (respectively). Any rules without such a prefix are taken to be an exclude. If a line consists of just "!", then the current filter rules are cleared before adding any further rules. If FILE is '-', the list will be read from standard input.

--include=PATTERN This option is a simplified form of the --filter option that specifies an include rule and does not allow the full rule-parsing syntax of normal filter rules. This is equivalent to specifying -f'+ PATTERN'. See the FILTER RULES section for detailed information on this option.

--include-from=FILE This option is related to the --include option, but it specifies a FILE that contains include patterns (one per line). Blank lines in the file are ignored, as are whole-line comments that start with ';' or '#' (filename rules that contain those characters are unaffected). If a line begins with "- " (dash, space) or "+ " (plus, space), then the type of rule is being explicitly specified as an exclude or an include (respectively). Any rules without such a prefix are taken to be an include. If a line consists of just "!", then the current filter rules are cleared before adding any further rules.
If FILE is '-', the list will be read from standard input.

--files-from=FILE Using this option allows you to specify the exact list of files to transfer (as read from the specified FILE or '-' for standard input). It also tweaks the default behavior of rsync to make transferring just the specified files and directories easier:

o The --relative (-R) option is implied, which preserves the path information that is specified for each item in the file (use --no-relative or --no-R if you want to turn that off).

o The --dirs (-d) option is implied, which will create directories specified in the list on the destination rather than noisily skipping them (use --no-dirs or --no-d if you want to turn that off).

o The --archive (-a) option's behavior does not imply --recursive (-r), so specify it explicitly if you want it.

o These side-effects change the default state of rsync, so the position of the --files-from option on the command-line has no bearing on how other options are parsed (e.g. -a works the same before or after --files-from, as does --no-R and all other options).

The filenames that are read from the FILE are all relative to the source dir -- any leading slashes are removed and no ".." references are allowed to go higher than the source dir. For example, take this command: rsync -a --files-from=/tmp/foo /usr remote:/backup If /tmp/foo contains the string "bin" (or even "/bin"), the /usr/bin directory will be created as /backup/bin on the remote host. If it contains "bin/" (note the trailing slash), the immediate contents of the directory would also be sent (without needing to be explicitly mentioned in the file -- this began in version 2.6.4). In both cases, if the -r option was enabled, that dir's entire hierarchy would also be transferred (keep in mind that -r needs to be specified explicitly with --files-from, since it is not implied by -a). Also note that the effect of the (enabled by default) --relative option is to duplicate only the path info that is read from the file -- it does not force the duplication of the source-spec path (/usr in this case). In addition, the --files-from file can be read from the remote host instead of the local host if you specify a "host:" in front of the file (the host must match one end of the transfer). As a short-cut, you can specify just a prefix of ":" to mean "use the remote end of the transfer". For example: rsync -a --files-from=:/path/file-list src:/ /tmp/copy This would copy all the files specified in the /path/file-list file that was located on the remote "src" host. If the --iconv and --secluded-args options are specified and the --files-from filenames are being sent from one host to another, the filenames will be translated from the sending host's charset to the receiving host's charset. NOTE: sorting the list of files in the --files-from input helps rsync to be more efficient, as it will avoid re-visiting the path elements that are shared between adjacent entries. If the input is not sorted, some path elements (implied directories) may end up being scanned multiple times, and rsync will eventually unduplicate them after they get turned into file-list elements.

--from0, -0 This tells rsync that the rules/filenames it reads from a file are terminated by a null ('\0') character, not a NL, CR, or CR+LF. This affects --exclude-from, --include-from, --files-from, and any merged files specified in a --filter rule. It does not affect --cvs-exclude (since all names read from a .cvsignore file are split on whitespace).
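As a hypothetical illustration of combining --files-from with --from0, a null-terminated list built by find's -print0 avoids any trouble with unusual filenames (the paths and pattern are illustrative):

    find /usr -name '*.conf' -print0 >/tmp/conf-list
    rsync -a --from0 --files-from=/tmp/conf-list / remote:/backup/

As described above, the list entries are taken relative to the source dir ("/" here), so each file is recreated under /backup with its path preserved.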
--old-args This option tells rsync to stop trying to protect the arg values on the remote side from unintended word-splitting or other misinterpretation. It also allows the client to treat an empty arg as a "." instead of generating an error. The default in a modern rsync is for "shell-active" characters (including spaces) to be backslash-escaped in the args that are sent to the remote shell. The wildcard characters *, ?, [, & ] are not escaped in filename args (allowing them to expand into multiple filenames) while being protected in option args, such as --usermap. If you have a script that wants to use old-style arg splitting in its filenames, specify this option once. If the remote shell has a problem with any backslash escapes at all, specify this option twice. You may also control this setting via the RSYNC_OLD_ARGS environment variable. If it has the value "1", rsync will default to a single-option setting. If it has the value "2" (or more), rsync will default to a repeated-option setting. If it is "0", you'll get the default escaping behavior. The environment is always overridden by manually specified positive or negative options (the negative is --no-old-args). Note that this option also disables the extra safety check added in 3.2.5 that ensures that a remote sender isn't including extra top-level items in the file-list that you didn't request. This side-effect is necessary because we can't know for sure what names to expect when the remote shell is interpreting the args. This option conflicts with the --secluded-args option.

--secluded-args, -s This option sends all filenames and most options to the remote rsync via the protocol (not the remote shell command line) which avoids letting the remote shell modify them. Wildcards are expanded on the remote host by rsync instead of a shell. This is similar to the default backslash-escaping of args that was added in 3.2.4 (see --old-args) in that it prevents things like space splitting and unwanted special-character side-effects. However, it has the drawbacks of being incompatible with older rsync versions (prior to 3.0.0) and of being refused by restricted shells that want to be able to inspect all the option values for safety. This option is useful for those times that you need the argument's character set to be converted for the remote host, if the remote shell is incompatible with the default backslash-escaping method, or there is some other reason that you want the majority of the options and arguments to bypass the command-line of the remote shell. If you combine this option with --iconv, the args related to the remote side will be translated from the local to the remote character-set. The translation happens before wild-cards are expanded. See also the --files-from option. You may also control this setting via the RSYNC_PROTECT_ARGS environment variable. If it has a non-zero value, this setting will be enabled by default, otherwise it will be disabled by default. Either state is overridden by a manually specified positive or negative version of this option (note that --no-s and --no-secluded-args are the negative versions). This environment variable is also superseded by a non-zero RSYNC_OLD_ARGS export. This option conflicts with the --old-args option. This option used to be called --protect-args (before 3.2.6) and that older name can still be used (though specifying it as -s is always the easiest and most compatible choice).
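For instance, a hypothetical pull of a remote directory whose name contains spaces, letting rsync itself (rather than the remote shell) interpret the arg:

    rsync -av -s 'host:My Project Files/' dest/

Without -s (or the backslash-escaping default of a modern rsync), the remote shell would split the name on the spaces.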
--trust-sender This option disables two extra validation checks that a local client performs on the file list generated by a remote sender. This option should only be used if you trust the sender to not put something malicious in the file list (something that could possibly be done via a modified rsync, a modified shell, or some other similar manipulation). Normally, the rsync client (as of version 3.2.5) runs two extra validation checks when pulling files from a remote rsync:

o It verifies that additional arg items didn't get added at the top of the transfer.

o It verifies that none of the items in the file list are names that should have been excluded (if filter rules were specified).

Note that various options can turn off one or both of these checks if the option interferes with the validation. For instance:

o Using a per-directory filter file reads filter rules that only the server knows about, so the filter checking is disabled.

o Using the --old-args option allows the sender to manipulate the requested args, so the arg checking is disabled.

o Reading the files-from list from the server side means that the client doesn't know the arg list, so the arg checking is disabled.

o Using --read-batch disables both checks since the batch file's contents will have been verified when it was created.

This option may help an under-powered client if the extra pattern matching is slowing things down on a huge transfer. It can also be used to work around a currently-unknown bug in the verification logic for a transfer from a trusted sender. When using this option it is a good idea to specify a dedicated destination directory, as discussed in the MULTI-HOST SECURITY section.

--copy-as=USER[:GROUP] This option instructs rsync to use the USER and (if specified after a colon) the GROUP for the copy operations. This only works if the user that is running rsync has the ability to change users. If the group is not specified then the user's default groups are used. This option can help to reduce the risk of an rsync being run as root into or out of a directory that might have live changes happening to it, when you want to make sure that root-level read or write actions of system files are not possible. While you could alternatively run all of rsync as the specified user, sometimes you need the root-level host-access credentials to be used, so this allows rsync to drop root for the copying part of the operation after the remote-shell or daemon connection is established. The option only affects one side of the transfer unless the transfer is local, in which case it affects both sides. Use the --remote-option to affect the remote side, such as -M--copy-as=joe. For a local transfer, the lsh (or lsh.sh) support file provides a local-shell helper script that can be used to allow a "localhost:" or "lh:" host-spec to be specified without needing to set up any remote shells, allowing you to specify remote options that affect the side of the transfer that is using the host-spec (and using hostname "lh" avoids the overriding of the remote directory to the user's home dir). For example, the following rsync writes the local files as user "joe": sudo rsync -aiv --copy-as=joe host1:backups/joe/ /home/joe/ This makes all files owned by user "joe", limits the groups to those that are available to that user, and makes it impossible for the joe user to do a timed exploit of the path to induce a change to a file that the joe user has no permissions to change.
The following command does a local copy into the "dest/" dir as user "joe" (assuming you've installed support/lsh into a dir on your $PATH): sudo rsync -aive lsh -M--copy-as=joe src/ lh:dest/

--temp-dir=DIR, -T This option instructs rsync to use DIR as a scratch directory when creating temporary copies of the files transferred on the receiving side. The default behavior is to create each temporary file in the same directory as the associated destination file. Beginning with rsync 3.1.1, the temp-file names inside the specified DIR will not be prefixed with an extra dot (though they will still have a random suffix added). This option is most often used when the receiving disk partition does not have enough free space to hold a copy of the largest file in the transfer. In this case (i.e. when the scratch directory is on a different disk partition), rsync will not be able to rename each received temporary file over the top of the associated destination file, but instead must copy it into place. Rsync does this by copying the file over the top of the destination file, which means that the destination file will contain truncated data during this copy. If this were not done this way (even if the destination file were first removed, the data locally copied to a temporary file in the destination directory, and then renamed into place) it would be possible for the old file to continue taking up disk space (if someone had it open), and thus there might not be enough room to fit the new version on the disk at the same time. If you are using this option for reasons other than a shortage of disk space, you may wish to combine it with the --delay-updates option, which will ensure that all copied files get put into subdirectories in the destination hierarchy, awaiting the end of the transfer. If you don't have enough room to duplicate all the arriving files on the destination partition, another way to tell rsync that you aren't overly concerned about disk space is to use the --partial-dir option with a relative path; because this tells rsync that it is OK to stash off a copy of a single file in a subdir in the destination hierarchy, rsync will use the partial-dir as a staging area to bring over the copied file, and then rename it into place from there. (Specifying a --partial-dir with an absolute path does not have this side-effect.)

--fuzzy, -y This option tells rsync that it should look for a basis file for any destination file that is missing. The current algorithm looks in the same directory as the destination file for either a file that has an identical size and modified-time, or a similarly-named file. If found, rsync uses the fuzzy basis file to try to speed up the transfer. If the option is repeated, the fuzzy scan will also be done in any matching alternate destination directories that are specified via --compare-dest, --copy-dest, or --link-dest. Note that the use of the --delete option might get rid of any potential fuzzy-match files, so either use --delete-after or specify some filename exclusions if you need to prevent this.

--compare-dest=DIR This option instructs rsync to use DIR on the destination machine as an additional hierarchy to compare destination files against when doing transfers (if the files are missing in the destination directory). If a file is found in DIR that is identical to the sender's file, the file will NOT be transferred to the destination directory. This is useful for creating a sparse backup of just files that have changed from an earlier backup.
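A minimal sketch of such a sparse backup, assuming a prior full backup lives in /backups/full (all paths here are hypothetical):

    rsync -av --compare-dest=/backups/full/ /home/user/ /backups/incr/

Only files that differ from the full backup end up in /backups/incr/; an absolute DIR is used here to avoid the relative-path interpretation described below.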
This option is typically used to copy into an empty (or newly created) directory. Beginning in version 2.6.4, multiple --compare-dest directories may be provided, which will cause rsync to search the list in the order specified for an exact match. If a match is found that differs only in attributes, a local copy is made and the attributes updated. If a match is not found, a basis file from one of the DIRs will be selected to try to speed up the transfer. If DIR is a relative path, it is relative to the destination directory. See also --copy-dest and --link-dest. NOTE: beginning with version 3.1.0, rsync will remove a file from a non-empty destination hierarchy if an exact match is found in one of the compare-dest hierarchies (making the end result more closely match a fresh copy).

--copy-dest=DIR This option behaves like --compare-dest, but rsync will also copy unchanged files found in DIR to the destination directory using a local copy. This is useful for doing transfers to a new destination while leaving existing files intact, and then doing a flash-cutover when all files have been successfully transferred. Multiple --copy-dest directories may be provided, which will cause rsync to search the list in the order specified for an unchanged file. If a match is not found, a basis file from one of the DIRs will be selected to try to speed up the transfer. If DIR is a relative path, it is relative to the destination directory. See also --compare-dest and --link-dest.

--link-dest=DIR This option behaves like --copy-dest, but unchanged files are hard linked from DIR to the destination directory. The files must be identical in all preserved attributes (e.g. permissions, possibly ownership) in order for the files to be linked together. An example: rsync -av --link-dest=$PWD/prior_dir host:src_dir/ new_dir/ If files aren't linking, double-check their attributes. Also check if some attributes are getting forced outside of rsync's control, such as a mount option that squishes root to a single user, or mounts a removable drive with generic ownership (such as OS X's "Ignore ownership on this volume" option). Beginning in version 2.6.4, multiple --link-dest directories may be provided, which will cause rsync to search the list in the order specified for an exact match (there is a limit of 20 such directories). If a match is found that differs only in attributes, a local copy is made and the attributes updated. If a match is not found, a basis file from one of the DIRs will be selected to try to speed up the transfer. This option works best when copying into an empty destination hierarchy, as existing files may get their attributes tweaked, and that can affect alternate destination files via hard-links. Also, itemizing of changes can get a bit muddled. Note that prior to version 3.1.0, an alternate-directory exact match would never be found (nor linked into the destination) when a destination file already exists. Note that if you combine this option with --ignore-times, rsync will not link any files together because it only links identical files together as a substitute for transferring the file, never as an additional check after the file is updated. If DIR is a relative path, it is relative to the destination directory. See also --compare-dest and --copy-dest. Note that rsync versions prior to 2.6.1 had a bug that could prevent --link-dest from working properly for a non-super-user when --owner (-o) was specified (or implied).
You can work around this bug by avoiding the -o option (or using --no-o) when sending to an old rsync.

--compress, -z With this option, rsync compresses the file data as it is sent to the destination machine, which reduces the amount of data being transmitted -- something that is useful over a slow connection. Rsync supports multiple compression methods and will choose one for you unless you force the choice using the --compress-choice (--zc) option. Run rsync --version to see the default compress list compiled into your version. When both sides of the transfer are at least 3.2.0, rsync chooses the first algorithm in the client's list of choices that is also in the server's list of choices. If no common compress choice is found, rsync exits with an error. If the remote rsync is too old to support compression negotiation, its list is assumed to be "zlib". The default order can be customized by setting the environment variable RSYNC_COMPRESS_LIST to a space-separated list of acceptable compression names. If the string contains a "&" character, it is separated into the "client string & server string", otherwise the same string applies to both. If the string (or string portion) contains no non-whitespace characters, the default compress list is used. Any unknown compression names are discarded from the list, but a list with only invalid names results in a failed negotiation. There are some older rsync versions that were configured to reject a -z option and require the use of -zz because their compression library was not compatible with the default zlib compression method. You can usually ignore this weirdness unless the rsync server complains and tells you to specify -zz.

--compress-choice=STR, --zc=STR This option can be used to override the automatic negotiation of the compression algorithm that occurs when --compress is used. The option implies --compress unless "none" was specified, which instead implies --no-compress. The compression options that you may be able to use are:

o zstd
o lz4
o zlibx
o zlib
o none

Run rsync --version to see the default compress list compiled into your version (which may differ from the list above). Note that if you see an error about an option named --old-compress or --new-compress, this is rsync trying to send the --compress-choice=zlib or --compress-choice=zlibx option in a backward-compatible manner that more rsync versions understand. This error indicates that the older rsync version on the server will not allow you to force the compression type. Note that the "zlibx" compression algorithm is just the "zlib" algorithm with matched data excluded from the compression stream (to try to make it more compatible with an external zlib implementation).

--compress-level=NUM, --zl=NUM Explicitly set the compression level to use (see --compress, -z) instead of letting it default. The --compress option is implied as long as the level chosen is not a "don't compress" level for the compression algorithm that is in effect (e.g. zlib compression treats level 0 as "off"). The level values vary depending on the compression algorithm in effect. Because rsync will negotiate a compression choice by default (when the remote rsync is new enough), it can be good to combine this option with a --compress-choice (--zc) option unless you're sure of the choice in effect. For example: rsync -aiv --zc=zstd --zl=22 host:src/ dest/ For zlib & zlibx compression the valid values are from 1 to 9 with 6 being the default.
Specifying --zl=0 turns compression off, and specifying --zl=-1 chooses the default level of 6. For zstd compression the valid values are from -131072 to 22 with 3 being the default. Specifying 0 chooses the default of 3. For lz4 compression there are no levels, so the value is always 0. If you specify a too-large or too-small value, the number is silently limited to a valid value. This allows you to specify something like --zl=999999999 and be assured that you'll end up with the maximum compression level no matter what algorithm was chosen. If you want to know the compression level that is in effect, specify --debug=nstr to see the "negotiated string" results. This will report something like "Client compress: zstd (level 3)" (along with the checksum choice in effect).

--skip-compress=LIST NOTE: no compression method currently supports per-file compression changes, so this option has no effect. Override the list of file suffixes that will be compressed as little as possible. Rsync sets the compression level on a per-file basis based on the file's suffix. If the compression algorithm has an "off" level, then no compression occurs for those files. Other algorithms that support changing the streaming level on-the-fly will have the level minimized to reduce the CPU usage as much as possible for a matching file. The LIST should be one or more file suffixes (without the dot) separated by slashes (/). You may specify an empty string to indicate that no files should be skipped. Simple character-class matching is supported: each must consist of a list of letters inside the square brackets (e.g. no special classes, such as "[:alpha:]", are supported, and '-' has no special meaning). The characters asterisk (*) and question-mark (?) have no special meaning. Here's an example that specifies 6 suffixes to skip (since 1 of the 5 rules matches 2 suffixes): --skip-compress=gz/jpg/mp[34]/7z/bz2 The default file suffixes in the skip-compress list in this version of rsync are: 3g2 3gp 7z aac ace apk avi bz2 deb dmg ear f4v flac flv gpg gz iso jar jpeg jpg lrz lz lz4 lzma lzo m1a m1v m2a m2ts m2v m4a m4b m4p m4r m4v mka mkv mov mp1 mp2 mp3 mp4 mpa mpeg mpg mpv mts odb odf odg odi odm odp ods odt oga ogg ogm ogv ogx opus otg oth otp ots ott oxt png qt rar rpm rz rzip spx squashfs sxc sxd sxg sxm sxw sz tbz tbz2 tgz tlz ts txz tzo vob war webm webp xz z zip zst This list will be replaced by your --skip-compress list in all but one situation: a copy from a daemon rsync will add your skipped suffixes to its list of non-compressing files (and its list may be configured to a different default).

--numeric-ids With this option rsync will transfer numeric group and user IDs rather than using user and group names and mapping them at both ends. By default rsync will use the username and groupname to determine what ownership to give files. The special uid 0 and the special group 0 are never mapped via user/group names even if the --numeric-ids option is not specified. If a user or group has no name on the source system or it has no match on the destination system, then the numeric ID from the source system is used instead. See also the use chroot setting in the rsyncd.conf manpage for some comments on how the chroot setting affects rsync's ability to look up the names of the users and groups and what you can do about it.

--usermap=STRING, --groupmap=STRING These options allow you to specify users and groups that should be mapped to other values by the receiving side.
The STRING is one or more FROM:TO pairs of values separated by commas. Any matching FROM value from the sender is replaced with a TO value from the receiver. You may specify usernames or user IDs for the FROM and TO values, and the FROM value may also be a wild-card string, which will be matched against the sender's names (wild-cards do NOT match against ID numbers, though see below for why a '*' matches everything). You may instead specify a range of ID numbers via an inclusive range: LOW-HIGH. For example: --usermap=0-99:nobody,wayne:admin,*:normal --groupmap=usr:1,1:usr The first match in the list is the one that is used. You should specify all your user mappings using a single --usermap option, and/or all your group mappings using a single --groupmap option. Note that the sender's name for the 0 user and group are not transmitted to the receiver, so you should either match these values using a 0, or use the names in effect on the receiving side (typically "root"). All other FROM names match those in use on the sending side. All TO names match those in use on the receiving side. Any IDs that do not have a name on the sending side are treated as having an empty name for the purpose of matching. This allows them to be matched via a "*" or using an empty name. For instance: --usermap=:nobody --groupmap=*:nobody When the --numeric-ids option is used, the sender does not send any names, so all the IDs are treated as having an empty name. This means that you will need to specify numeric FROM values if you want to map these nameless IDs to different values. For the --usermap option to work, the receiver will need to be running as a super-user (see also the --super and --fake-super options). For the --groupmap option to work, the receiver will need to have permissions to set that group. Starting with rsync 3.2.4, the --usermap option implies the --owner (-o) option while the --groupmap option implies the --group (-g) option (since rsync needs to have those options enabled for the mapping options to work). An older rsync client may need to use -s to avoid a complaint about wildcard characters, but a modern rsync handles this automatically.

--chown=USER:GROUP This option forces all files to be owned by USER with group GROUP. This is a simpler interface than using --usermap & --groupmap directly, but it is implemented using those options internally so they cannot be mixed. If either the USER or GROUP is empty, no mapping for the omitted user/group will occur. If GROUP is empty, the trailing colon may be omitted, but if USER is empty, a leading colon must be supplied. If you specify "--chown=foo:bar", this is exactly the same as specifying "--usermap=*:foo --groupmap=*:bar", only easier (and with the same implied --owner and/or --group options). An older rsync client may need to use -s to avoid a complaint about wildcard characters, but a modern rsync handles this automatically.

--timeout=SECONDS This option allows you to set a maximum I/O timeout in seconds. If no data is transferred for the specified time then rsync will exit. The default is 0, which means no timeout.

--contimeout=SECONDS This option allows you to set the amount of time that rsync will wait for its connection to an rsync daemon to succeed. If the timeout is reached, rsync exits with an error.

--address=ADDRESS By default rsync will bind to the wildcard address when connecting to an rsync daemon. The --address option allows you to specify a specific IP address (or hostname) to bind to.
See also the daemon version of the --address option.

--port=PORT This specifies an alternate TCP port number to use rather than the default of 873. This is only needed if you are using the double-colon (::) syntax to connect with an rsync daemon (since the URL syntax has a way to specify the port as a part of the URL). See also the daemon version of the --port option.

--sockopts=OPTIONS This option can provide endless fun for people who like to tune their systems to the utmost degree. You can set all sorts of socket options which may make transfers faster (or slower!). Read the manpage for the setsockopt() system call for details on some of the options you may be able to set. By default no special socket options are set. This only affects direct socket connections to a remote rsync daemon. See also the daemon version of the --sockopts option.

--blocking-io This tells rsync to use blocking I/O when launching a remote shell transport. If the remote shell is either rsh or remsh, rsync defaults to using blocking I/O, otherwise it defaults to using non-blocking I/O. (Note that ssh prefers non-blocking I/O.)

--outbuf=MODE This sets the output buffering mode. The mode can be None (aka Unbuffered), Line, or Block (aka Full). You may specify as little as a single letter for the mode, and use upper or lower case. The main use of this option is to change Full buffering to Line buffering when rsync's output is going to a file or pipe.

--itemize-changes, -i Requests a simple itemized list of the changes that are being made to each file, including attribute changes. This is exactly the same as specifying --out-format='%i %n%L'. If you repeat the option, unchanged files will also be output, but only if the receiving rsync is at least version 2.6.7 (you can use -vv with older versions of rsync, but that also turns on the output of other verbose messages). The "%i" escape has a cryptic output that is 11 letters long. The general format is like the string YXcstpoguax, where Y is replaced by the type of update being done, X is replaced by the file-type, and the other letters represent attributes that may be output if they are being modified. The update types that replace the Y are as follows:

o A < means that a file is being transferred to the remote host (sent).

o A > means that a file is being transferred to the local host (received).

o A c means that a local change/creation is occurring for the item (such as the creation of a directory or the changing of a symlink, etc.).

o A h means that the item is a hard link to another item (requires --hard-links).

o A . means that the item is not being updated (though it might have attributes that are being modified).

o A * means that the rest of the itemized-output area contains a message (e.g. "deleting").

The file-types that replace the X are: f for a file, a d for a directory, an L for a symlink, a D for a device, and an S for a special file (e.g. named sockets and fifos). The other letters in the string indicate if some attributes of the file have changed, as follows:

o "." - the attribute is unchanged.

o "+" - the file is newly created.

o " " - all the attributes are unchanged (all dots turn to spaces).

o "?" - the change is unknown (when the remote rsync is old).

o A letter indicates an attribute is being updated. The attribute that is associated with each letter is as follows:

o A c means either that a regular file has a different checksum (requires --checksum) or that a symlink, device, or special file has a changed value.
Note that if you are sending files to an rsync prior to 3.0.1, this change flag will be present only for checksum-differing regular files.

o A s means the size of a regular file is different and will be updated by the file transfer.

o A t means the modification time is different and is being updated to the sender's value (requires --times). An alternate value of T means that the modification time will be set to the transfer time, which happens when a file/symlink/device is updated without --times and when a symlink is changed and the receiver can't set its time. (Note: when using an rsync 3.0.0 client, you might see the s flag combined with t instead of the proper T flag for this time-setting failure.)

o A p means the permissions are different and are being updated to the sender's value (requires --perms).

o An o means the owner is different and is being updated to the sender's value (requires --owner and super-user privileges).

o A g means the group is different and is being updated to the sender's value (requires --group and the authority to set the group).

o A u|n|b indicates the following information: a u means the access (use) time is different and is being updated to the sender's value (requires --atimes), an n means the create time (newness) is different and is being updated to the sender's value (requires --crtimes), and a b means that both the access and create times are being updated.

o The a means that the ACL information is being changed.

o The x means that the extended attribute information is being changed.

One other output is possible: when deleting files, the "%i" will output the string "*deleting" for each item that is being removed (assuming that you are talking to a recent enough rsync that it logs deletions instead of outputting them as a verbose message).

--out-format=FORMAT This allows you to specify exactly what the rsync client outputs to the user on a per-update basis. The format is a text string containing embedded single-character escape sequences prefixed with a percent (%) character. A default format of "%n%L" is assumed if either --info=name or -v is specified (this tells you just the name of the file and, if the item is a link, where it points). For a full list of the possible escape characters, see the log format setting in the rsyncd.conf manpage. Specifying the --out-format option implies the --info=name option, which will mention each file, dir, etc. that gets updated in a significant way (a transferred file, a recreated symlink/device, or a touched directory). In addition, if the itemize-changes escape (%i) is included in the string (e.g. if the --itemize-changes option was used), the logging of names increases to mention any item that is changed in any way (as long as the receiving side is at least 2.6.4). See the --itemize-changes option for a description of the output of "%i". Rsync will output the out-format string prior to a file's transfer unless one of the transfer-statistic escapes is requested, in which case the logging is done at the end of the file's transfer. When this late logging is in effect and --progress is also specified, rsync will also output the name of the file being transferred prior to its progress information (followed, of course, by the out-format output).

--log-file=FILE This option causes rsync to log what it is doing to a file. This is similar to the logging that a daemon does, but can be requested for the client side and/or the server side of a non-daemon transfer.
If specified as a client option, transfer logging will be enabled with a default format of "%i %n%L". See the --log-file-format option if you wish to override this. Here's an example command that requests the remote side to log what is happening: rsync -av --remote-option=--log-file=/tmp/rlog src/ dest/ This is very useful if you need to debug why a connection is closing unexpectedly. See also the daemon version of the --log-file option.

--log-file-format=FORMAT This allows you to specify exactly what per-update logging is put into the file specified by the --log-file option (which must also be specified for this option to have any effect). If you specify an empty string, updated files will not be mentioned in the log file. For a list of the possible escape characters, see the log format setting in the rsyncd.conf manpage. The default FORMAT used if --log-file is specified and this option is not is '%i %n%L'. See also the daemon version of the --log-file-format option.

--stats This tells rsync to print a verbose set of statistics on the file transfer, allowing you to tell how effective rsync's delta-transfer algorithm is for your data. This option is equivalent to --info=stats2 if combined with 0 or 1 -v options, or --info=stats3 if combined with 2 or more -v options. The current statistics are as follows:

o Number of files is the count of all "files" (in the generic sense), which includes directories, symlinks, etc. The total count will be followed by a list of counts by filetype (if the total is non-zero). For example: "(reg: 5, dir: 3, link: 2, dev: 1, special: 1)" lists the totals for regular files, directories, symlinks, devices, and special files. If any value is 0, it is completely omitted from the list.

o Number of created files is the count of how many "files" (generic sense) were created (as opposed to updated). The total count will be followed by a list of counts by filetype (if the total is non-zero).

o Number of deleted files is the count of how many "files" (generic sense) were deleted. The total count will be followed by a list of counts by filetype (if the total is non-zero). Note that this line is only output if deletions are in effect, and only if protocol 31 is being used (the default for rsync 3.1.x).

o Number of regular files transferred is the count of normal files that were updated via rsync's delta-transfer algorithm, which does not include dirs, symlinks, etc. Note that rsync 3.1.0 added the word "regular" into this heading.

o Total file size is the total sum of all file sizes in the transfer. This does not count any size for directories or special files, but does include the size of symlinks.

o Total transferred file size is the total sum of all file sizes for just the transferred files.

o Literal data is how much unmatched file-update data we had to send to the receiver for it to recreate the updated files.

o Matched data is how much data the receiver got locally when recreating the updated files.

o File list size is how big the file-list data was when the sender sent it to the receiver. This is smaller than the in-memory size for the file list due to some compressing of duplicated data when rsync sends the list.

o File list generation time is the number of seconds that the sender spent creating the file list. This requires a modern rsync on the sending side.

o File list transfer time is the number of seconds that the sender spent sending the file list to the receiver.
o Total bytes sent is the count of all the bytes that rsync sent from the client side to the server side.

o Total bytes received is the count of all non-message bytes that rsync received by the client side from the server side. "Non-message" bytes means that we don't count the bytes for a verbose message that the server sent to us, which makes the stats more consistent.

--8-bit-output, -8 This tells rsync to leave all high-bit characters unescaped in the output instead of trying to test them to see if they're valid in the current locale and escaping the invalid ones. All control characters (but never tabs) are always escaped, regardless of this option's setting. The escape idiom that started in 2.6.7 is to output a literal backslash (\) and a hash (#), followed by exactly 3 octal digits. For example, a newline would output as "\#012". A literal backslash that is in a filename is not escaped unless it is followed by a hash and 3 digits (0-9).

--human-readable, -h Output numbers in a more human-readable format. There are 3 possible levels:

1. output numbers with a separator between each set of 3 digits (either a comma or a period, depending on whether the decimal point is represented by a period or a comma).

2. output numbers in units of 1000 (with a character suffix for larger units -- see below).

3. output numbers in units of 1024.

The default is human-readable level 1. Each -h option increases the level by one. You can take the level down to 0 (to output numbers as pure digits) by specifying the --no-human-readable (--no-h) option. The unit letters that are appended in levels 2 and 3 are: K (kilo), M (mega), G (giga), T (tera), or P (peta). For example, a 1234567-byte file would output as 1.23M in level-2 (assuming that a period is your local decimal point). Backward compatibility note: versions of rsync prior to 3.1.0 do not support human-readable level 1, and they default to level 0. Thus, specifying one or two -h options will behave in a comparable manner in old and new versions as long as you didn't specify a --no-h option prior to one or more -h options. See the --list-only option for one difference.

--partial By default, rsync will delete any partially transferred file if the transfer is interrupted. In some circumstances it is more desirable to keep partially transferred files. Using the --partial option tells rsync to keep the partial file which should make a subsequent transfer of the rest of the file much faster.

--partial-dir=DIR This option modifies the behavior of the --partial option while also implying that it be enabled. This enhanced partial-file method puts any partially transferred files into the specified DIR instead of writing the partial file out to the destination file. On the next transfer, rsync will use a file found in this dir as data to speed up the resumption of the transfer and then delete it after it has served its purpose. Note that if --whole-file is specified (or implied), any partial-dir files that are found for a file that is being updated will simply be removed (since rsync is sending files without using rsync's delta-transfer algorithm). Rsync will create the DIR if it is missing, but just the last dir -- not the whole path. This makes it easy to use a relative path (such as "--partial-dir=.rsync-partial") to have rsync create the partial-directory in the destination file's directory when it is needed, and then remove it again when the partial file is deleted.
Note that this directory removal is only done for a relative pathname, as it is expected that an absolute path is to a directory that is reserved for partial-dir work. If the partial-dir value is not an absolute path, rsync will add an exclude rule at the end of all your existing excludes. This will prevent the sending of any partial-dir files that may exist on the sending side, and will also prevent the untimely deletion of partial-dir items on the receiving side. An example: the above --partial-dir option would add the equivalent of this "perishable" exclude at the end of any other filter rules: -f '-p .rsync-partial/' If you are supplying your own exclude rules, you may need to add your own exclude/hide/protect rule for the partial-dir because:

1. the auto-added rule may be ineffective at the end of your other rules, or

2. you may wish to override rsync's exclude choice.

For instance, if you want to make rsync clean-up any left-over partial-dirs that may be lying around, you should specify --delete-after and add a "risk" filter rule, e.g. -f 'R .rsync-partial/'. Avoid using --delete-before or --delete-during unless you don't need rsync to use any of the left-over partial-dir data during the current run. IMPORTANT: the --partial-dir should not be writable by other users or it is a security risk! E.g. AVOID "/tmp"! You can also set the partial-dir value via the RSYNC_PARTIAL_DIR environment variable. Setting this in the environment does not force --partial to be enabled, but rather it affects where partial files go when --partial is specified. For instance, instead of using --partial-dir=.rsync-tmp along with --progress, you could set RSYNC_PARTIAL_DIR=.rsync-tmp in your environment and then use the -P option to turn on the use of the .rsync-tmp dir for partial transfers. The only times that the --partial option does not look for this environment value are:

1. when --inplace was specified (since --inplace conflicts with --partial-dir), and

2. when --delay-updates was specified (see below).

When a modern rsync resumes the transfer of a file in the partial-dir, that partial file is now updated in-place instead of creating yet another tmp-file copy (so it maxes out at dest + tmp instead of dest + partial + tmp). This requires both ends of the transfer to be at least version 3.2.0. For the purposes of the daemon-config's "refuse options" setting, --partial-dir does not imply --partial. This is so that a refusal of the --partial option can be used to disallow the overwriting of destination files with a partial transfer, while still allowing the safer idiom provided by --partial-dir.

--delay-updates This option puts the temporary file from each updated file into a holding directory until the end of the transfer, at which time all the files are renamed into place in rapid succession. This attempts to make the updating of the files a little more atomic. By default the files are placed into a directory named .~tmp~ in each file's destination directory, but if you've specified the --partial-dir option, that directory will be used instead. See the comments in the --partial-dir section for a discussion of how this .~tmp~ dir will be excluded from the transfer, and what you can do if you want rsync to cleanup old .~tmp~ dirs that might be lying around. Conflicts with --inplace and --append. This option implies --no-inc-recursive since it needs the full file list in memory in order to be able to iterate over it at the end.
This option uses more memory on the receiving side (one bit per file transferred) and also requires enough free disk space on the receiving side to hold an additional copy of all the updated files. Note also that you should not use an absolute path to --partial-dir unless: 1. there is no chance of any of the files in the transfer having the same name (since all the updated files will be put into a single directory if the path is absolute), and 2. there are no mount points in the hierarchy (since the delayed updates will fail if they can't be renamed into place). See also the "atomic-rsync" python script in the "support" subdir for an update algorithm that is even more atomic (it uses --link-dest and a parallel hierarchy of files). --prune-empty-dirs, -m This option tells the receiving rsync to get rid of empty directories from the file-list, including nested directories that have no non-directory children. This is useful for avoiding the creation of a bunch of useless directories when the sending rsync is recursively scanning a hierarchy of files using include/exclude/filter rules. This option can still leave empty directories on the receiving side if you make use of TRANSFER_RULES. Because the file-list is actually being pruned, this option also affects what directories get deleted when a delete is active. However, keep in mind that excluded files and directories can prevent existing items from being deleted due to an exclude both hiding source files and protecting destination files. See the perishable filter-rule option for how to avoid this. You can prevent the pruning of certain empty directories from the file-list by using a global "protect" filter. For instance, this option would ensure that the directory "emptydir" was kept in the file-list: --filter 'protect emptydir/' Here's an example that copies all .pdf files in a hierarchy, only creating the necessary destination directories to hold the .pdf files, and ensures that any superfluous files and directories in the destination are removed (note the hide filter of non-directories being used instead of an exclude): rsync -avm --del --include='*.pdf' -f 'hide,! */' src/ dest If you didn't want to remove superfluous destination files, the more time-honored options of --include='*/' --exclude='*' would work fine in place of the hide-filter (if that is more natural to you). --progress This option tells rsync to print information showing the progress of the transfer. This gives a bored user something to watch. With a modern rsync this is the same as specifying --info=flist2,name,progress, but any user-supplied settings for those info flags take precedence (e.g. --info=flist0 --progress). While rsync is transferring a regular file, it updates a progress line that looks like this: 782448 63% 110.64kB/s 0:00:04 In this example, the receiver has reconstructed 782448 bytes or 63% of the sender's file, which is being reconstructed at a rate of 110.64 kilobytes per second, and the transfer will finish in 4 seconds if the current rate is maintained until the end. These statistics can be misleading if rsync's delta-transfer algorithm is in use. For example, if the sender's file consists of the basis file followed by additional data, the reported rate will probably drop dramatically when the receiver gets to the literal data, and the transfer will probably take much longer to finish than the receiver estimated as it was finishing the matched part of the file.
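A simple illustration of the per-file progress display (the paths are placeholders):

    rsync -av --progress src/ backup@host:dest/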
When the file transfer finishes, rsync replaces the progress line with a summary line that looks like this: 1,238,099 100% 146.38kB/s 0:00:08 (xfr#5, to-chk=169/396) In this example, the file was 1,238,099 bytes long in total, the average rate of transfer for the whole file was 146.38 kilobytes per second over the 8 seconds that it took to complete, it was the 5th transfer of a regular file during the current rsync session, and there are 169 more files for the receiver to check (to see if they are up-to-date or not) remaining out of the 396 total files in the file-list. In an incremental recursion scan, rsync won't know the total number of files in the file-list until it reaches the end of the scan, but since it starts to transfer files during the scan, it will display a line with the text "ir-chk" (for incremental recursion check) instead of "to-chk" until the point that it knows the full size of the list, at which point it will switch to using "to-chk". Thus, seeing "ir-chk" lets you know that the total count of files in the file list is still going to increase (and each time it does, the count of files left to check will increase by the number of files added to the list). -P The -P option is equivalent to "--partial --progress". Its purpose is to make it much easier to specify these two options for a long transfer that may be interrupted. There is also a --info=progress2 option that outputs statistics based on the whole transfer, rather than individual files. Use this flag without outputting a filename (e.g. avoid -v or specify --info=name0) if you want to see how the transfer is doing without scrolling the screen with a lot of names. (You don't need to specify the --progress option in order to use --info=progress2.) Finally, you can get an instant progress report by sending rsync a signal of either SIGINFO or SIGVTALRM. On BSD systems, a SIGINFO is generated by typing a Ctrl+T (Linux doesn't currently support a SIGINFO signal). When the client-side process receives one of those signals, it sets a flag to output a single progress report which is output when the current file transfer finishes (so it may take a little time if a big file is being handled when the signal arrives). A filename is output (if needed) followed by the --info=progress2 format of progress info. If you don't know which of the 3 rsync processes is the client process, it's OK to signal all of them (since the non-client processes ignore the signal). CAUTION: sending SIGVTALRM to an older rsync (pre-3.2.0) will kill it. --password-file=FILE This option allows you to provide a password for accessing an rsync daemon via a file or via standard input if FILE is -. The file should contain just the password on the first line (all other lines are ignored). Rsync will exit with an error if FILE is world readable or if a root-run rsync command finds a non-root-owned file. This option does not supply a password to a remote shell transport such as ssh; to learn how to do that, consult the remote shell's documentation. When accessing an rsync daemon using a remote shell as the transport, this option only comes into effect after the remote shell finishes its authentication (i.e. if you have also specified a password in the daemon's config file). --early-input=FILE This option allows rsync to send up to 5K of data to the "early exec" script on its stdin. One possible use of this data is to give the script a secret that can be used to mount an encrypted filesystem (which you should unmount in the "post-xfer exec" script).
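For example, feeding a secret to a daemon's "early exec" script might look like this (a sketch; the daemon module and token file are placeholders):

    rsync -av --early-input=token.txt rsync://backup@host/module/ dest/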
The daemon must be at least version 3.2.1. --list-only This option will cause the source files to be listed instead of transferred. This option is inferred if there is a single source arg and no destination specified, so its main uses are: 1. to turn a copy command that includes a destination arg into a file-listing command, or 2. to be able to specify more than one source arg. Note: be sure to include the destination. CAUTION: keep in mind that a source arg with a wild-card is expanded by the shell into multiple args, so it is never safe to try to specify a single wild-card arg to try to infer this option. A safe example is: rsync -av --list-only foo* dest/ This option always uses an output format that looks similar to this: drwxrwxr-x 4,096 2022/09/30 12:53:11 support -rw-rw-r-- 80 2005/01/11 10:37:37 support/Makefile The only option that affects this output style is (as of 3.1.0) the --human-readable (-h) option. The default is to output sizes as byte counts with digit separators (in a 14-character-width column). Specifying at least one -h option makes the sizes output with unit suffixes. If you want old-style bytecount sizes without digit separators (and an 11-character-width column) use --no-h. Compatibility note: when requesting a remote listing of files from an rsync that is version 2.6.3 or older, you may encounter an error if you ask for a non-recursive listing. This is because a file listing implies the --dirs option w/o --recursive, and older rsyncs don't have that option. To avoid this problem, either specify the --no-dirs option (if you don't need to expand a directory's content), or turn on recursion and exclude the content of subdirectories: -r --exclude='/*/*'. --bwlimit=RATE This option allows you to specify the maximum transfer rate for the data sent over the socket, specified in units per second. The RATE value can be suffixed with a string to indicate a size multiplier, and may be a fractional value (e.g. --bwlimit=1.5m). If no suffix is specified, the value will be assumed to be in units of 1024 bytes (as if "K" or "KiB" had been appended). See the --max-size option for a description of all the available suffixes. A value of 0 specifies no limit. For backward-compatibility reasons, the rate limit will be rounded to the nearest KiB unit, so no rate smaller than 1024 bytes per second is possible. Rsync writes data over the socket in blocks, and this option both limits the size of the blocks that rsync writes, and tries to keep the average transfer rate at the requested limit. Some burstiness may be seen where rsync writes out a block of data and then sleeps to bring the average rate into compliance. Due to the internal buffering of data, the --progress option may not be an accurate reflection of how fast the data is being sent. This is because some files can show up as being rapidly sent when the data is quickly buffered, while others can show up as very slow when the flushing of the output buffer occurs. This may be fixed in a future version. See also the daemon version of the --bwlimit option. --stop-after=MINS (--time-limit=MINS) This option tells rsync to stop copying when the specified number of minutes has elapsed. For maximal flexibility, rsync does not communicate this option to the remote rsync since it is usually enough that one side of the connection quits as specified. This allows the option's use even when only one side of the connection supports it. You can tell the remote side about the time limit using --remote-option (-M), should the need arise.
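For instance, to let a scheduled run copy for at most 30 minutes (an illustrative value; the paths are placeholders):

    rsync -a --stop-after=30 src/ backup@host:dest/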
The --time-limit version of this option is deprecated. --stop-at=y-m-dTh:m This option tells rsync to stop copying when the specified point in time has been reached. The date & time can be fully specified in a numeric format of year-month-dayThour:minute (e.g. 2000-12-31T23:59) in the local timezone. You may choose to separate the date numbers using slashes instead of dashes. The value can also be abbreviated in a variety of ways, such as specifying a 2-digit year and/or leaving off various values. In all cases, the value will be taken to be the next possible point in time where the supplied information matches. If the value specifies the current time or a past time, rsync exits with an error. For example, "1-30" specifies the next January 30th (at midnight local time), "14:00" specifies the next 2 P.M., "1" specifies the next 1st of the month at midnight, "31" specifies the next month where we can stop on its 31st day, and ":59" specifies the next 59th minute after the hour. For maximal flexibility, rsync does not communicate this option to the remote rsync since it is usually enough that one side of the connection quits as specified. This allows the option's use even when only one side of the connection supports it. You can tell the remote side about the time limit using --remote-option (-M), should the need arise. Do keep in mind that the remote host may have a different default timezone than your local host. --fsync Cause the receiving side to fsync each finished file. This may slow down the transfer, but can help to provide peace of mind when updating critical files. --write-batch=FILE Record a file that can later be applied to another identical destination with --read-batch. See the "BATCH MODE" section for details, and also the --only-write-batch option. This option overrides the negotiated checksum & compress lists and always negotiates a choice based on old-school md5/md4/zlib choices. If you want a more modern choice, use the --checksum-choice (--cc) and/or --compress-choice (--zc) options. --only-write-batch=FILE Works like --write-batch, except that no updates are made on the destination system when creating the batch. This lets you transport the changes to the destination system via some other means and then apply the changes via --read-batch. Note that you can feel free to write the batch directly to some portable media: if this media fills to capacity before the end of the transfer, you can just apply that partial transfer to the destination and repeat the whole process to get the rest of the changes (as long as you don't mind a partially updated destination system while the multi-update cycle is happening). Also note that you only save bandwidth when pushing changes to a remote system because this allows the batched data to be diverted from the sender into the batch file without having to flow over the wire to the receiver (when pulling, the sender is remote, and thus can't write the batch). --read-batch=FILE Apply all of the changes stored in FILE, a file previously generated by --write-batch. If FILE is -, the batch data will be read from standard input. See the "BATCH MODE" section for details. --protocol=NUM Force an older protocol version to be used. This is useful for creating a batch file that is compatible with an older version of rsync.
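A minimal batch-mode round trip built from the options above (the directory and batch-file names are placeholders):

    rsync --write-batch=changes -a src/ /mirror1/
    # transport the "changes" file to the other machine, then apply it there:
    rsync --read-batch=changes -a /mirror2/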
For instance, if rsync 2.6.4 is being used with the --write-batch option, but rsync 2.6.3 is what will be used to run the --read-batch option, you should use "--protocol=28" when creating the batch file to force the older protocol version to be used in the batch file (assuming you can't upgrade the rsync on the reading system). --iconv=CONVERT_SPEC Rsync can convert filenames between character sets using this option. Using a CONVERT_SPEC of "." tells rsync to look up the default character-set via the locale setting. Alternately, you can fully specify what conversion to do by giving a local and a remote charset separated by a comma in the order --iconv=LOCAL,REMOTE, e.g. --iconv=utf8,iso88591. This order ensures that the option will stay the same whether you're pushing or pulling files. Finally, you can specify either --no-iconv or a CONVERT_SPEC of "-" to turn off any conversion. The default setting of this option is site-specific, and can also be affected via the RSYNC_ICONV environment variable. For a list of what charset names your local iconv library supports, you can run "iconv --list". If you specify the --secluded-args (-s) option, rsync will translate the filenames you specify on the command-line that are being sent to the remote host. See also the --files-from option. Note that rsync does not do any conversion of names in filter files (including include/exclude files). It is up to you to ensure that you're specifying matching rules that can match on both sides of the transfer. For instance, you can specify extra include/exclude rules if there are filename differences on the two sides that need to be accounted for. When you pass an --iconv option to an rsync daemon that allows it, the daemon uses the charset specified in its "charset" configuration parameter regardless of the remote charset you actually pass. Thus, you may feel free to specify just the local charset for a daemon transfer (e.g. --iconv=utf8). --ipv4, -4 or --ipv6, -6 Tells rsync to prefer IPv4/IPv6 when creating sockets or running ssh. This affects sockets that rsync has direct control over, such as the outgoing socket when directly contacting an rsync daemon, as well as the forwarding of the -4 or -6 option to ssh when rsync can deduce that ssh is being used as the remote shell. For other remote shells you'll need to specify the "--rsh SHELL -4" option directly (or whatever IPv4/IPv6 hint options it uses). See also the daemon version of these options. If rsync was compiled without support for IPv6, the --ipv6 option will have no effect. The rsync --version output will contain "no IPv6" if this is the case. --checksum-seed=NUM Set the checksum seed to the integer NUM. This 4-byte checksum seed is included in each block and MD4 file checksum calculation (the more modern MD5 file checksums don't use a seed). By default the checksum seed is generated by the server and defaults to the current time(). This option is used to set a specific checksum seed, which is useful for applications that want repeatable block checksums, or in the case where the user wants a more random checksum seed. Setting NUM to 0 causes rsync to use the default of time() for checksum seed.
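For example, pushing from a UTF-8 local system to an ISO-8859-1 remote uses the LOCAL,REMOTE order described above (the charsets and paths here are illustrative):

    rsync -av --iconv=utf8,iso88591 src/ backup@host:dest/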
# rsync > Transfer files either to or from a remote host (but not between two remote > hosts), by default using SSH. To specify a remote path, use > `host:path/to/file_or_directory`. More information: > https://download.samba.org/pub/rsync/rsync.1. * Transfer a file: `rsync {{path/to/source}} {{path/to/destination}}` * Use archive mode (recursively copy directories, copy symlinks without resolving and preserve permissions, ownership and modification times): `rsync --archive {{path/to/source}} {{path/to/destination}}` * Compress the data as it is sent to the destination, display verbose and human-readable progress, and keep partially transferred files if interrupted: `rsync --compress --verbose --human-readable --partial --progress {{path/to/source}} {{path/to/destination}}` * Recursively copy directories: `rsync --recursive {{path/to/source}} {{path/to/destination}}` * Transfer directory contents, but not the directory itself: `rsync --recursive {{path/to/source}}/ {{path/to/destination}}` * Recursively copy directories, use archive mode, resolve symlinks and skip files that are newer on the destination: `rsync --recursive --archive --update --copy-links {{path/to/source}} {{path/to/destination}}` * Transfer a directory to a remote host running `rsyncd` and delete files on the destination that do not exist on the source: `rsync --recursive --delete rsync://{{host}}:{{path/to/source}} {{path/to/destination}}` * Transfer a file over SSH using a different port than the default (22) and show global progress: `rsync --rsh 'ssh -p {{port}}' --info=progress2 {{host}}:{{path/to/source}} {{path/to/destination}}`
unexpand
Convert blanks in each FILE to tabs, writing to standard output. With no FILE, or when FILE is -, read standard input. Mandatory arguments to long options are mandatory for short options too. -a, --all convert all blanks, instead of just initial blanks --first-only convert only leading sequences of blanks (overrides -a) -t, --tabs=N have tabs N characters apart instead of 8 (enables -a) -t, --tabs=LIST use comma separated list of tab positions. The last specified position can be prefixed with '/' to specify a tab size to use after the last explicitly specified tab stop. Also a prefix of '+' can be used to align remaining tab stops relative to the last specified tab stop instead of the first column --help display this help and exit --version output version information and exit
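A quick sketch of the -t option (the file names are placeholders): this converts runs of blanks to tab stops every 4 columns, and since -t enables -a, non-leading blanks are converted too:

    unexpand -t 4 indented.txt > tabbed.txt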
# unexpand > Convert spaces to tabs. More information: > https://www.gnu.org/software/coreutils/unexpand. * Convert blanks in each file to tabs, writing to `stdout`: `unexpand {{path/to/file}}` * Convert blanks to tabs, reading from `stdin`: `unexpand` * Convert all blanks, instead of just initial blanks: `unexpand -a {{path/to/file}}` * Convert only leading sequences of blanks (overrides -a): `unexpand --first-only {{path/to/file}}` * Have tabs a certain number of characters apart, not 8 (enables -a): `unexpand -t {{number}} {{path/to/file}}`
scp
scp copies files between hosts on a network. scp uses the SFTP protocol over a ssh(1) connection for data transfer, and uses the same authentication and provides the same security as a login session. scp will ask for passwords or passphrases if they are needed for authentication. The source and target may be specified as a local pathname, a remote host with optional path in the form [user@]host:[path], or a URI in the form scp://[user@]host[:port][/path]. Local file names can be made explicit using absolute or relative pathnames to avoid scp treating file names containing ‘:’ as host specifiers. When copying between two remote hosts, if the URI format is used, a port cannot be specified on the target if the -R option is used. The options are as follows: -3 Copies between two remote hosts are transferred through the local host. Without this option the data is copied directly between the two remote hosts. Note that, when using the legacy SCP protocol (via the -O flag), this option selects batch mode for the second host as scp cannot ask for passwords or passphrases for both hosts. This mode is the default. -4 Forces scp to use IPv4 addresses only. -6 Forces scp to use IPv6 addresses only. -A Allows forwarding of ssh-agent(1) to the remote system. The default is not to forward an authentication agent. -B Selects batch mode (prevents asking for passwords or passphrases). -C Compression enable. Passes the -C flag to ssh(1) to enable compression. -c cipher Selects the cipher to use for encrypting the data transfer. This option is directly passed to ssh(1). -D sftp_server_path Connect directly to a local SFTP server program rather than a remote one via ssh(1). This option may be useful in debugging the client and server. -F ssh_config Specifies an alternative per-user configuration file for ssh. This option is directly passed to ssh(1). -i identity_file Selects the file from which the identity (private key) for public key authentication is read. This option is directly passed to ssh(1). -J destination Connect to the target host by first making an scp connection to the jump host described by destination and then establishing a TCP forwarding to the ultimate destination from there. Multiple jump hops may be specified separated by comma characters. This is a shortcut to specify a ProxyJump configuration directive. This option is directly passed to ssh(1). -l limit Limits the used bandwidth, specified in Kbit/s. -O Use the legacy SCP protocol for file transfers instead of the SFTP protocol. Forcing the use of the SCP protocol may be necessary for servers that do not implement SFTP, for backwards-compatibility for particular filename wildcard patterns and for expanding paths with a ‘~’ prefix for older SFTP servers. -o ssh_option Can be used to pass options to ssh in the format used in ssh_config(5). This is useful for specifying options for which there is no separate scp command-line flag. For full details of the options listed below, and their possible values, see ssh_config(5). 
AddressFamily BatchMode BindAddress BindInterface CanonicalDomains CanonicalizeFallbackLocal CanonicalizeHostname CanonicalizeMaxDots CanonicalizePermittedCNAMEs CASignatureAlgorithms CertificateFile CheckHostIP Ciphers Compression ConnectionAttempts ConnectTimeout ControlMaster ControlPath ControlPersist GlobalKnownHostsFile GSSAPIAuthentication GSSAPIDelegateCredentials HashKnownHosts Host HostbasedAcceptedAlgorithms HostbasedAuthentication HostKeyAlgorithms HostKeyAlias Hostname IdentitiesOnly IdentityAgent IdentityFile IPQoS KbdInteractiveAuthentication KbdInteractiveDevices KexAlgorithms KnownHostsCommand LogLevel MACs NoHostAuthenticationForLocalhost NumberOfPasswordPrompts PasswordAuthentication PKCS11Provider Port PreferredAuthentications ProxyCommand ProxyJump PubkeyAcceptedAlgorithms PubkeyAuthentication RekeyLimit RequiredRSASize SendEnv ServerAliveInterval ServerAliveCountMax SetEnv StrictHostKeyChecking TCPKeepAlive UpdateHostKeys User UserKnownHostsFile VerifyHostKeyDNS -P port Specifies the port to connect to on the remote host. Note that this option is written with a capital ‘P’, because -p is already reserved for preserving the times and mode bits of the file. -p Preserves modification times, access times, and file mode bits from the source file. -q Quiet mode: disables the progress meter as well as warning and diagnostic messages from ssh(1). -R Copies between two remote hosts are performed by connecting to the origin host and executing scp there. This requires that scp running on the origin host can authenticate to the destination host without requiring a password. -r Recursively copy entire directories. Note that scp follows symbolic links encountered in the tree traversal. -S program Name of program to use for the encrypted connection. The program must understand ssh(1) options. -T Disable strict filename checking. By default when copying files from a remote host to a local directory scp checks that the received filenames match those requested on the command-line to prevent the remote end from sending unexpected or unwanted files. Because of differences in how various operating systems and shells interpret filename wildcards, these checks may cause wanted files to be rejected. This option disables these checks at the expense of fully trusting that the server will not send unexpected filenames. -v Verbose mode. Causes scp and ssh(1) to print debugging messages about their progress. This is helpful in debugging connection, authentication, and configuration problems. -X sftp_option Specify an option that controls aspects of SFTP protocol behaviour. The valid options are: nrequests=value Controls how many concurrent SFTP read or write requests may be in progress at any point in time during a download or upload. By default 64 requests may be active concurrently. buffer=value Controls the maximum buffer size for a single SFTP read/write operation used during download or upload. By default a 32KB buffer is used.
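Two illustrative invocations of the options above (all host, user, and file names are placeholders, and the -X values are examples rather than recommendations): copying through a jump host with -J, and tuning the SFTP transfer with -X:

    scp -J user@bastion report.pdf user@internal:/srv/reports/
    scp -X nrequests=128 -X buffer=65536 large.bin user@host:/tmp/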
# scp > Secure copy. Copy files between hosts using Secure Copy Protocol over SSH. > More information: https://man.openbsd.org/scp. * Copy a local file to a remote host: `scp {{path/to/local_file}} {{remote_host}}:{{path/to/remote_file}}` * Use a specific port when connecting to the remote host: `scp -P {{port}} {{path/to/local_file}} {{remote_host}}:{{path/to/remote_file}}` * Copy a file from a remote host to a local directory: `scp {{remote_host}}:{{path/to/remote_file}} {{path/to/local_directory}}` * Recursively copy the contents of a directory from a remote host to a local directory: `scp -r {{remote_host}}:{{path/to/remote_directory}} {{path/to/local_directory}}` * Copy a file between two remote hosts transferring through the local host: `scp -3 {{host1}}:{{path/to/remote_file}} {{host2}}:{{path/to/remote_directory}}` * Use a specific username when connecting to the remote host: `scp {{path/to/local_file}} {{remote_username}}@{{remote_host}}:{{path/to/remote_directory}}` * Use a specific ssh private key for authentication with the remote host: `scp -i {{~/.ssh/private_key}} {{local_file}} {{remote_host}}:{{/path/remote_file}}`
timedatectl
timedatectl may be used to query and change the system clock and its settings, and enable or disable time synchronization services. Use systemd-firstboot(1) to initialize the system time zone for mounted (but not booted) system images. timedatectl may be used to show the current status of time synchronization services, for example systemd-timesyncd.service(8). The following options are understood: --no-ask-password Do not query the user for authentication for privileged operations. --adjust-system-clock If set-local-rtc is invoked and this option is passed, the system clock is synchronized from the RTC again, taking the new setting into account. Otherwise, the RTC is synchronized from the system clock. --monitor If timesync-status is invoked and this option is passed, then timedatectl monitors the status of systemd-timesyncd.service(8) and updates the outputs. Use Ctrl+C to terminate the monitoring. -a, --all When showing properties of systemd-timesyncd.service(8), show all properties regardless of whether they are set or not. -p, --property= When showing properties of systemd-timesyncd.service(8), limit display to certain properties as specified as argument. If not specified, all set properties are shown. The argument should be a property name, such as "ServerName". If specified more than once, all properties with the specified names are shown. --value When printing properties with show-timesync, only print the value, and skip the property name and "=". -H, --host= Execute the operation remotely. Specify a hostname, or a username and hostname separated by "@", to connect to. The hostname may optionally be suffixed by a port ssh is listening on, separated by ":", and then a container name, separated by "/", which connects directly to a specific container on the specified host. This will use SSH to talk to the remote machine manager instance. Container names may be enumerated with machinectl -H HOST. Put IPv6 addresses in brackets. -M, --machine= Execute operation on a local container. Specify a container name to connect to, optionally prefixed by a user name to connect as and a separating "@" character. If the special string ".host" is used in place of the container name, a connection to the local system is made (which is useful to connect to a specific user's user bus: "--user --machine=lennart@.host"). If the "@" syntax is not used, the connection is made as root user. If the "@" syntax is used either the left hand side or the right hand side may be omitted (but not both) in which case the local user name and ".host" are implied. -h, --help Print a short help text and exit. --version Print a short version string and exit. --no-pager Do not pipe output into a pager.
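For instance, the property options above can be combined to print just the configured NTP server of systemd-timesyncd(8) (assuming timesyncd is the synchronization service in use):

    timedatectl show-timesync --property=ServerName --value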
# timedatectl > Control the system time and date. More information: > https://manned.org/timedatectl. * Check the current system clock time: `timedatectl` * Set the local time of the system clock directly: `timedatectl set-time "{{yyyy-MM-dd hh:mm:ss}}"` * List available timezones: `timedatectl list-timezones` * Set the system timezone: `timedatectl set-timezone {{timezone}}` * Enable Network Time Protocol (NTP) synchronization: `timedatectl set-ntp on` * Change the hardware clock time standard to localtime: `timedatectl set-local-rtc 1`
screen
Screen is a full-screen window manager that multiplexes a physical terminal between several processes (typically interactive shells). Each virtual terminal provides the functions of a DEC VT100 terminal and, in addition, several control functions from the ISO 6429 (ECMA 48, ANSI X3.64) and ISO 2022 standards (e.g. insert/delete line and support for multiple character sets). There is a scrollback history buffer for each virtual terminal and a copy-and-paste mechanism that allows moving text regions between windows. When screen is called, it creates a single window with a shell in it (or the specified command) and then gets out of your way so that you can use the program as you normally would. Then, at any time, you can create new (full-screen) windows with other programs in them (including more shells), kill existing windows, view a list of windows, turn output logging on and off, copy-and-paste text between windows, view the scrollback history, switch between windows in whatever manner you wish, etc. All windows run their programs completely independent of each other. Programs continue to run when their window is currently not visible and even when the whole screen session is detached from the user's terminal. When a program terminates, screen (per default) kills the window that contained it. If this window was in the foreground, the display switches to the previous window; if none are left, screen exits. Shells usually distinguish between running as login-shell or sub-shell. Screen runs them as sub-shells, unless told otherwise (see the "shell" .screenrc command). Everything you type is sent to the program running in the current window. The only exception to this is the one keystroke that is used to initiate a command to the window manager. By default, each command begins with a control-a (abbreviated C-a from now on), and is followed by one other keystroke. The command character and all the key bindings can be fully customized to be anything you like, though they are always two characters in length. Screen does not understand the prefix "C-" to mean control, although this notation is used in this manual for readability. Please use the caret notation ("^A" instead of "C-a") as arguments to e.g. the escape command or the -e option. Screen will also print out control characters in caret notation. The standard way to create a new window is to type "C-a c". This creates a new window running a shell and switches to that window immediately, regardless of the state of the process running in the current window. Similarly, you can create a new window with a custom command in it by first binding the command to a keystroke (in your .screenrc file or at the "C-a :" command line) and then using it just like the "C-a c" command. In addition, new windows can be created by running a command like: screen emacs prog.c from a shell prompt within a previously created window. This will not run another copy of screen, but will instead supply the command name and its arguments to the window manager (specified in the $STY environment variable), which will use it to create the new window. The above example would start the emacs editor (editing prog.c) and switch to its window. - Note that you cannot transport environment variables from the invoking shell to the application (emacs in this case), because it is forked from the parent screen process, not from the invoking shell.
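As an example of the caret notation just described, this starts a session whose command character is C-b rather than C-a (the second character, which generates a literal command character, is chosen arbitrarily here):

    screen -e '^Bb'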
If "/etc/utmp" is writable by screen, an appropriate record will be written to this file for each window, and removed when the window is terminated. This is useful for working with "talk", "script", "shutdown", "rsend", "sccs" and other similar programs that use the utmp file to determine who you are. As long as screen is active on your terminal, the terminal's own record is removed from the utmp file. See also "C-a L".
# screen > Hold a session open on a remote server. Manage multiple windows with a > single SSH connection. See also `tmux` and `zellij`. More information: > https://manned.org/screen. * Start a new screen session: `screen` * Start a new named screen session: `screen -S {{session_name}}` * Start a new daemon and log the output to `screenlog.x`: `screen -dmLS {{session_name}} {{command}}` * Show open screen sessions: `screen -ls` * Reattach to an open screen: `screen -r {{session_name}}` * Detach from inside a screen: `Ctrl + A, D` * Kill the current screen session: `Ctrl + A, K` * Kill a detached screen: `screen -X -S {{session_name}} quit`
write
The write utility shall read lines from the standard input and write them to the terminal of the specified user. When first invoked, it shall write the message: Message from sender-login-id (sending-terminal) [date]... to user_name. When it has successfully completed the connection, the sender's terminal shall be alerted twice to indicate that what the sender is typing is being written to the recipient's terminal. If the recipient wants to reply, this can be accomplished by typing: write sender-login-id [sending-terminal] upon receipt of the initial message. Whenever a line of input as delimited by an NL, EOF, or EOL special character (see the Base Definitions volume of POSIX.1‐2017, Chapter 11, General Terminal Interface) is accumulated while in canonical input mode, the accumulated data shall be written on the other user's terminal. Characters shall be processed as follows: * Typing <alert> shall write the <alert> character to the recipient's terminal. * Typing the erase and kill characters shall affect the sender's terminal in the manner described by the termios interface in the Base Definitions volume of POSIX.1‐2017, Chapter 11, General Terminal Interface. * Typing the interrupt or end-of-file characters shall cause write to write an appropriate message ("EOT\n" in the POSIX locale) to the recipient's terminal and exit. * Typing characters from LC_CTYPE classifications print or space shall cause those characters to be sent to the recipient's terminal. * When and only when the stty iexten local mode is enabled, the existence and processing of additional special control characters and multi-byte or single-byte functions is implementation-defined. * Typing other non-printable characters shall cause implementation-defined sequences of printable characters to be written to the recipient's terminal. To write to a user who is logged in more than once, the terminal argument can be used to indicate which terminal to write to; otherwise, the recipient's terminal is selected in an implementation-defined manner and an informational message is written to the sender's standard output, indicating which terminal was chosen. Permission to be a recipient of a write message can be denied or granted by use of the mesg utility. However, a user's privilege may further constrain the domain of accessibility of other users' terminals. The write utility shall fail when the user lacks appropriate privileges to perform the requested action.
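Since write reads lines from standard input, a one-shot message can be piped in (the user name and terminal are placeholders):

    printf 'Lunch at noon?\n' | write alice pts/3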
# write > Write a message on the terminal of a specified logged in user (ctrl-C to > stop writing messages). Use the `who` command to find out all terminal_ids > of all users active on the system. See also `mesg`. More information: > https://manned.org/write. * Send a message to a given user on a given terminal id: `write {{username}} {{terminal_id}}` * Send message to "testuser" on terminal `/dev/tty/5`: `write {{testuser}} {{tty/5}}` * Send message to "johndoe" on pseudo terminal `/dev/pts/5`: `write {{johndoe}} {{pts/5}}`
as
GNU as is really a family of assemblers. If you use (or have used) the GNU assembler on one architecture, you should find a fairly similar environment when you use it on another architecture. Each version has much in common with the others, including object file formats, most assembler directives (often called pseudo-ops) and assembler syntax. as is primarily intended to assemble the output of the GNU C compiler "gcc" for use by the linker "ld". Nevertheless, we've tried to make as assemble correctly everything that other assemblers for the same machine would assemble. Any exceptions are documented explicitly. This doesn't mean as always uses the same syntax as another assembler for the same architecture; for example, we know of several incompatible versions of 680x0 assembly language syntax. Each time you run as it assembles exactly one source program. The source program is made up of one or more files. (The standard input is also a file.) You give as a command line that has zero or more input file names. The input files are read (from left file name to right). A command-line argument (in any position) that has no special meaning is taken to be an input file name. If you give as no file names it attempts to read one input file from the as standard input, which is normally your terminal. You may have to type ctl-D to tell as there is no more program to assemble. Use -- if you need to explicitly name the standard input file in your command line. If the source is empty, as produces a small, empty object file. as may write warnings and error messages to the standard error file (usually your terminal). This should not happen when a compiler runs as automatically. Warnings report an assumption made so that as could keep assembling a flawed program; errors report a grave problem that stops the assembly. If you are invoking as via the GNU C compiler, you can use the -Wa option to pass arguments through to the assembler. The assembler arguments must be separated from each other (and the -Wa) by commas. For example: gcc -c -g -O -Wa,-alh,-L file.c This passes two options to the assembler: -alh (emit a listing to standard output with high-level and assembly source) and -L (retain local symbols in the symbol table). Usually you do not need to use this -Wa mechanism, since many compiler command-line options are automatically passed to the assembler by the compiler. (You can call the GNU compiler driver with the -v option to see precisely what options it passes to each compilation pass, including the assembler.) @file Read command-line options from file. The options read are inserted in place of the original @file option. If file does not exist, or cannot be read, then the option will be treated literally, and not removed. Options in file are separated by whitespace. A whitespace character may be included in an option by surrounding the entire option in either single or double quotes. Any character (including a backslash) may be included by prefixing the character to be included with a backslash. The file may itself contain additional @file options; any such options will be processed recursively. 
-a[cdghlmns] Turn on listings, in any of a variety of ways: -ac omit false conditionals -ad omit debugging directives -ag include general information, like as version and options passed -ah include high-level source -al include assembly -am include macro expansions -an omit forms processing -as include symbols =file set the name of the listing file You may combine these options; for example, use -aln for assembly listing without forms processing. The =file option, if used, must be the last one. By itself, -a defaults to -ahls. --alternate Begin in alternate macro mode. --compress-debug-sections Compress DWARF debug sections using zlib with SHF_COMPRESSED from the ELF ABI. The resulting object file may not be compatible with older linkers and object file utilities. Note if compression would make a given section larger, then it is not compressed. --compress-debug-sections=none --compress-debug-sections=zlib --compress-debug-sections=zlib-gnu --compress-debug-sections=zlib-gabi --compress-debug-sections=zstd These options control how DWARF debug sections are compressed. --compress-debug-sections=none is equivalent to --nocompress-debug-sections. --compress-debug-sections=zlib and --compress-debug-sections=zlib-gabi are equivalent to --compress-debug-sections. --compress-debug-sections=zlib-gnu compresses DWARF debug sections using the obsoleted zlib-gnu format. The debug sections are renamed to begin with .zdebug. --compress-debug-sections=zstd compresses DWARF debug sections using zstd. Note - if compression would actually make a section larger, then it is not compressed nor renamed. --nocompress-debug-sections Do not compress DWARF debug sections. This is usually the default for all targets except the x86/x86_64, but a configure time option can be used to override this. -D Enable debugging in target specific backends, if supported. Otherwise ignored. Even if ignored, this option is accepted for script compatibility with calls to other assemblers. --debug-prefix-map old=new When assembling files in directory old, record debugging information describing them as in new instead. --defsym sym=value Define the symbol sym to be value before assembling the input file. value must be an integer constant. As in C, a leading 0x indicates a hexadecimal value, and a leading 0 indicates an octal value. The value of the symbol can be overridden inside a source file via the use of a ".set" pseudo-op. --dump-config Displays how the assembler is configured and then exits. --elf-stt-common=no --elf-stt-common=yes These options control whether the ELF assembler should generate common symbols with the "STT_COMMON" type. The default can be controlled by a configure option --enable-elf-stt-common. --emulation=name If the assembler is configured to support multiple different target configurations then this option can be used to select the desired form. -f "fast"---skip whitespace and comment preprocessing (assume source is compiler output). -g --gen-debug Generate debugging information for each assembler source line using whichever debug format is preferred by the target. This currently means either STABS, ECOFF or DWARF2. When the debug format is DWARF then a ".debug_info" and ".debug_line" section is only emitted when the assembly file doesn't generate one itself. --gstabs Generate stabs debugging information for each assembler line. This may help debugging assembler code, if the debugger can handle it.
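A sketch combining the options above (the file names and symbol are placeholders): this assembles prog.s, writes an assembly listing to prog.lst via the =file suffix, and predefines a symbol that .if directives in the source can test:

    as -al=prog.lst --defsym DEBUG=1 -o prog.o prog.s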
--gstabs+ Generate stabs debugging information for each assembler line, with GNU extensions that probably only gdb can handle, and that could make other debuggers crash or refuse to read your program. This may help debugging assembler code. Currently the only GNU extension is the location of the current working directory at assembling time. --gdwarf-2 Generate DWARF2 debugging information for each assembler line. This may help debugging assembler code, if the debugger can handle it. Note---this option is only supported by some targets, not all of them. --gdwarf-3 This option is the same as the --gdwarf-2 option, except that it allows for the possibility of the generation of extra debug information as per version 3 of the DWARF specification. Note - enabling this option does not guarantee the generation of any extra information, the choice to do so is on a per target basis. --gdwarf-4 This option is the same as the --gdwarf-2 option, except that it allows for the possibility of the generation of extra debug information as per version 4 of the DWARF specification. Note - enabling this option does not guarantee the generation of any extra information, the choice to do so is on a per target basis. --gdwarf-5 This option is the same as the --gdwarf-2 option, except that it allows for the possibility of the generation of extra debug information as per version 5 of the DWARF specification. Note - enabling this option does not guarantee the generation of any extra information, the choice to do so is on a per target basis. --gdwarf-sections Instead of creating a .debug_line section, create a series of .debug_line.foo sections where foo is the name of the corresponding code section. For example, a code section called .text.func will have its dwarf line number information placed into a section called .debug_line.text.func. If the code section is just called .text then the debug line section will still be called just .debug_line without any suffix. --gdwarf-cie-version=version Control which version of DWARF Common Information Entries (CIEs) are produced. When this flag is not specified the default is version 1, though some targets can modify this default. Other possible values for version are 3 or 4. --generate-missing-build-notes=yes --generate-missing-build-notes=no These options control whether the ELF assembler should generate GNU Build attribute notes if none are present in the input sources. The default can be controlled by the --enable-generate-build-notes configure option. --gsframe Create .sframe section from CFI directives. --hash-size N Ignored. Supported for command line compatibility with other assemblers. --help Print a summary of the command-line options and exit. --target-help Print a summary of all target specific options and exit. -I dir Add directory dir to the search list for ".include" directives. -J Don't warn about signed overflow. -K Issue warnings when difference tables altered for long displacements. -L --keep-locals Keep (in the symbol table) local symbols. These symbols start with system-specific local label prefixes, typically .L for ELF systems or L for traditional a.out systems. --listing-lhs-width=number Set the maximum width, in words, of the output data column for an assembler listing to number. --listing-lhs-width2=number Set the maximum width, in words, of the output data column for continuation lines in an assembler listing to number. --listing-rhs-width=number Set the maximum width of an input source line, as displayed in a listing, to number bytes.
--listing-cont-lines=number Set the maximum number of lines printed in a listing for a single line of input to number + 1. --multibyte-handling=allow --multibyte-handling=warn --multibyte-handling=warn-sym-only --multibyte-handling=warn_sym_only Controls how the assembler handles multibyte characters in the input. The default (which can be restored by using the allow argument) is to allow such characters without complaint. Using the warn argument will make the assembler generate a warning message whenever any multibyte character is encountered. Using the warn-sym-only argument will only cause a warning to be generated when a symbol is defined with a name that contains multibyte characters. (References to undefined symbols will not generate a warning). --no-pad-sections Stop the assembler from padding the ends of output sections to the alignment of that section. The default is to pad the sections, but this can waste space which might be needed on targets which have tight memory constraints. -o objfile Name the object-file output from as objfile. -R Fold the data section into the text section. --reduce-memory-overheads Ignored. Supported for compatibility with tools that pass the same option to both the assembler and the linker. --sectname-subst Honor substitution sequences in section names. --size-check=error --size-check=warning Issue an error or warning for invalid ELF .size directive. --statistics Print the maximum space (in bytes) and total time (in seconds) used by assembly. --strip-local-absolute Remove local absolute symbols from the outgoing symbol table. -v -version Print the as version. --version Print the as version and exit. -W --no-warn Suppress warning messages. --fatal-warnings Treat warnings as errors. --warn Don't suppress warning messages or treat them as errors. -w Ignored. -x Ignored. -Z Generate an object file even after errors. -- | files ... Standard input, or source files to assemble. The following options are available when as is configured for the 64-bit mode of the ARM Architecture (AArch64). -EB This option specifies that the output generated by the assembler should be marked as being encoded for a big-endian processor. -EL This option specifies that the output generated by the assembler should be marked as being encoded for a little-endian processor. -mabi=abi Specify which ABI the source code uses. The recognized arguments are: "ilp32" and "lp64", which select the ELF32 and ELF64 object file formats respectively. The default is "lp64". -mcpu=processor[+extension...] This option specifies the target processor. The assembler will issue an error message if an attempt is made to assemble an instruction which will not execute on the target processor. The following processor names are recognized: "cortex-a34", "cortex-a35", "cortex-a53", "cortex-a55", "cortex-a57", "cortex-a65", "cortex-a65ae", "cortex-a72", "cortex-a73", "cortex-a75", "cortex-a76", "cortex-a76ae", "cortex-a77", "cortex-a78", "cortex-a78ae", "cortex-a78c", "cortex-a510", "cortex-a710", "ares", "exynos-m1", "falkor", "neoverse-n1", "neoverse-n2", "neoverse-e1", "neoverse-v1", "qdf24xx", "saphira", "thunderx", "vulcan", "xgene1", "xgene2", "cortex-r82", "cortex-x1", and "cortex-x2". The special name "all" may be used to allow the assembler to accept instructions valid for any supported processor, including all optional extensions. In addition to the basic instruction set, the assembler can be told to accept, or restrict, various extension mnemonics that extend the processor.
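A concrete invocation of the processor+extension syntax (a sketch; the cross-assembler name, extension, and file names are placeholders, and a native as on AArch64 behaves the same way):

    aarch64-linux-gnu-as -mcpu=cortex-a72+crypto -mabi=lp64 -o out.o in.s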
If some implementations of a particular processor can have an extension, then those extensions are automatically enabled. Consequently, you will not normally have to specify any additional extensions. -march=architecture[+extension...] This option specifies the target architecture. The assembler will issue an error message if an attempt is made to assemble an instruction which will not execute on the target architecture. The following architecture names are recognized: "armv8-a", "armv8.1-a", "armv8.2-a", "armv8.3-a", "armv8.4-a", "armv8.5-a", "armv8.6-a", "armv8.7-a", "armv8.8-a", "armv8-r", "armv9-a", "armv9.1-a", "armv9.2-a", and "armv9.3-a". If both -mcpu and -march are specified, the assembler will use the setting for -mcpu. If neither are specified, the assembler will default to -mcpu=all. The architecture option can be extended with the same instruction set extension options as the -mcpu option. Unlike -mcpu, extensions are not always enabled by default. -mverbose-error This option enables verbose error messages for AArch64 gas. This option is enabled by default. -mno-verbose-error This option disables verbose error messages in AArch64 gas. The following options are available when as is configured for an Alpha processor. -mcpu This option specifies the target processor. If an attempt is made to assemble an instruction which will not execute on the target processor, the assembler may either expand the instruction as a macro or issue an error message. This option is equivalent to the ".arch" directive. The following processor names are recognized: 21064, "21064a", 21066, 21068, 21164, "21164a", "21164pc", 21264, "21264a", "21264b", "ev4", "ev45", "lca45", "ev5", "ev56", "pca56", "ev6", "ev67", "ev68". The special name "all" may be used to allow the assembler to accept instructions valid for any Alpha processor. In order to support existing practice in OSF/1 with respect to ".arch", and existing practice within MILO (the Linux ARC bootloader), the numbered processor names (e.g. 21064) enable the processor-specific PALcode instructions, while the "electro-vlasic" names (e.g. "ev4") do not. -mdebug -no-mdebug Enables or disables the generation of ".mdebug" encapsulation for stabs directives and procedure descriptors. The default is to automatically enable ".mdebug" when the first stabs directive is seen. -relax This option forces all relocations to be put into the object file, instead of saving space and resolving some relocations at assembly time. Note that this option does not propagate all symbol arithmetic into the object file, because not all symbol arithmetic can be represented. However, the option can still be useful in specific applications. -replace -noreplace Enables or disables the optimization of procedure calls, both at assemblage and at link time. These options are only available for VMS targets and "-replace" is the default. See section 1.4.1 of the OpenVMS Linker Utility Manual. -g This option is used when the compiler generates debug information. When gcc is using mips-tfile to generate debug information for ECOFF, local labels must be passed through to the object file. Otherwise this option has no effect. -Gsize A local common symbol larger than size is placed in ".bss", while smaller symbols are placed in ".sbss". -F -32addr These options are ignored for backward compatibility. The following options are available when as is configured for an ARC processor. -mcpu=cpu This option selects the core processor variant.
-EB | -EL Select either big-endian (-EB) or little-endian (-EL) output. -mcode-density Enable Code Density extension instructions. The following options are available when as is configured for the ARM processor family. -mcpu=processor[+extension...] Specify which ARM processor variant is the target. -march=architecture[+extension...] Specify which ARM architecture variant is used by the target. -mfpu=floating-point-format Select which Floating Point architecture is the target. -mfloat-abi=abi Select which floating point ABI is in use. -mthumb Enable Thumb only instruction decoding. -mapcs-32 | -mapcs-26 | -mapcs-float | -mapcs-reentrant Select which procedure calling convention is in use. -EB | -EL Select either big-endian (-EB) or little-endian (-EL) output. -mthumb-interwork Specify that the code has been generated with interworking between Thumb and ARM code in mind. -mccs Turns on CodeComposer Studio assembly syntax compatibility mode. -k Specify that PIC code has been generated. The following options are available when as is configured for the Blackfin processor family. -mcpu=processor[-sirevision] This option specifies the target processor. The optional sirevision is not used in assembler. It's here such that GCC can easily pass down its "-mcpu=" option. The assembler will issue an error message if an attempt is made to assemble an instruction which will not execute on the target processor. The following processor names are recognized: "bf504", "bf506", "bf512", "bf514", "bf516", "bf518", "bf522", "bf523", "bf524", "bf525", "bf526", "bf527", "bf531", "bf532", "bf533", "bf534", "bf535" (not implemented yet), "bf536", "bf537", "bf538", "bf539", "bf542", "bf542m", "bf544", "bf544m", "bf547", "bf547m", "bf548", "bf548m", "bf549", "bf549m", "bf561", and "bf592". -mfdpic Assemble for the FDPIC ABI. -mno-fdpic -mnopic Disable -mfdpic. The following options are available when as is configured for the Linux kernel BPF processor family. -EB This option specifies that the assembler should emit big-endian eBPF. -EL This option specifies that the assembler should emit little-endian eBPF. Note that if no endianness option is specified in the command line, the host endianness is used. See the info pages for documentation of the CRIS-specific options. The following options are available when as is configured for the C-SKY processor family. -march=archname Assemble for architecture archname. The --help option lists valid values for archname. -mcpu=cpuname Assemble for architecture cpuname. The --help option lists valid values for cpuname. -EL -mlittle-endian Generate little-endian output. -EB -mbig-endian Generate big-endian output. -fpic -pic Generate position-independent code. -mljump -mno-ljump Enable/disable transformation of the short branch instructions "jbf", "jbt", and "jbr" to "jmpi". This option is for V2 processors only. It is ignored on CK801 and CK802 targets, which do not support the "jmpi" instruction, and is enabled by default for other processors. -mbranch-stub -mno-branch-stub Pass through "R_CKCORE_PCREL_IMM26BY2" relocations for "bsr" instructions to the linker. This option is only available for bare-metal C-SKY V2 ELF targets, where it is enabled by default. It cannot be used in code that will be dynamically linked against shared libraries. -force2bsr -mforce2bsr -no-force2bsr -mno-force2bsr Enable/disable transformation of "jbsr" instructions to "bsr". This option is always enabled (and -mno-force2bsr is ignored) for CK801/CK802 targets.
It is also always enabled when -mbranch-stub is in effect. -jsri2bsr -mjsri2bsr -no-jsri2bsr -mno-jsri2bsr Enable/disable transformation of "jsri" instructions to "bsr". This option is enabled by default. -mnolrw -mno-lrw Enable/disable transformation of "lrw" instructions into a "movih"/"ori" pair. -melrw -mno-elrw Enable/disable extended "lrw" instructions. This option is enabled by default for CK800-series processors. -mlaf -mliterals-after-func -mno-laf -mno-literals-after-func Enable/disable placement of literal pools after each function. -mlabr -mliterals-after-br -mno-labr -mno-literals-after-br Enable/disable placement of literal pools after unconditional branches. This option is enabled by default. -mistack -mno-istack Enable/disable interrupt stack instructions. This option is enabled by default on CK801 and CK802 processors. The following options explicitly enable certain optional instructions. These features are also enabled implicitly by using "-mcpu=" to specify a processor that supports them. -mhard-float Enable hard float instructions. -mmp Enable multiprocessor instructions. -mcp Enable coprocessor instructions. -mcache Enable cache prefetch instruction. -msecurity Enable C-SKY security instructions. -mtrust Enable C-SKY trust instructions. -mdsp Enable DSP instructions. -medsp Enable enhanced DSP instructions. -mvdsp Enable vector DSP instructions. The following options are available when as is configured for an Epiphany processor. -mepiphany Specifies that both 32- and 16-bit instructions are allowed. This is the default behavior. -mepiphany16 Restricts the permitted instructions to just the 16-bit set. The following options are available when as is configured for an H8/300 processor. The Renesas H8/300 version of "as" has the following machine-dependent options: -h-tick-hex Support H'00 style hex constants in addition to 0x00 style. -mach=name Sets the H8300 machine variant. The following machine names are recognised: "h8300h", "h8300hn", "h8300s", "h8300sn", "h8300sx" and "h8300sxn". The following options are available when as is configured for an i386 processor. --32 | --x32 | --64 Select the word size, either 32 bits or 64 bits. --32 implies Intel i386 architecture, while --x32 and --64 imply AMD x86-64 architecture with 32-bit or 64-bit word-size respectively. These options are only available with the ELF object file format, and require that the necessary BFD support has been included (on a 32-bit platform you have to add --enable-64-bit-bfd to configure to enable 64-bit usage and use x86-64 as the target platform). -n By default, x86 GAS replaces multiple nop instructions used for alignment within code sections with multi-byte nop instructions such as leal 0(%esi,1),%esi. This switch disables the optimization if a single byte nop (0x90) is explicitly specified as the fill byte for alignment. --divide On SVR4-derived platforms, the character / is treated as a comment character, which means that it cannot be used in expressions. The --divide option turns / into a normal character. This does not disable / at the beginning of a line starting a comment, or affect using # for starting a comment. -march=CPU[+EXTENSION...] This option specifies the target processor. The assembler will issue an error message if an attempt is made to assemble an instruction which will not execute on the target processor. 
The following processor names are recognized: "i8086", "i186", "i286", "i386", "i486", "i586", "i686", "pentium", "pentiumpro", "pentiumii", "pentiumiii", "pentium4", "prescott", "nocona", "core", "core2", "corei7", "iamcu", "k6", "k6_2", "athlon", "opteron", "k8", "amdfam10", "bdver1", "bdver2", "bdver3", "bdver4", "znver1", "znver2", "znver3", "znver4", "btver1", "btver2", "generic32" and "generic64". In addition to the basic instruction set, the assembler can be told to accept various extension mnemonics. For example, "-march=i686+sse4+vmx" extends i686 with sse4 and vmx. The following extensions are currently supported: 8087, 287, 387, 687, "cmov", "fxsr", "mmx", "sse", "sse2", "sse3", "sse4a", "ssse3", "sse4.1", "sse4.2", "sse4", "avx", "avx2", "adx", "rdseed", "prfchw", "smap", "mpx", "sha", "rdpid", "ptwrite", "cet", "gfni", "vaes", "vpclmulqdq", "prefetchwt1", "clflushopt", "se1", "clwb", "movdiri", "movdir64b", "enqcmd", "serialize", "tsxldtrk", "kl", "widekl", "hreset", "avx512f", "avx512cd", "avx512er", "avx512pf", "avx512vl", "avx512bw", "avx512dq", "avx512ifma", "avx512vbmi", "avx512_4fmaps", "avx512_4vnniw", "avx512_vpopcntdq", "avx512_vbmi2", "avx512_vnni", "avx512_bitalg", "avx512_vp2intersect", "tdx", "avx512_bf16", "avx_vnni", "avx512_fp16", "prefetchi", "avx_ifma", "avx_vnni_int8", "cmpccxadd", "wrmsrns", "msrlist", "avx_ne_convert", "rao_int", "amx_int8", "amx_bf16", "amx_fp16", "amx_tile", "vmx", "vmfunc", "smx", "xsave", "xsaveopt", "xsavec", "xsaves", "aes", "pclmul", "fsgsbase", "rdrnd", "f16c", "bmi2", "fma", "movbe", "ept", "lzcnt", "popcnt", "hle", "rtm", "tsx", "invpcid", "clflush", "mwaitx", "clzero", "wbnoinvd", "pconfig", "waitpkg", "uintr", "cldemote", "rdpru", "mcommit", "sev_es", "lwp", "fma4", "xop", "cx16", "syscall", "rdtscp", "3dnow", "3dnowa", "sse4a", "sse5", "snp", "invlpgb", "tlbsync", "svme" and "padlock". Note that these extension mnemonics can be prefixed with "no" to revoke the respective (and any dependent) functionality. When the ".arch" directive is used with -march, the ".arch" directive will take precedent. -mtune=CPU This option specifies a processor to optimize for. When used in conjunction with the -march option, only instructions of the processor specified by the -march option will be generated. Valid CPU values are identical to the processor list of -march=CPU. -msse2avx This option specifies that the assembler should encode SSE instructions with VEX prefix. -muse-unaligned-vector-move This option specifies that the assembler should encode aligned vector move as unaligned vector move. -msse-check=none -msse-check=warning -msse-check=error These options control if the assembler should check SSE instructions. -msse-check=none will make the assembler not to check SSE instructions, which is the default. -msse-check=warning will make the assembler issue a warning for any SSE instruction. -msse-check=error will make the assembler issue an error for any SSE instruction. -mavxscalar=128 -mavxscalar=256 These options control how the assembler should encode scalar AVX instructions. -mavxscalar=128 will encode scalar AVX instructions with 128bit vector length, which is the default. -mavxscalar=256 will encode scalar AVX instructions with 256bit vector length. WARNING: Don't use this for production code - due to CPU errata the resulting code may not work on certain models. -mvexwig=0 -mvexwig=1 These options control how the assembler should encode VEX.W-ignored (WIG) VEX instructions. 
-mvexwig=0 will encode WIG VEX instructions with vex.w = 0, which is the default. -mvexwig=1 will encode WIG VEX instructions with vex.w = 1. WARNING: Don't use this for production code - due to CPU errata the resulting code may not work on certain models. -mevexlig=128 -mevexlig=256 -mevexlig=512 These options control how the assembler should encode length-ignored (LIG) EVEX instructions. -mevexlig=128 will encode LIG EVEX instructions with 128bit vector length, which is the default. -mevexlig=256 and -mevexlig=512 will encode LIG EVEX instructions with 256bit and 512bit vector length, respectively. -mevexwig=0 -mevexwig=1 These options control how the assembler should encode w-ignored (WIG) EVEX instructions. -mevexwig=0 will encode WIG EVEX instructions with evex.w = 0, which is the default. -mevexwig=1 will encode WIG EVEX instructions with evex.w = 1. -mmnemonic=att -mmnemonic=intel This option specifies the instruction mnemonics for matching instructions. The ".att_mnemonic" and ".intel_mnemonic" directives will take precedence. -msyntax=att -msyntax=intel This option specifies the instruction syntax when processing instructions. The ".att_syntax" and ".intel_syntax" directives will take precedence. -mnaked-reg This option specifies that registers don't require a % prefix. The ".att_syntax" and ".intel_syntax" directives will take precedence. -madd-bnd-prefix This option forces the assembler to add a BND prefix to all branches, even if such a prefix was not explicitly specified in the source code. -mshared On ELF targets, the assembler normally optimizes out non-PLT relocations against defined non-weak global branch targets with default visibility. The -mshared option tells the assembler to generate code which may go into a shared library where all non-weak global branch targets with default visibility can be preempted. The resulting code is slightly bigger. This option only affects the handling of branch instructions. -mbig-obj On PE/COFF targets this option forces the use of the big object file format, which allows more than 32768 sections. -momit-lock-prefix=no -momit-lock-prefix=yes These options control how the assembler should encode the lock prefix. This option is intended as a workaround for processors that fail on the lock prefix. It can only be safely used with single-core, single-thread computers. -momit-lock-prefix=yes will omit all lock prefixes. -momit-lock-prefix=no will encode the lock prefix as usual, which is the default. -mfence-as-lock-add=no -mfence-as-lock-add=yes These options control how the assembler should encode lfence, mfence and sfence. -mfence-as-lock-add=yes will encode lfence, mfence and sfence as lock addl $0x0, (%rsp) in 64-bit mode and lock addl $0x0, (%esp) in 32-bit mode. -mfence-as-lock-add=no will encode lfence, mfence and sfence as usual, which is the default. -mrelax-relocations=no -mrelax-relocations=yes These options control whether the assembler should generate relax relocations, R_386_GOT32X, in 32-bit mode, or R_X86_64_GOTPCRELX and R_X86_64_REX_GOTPCRELX, in 64-bit mode. -mrelax-relocations=yes will generate relax relocations. -mrelax-relocations=no will not generate relax relocations. The default can be controlled by the configure option --enable-x86-relax-relocations. -malign-branch-boundary=NUM This option controls how the assembler should align branches with segment prefixes or NOP. NUM must be a power of 2. It should be 0 or no less than 16. Branches will be aligned within a NUM byte boundary. 
-malign-branch-boundary=0, which is the default, doesn't align branches. -malign-branch=TYPE[+TYPE...] This option specifies the types of branches to align. TYPE is a combination of jcc, which aligns conditional jumps, fused, which aligns fused conditional jumps, jmp, which aligns unconditional jumps, call, which aligns calls, ret, which aligns rets, and indirect, which aligns indirect jumps and calls. The default is -malign-branch=jcc+fused+jmp. -malign-branch-prefix-size=NUM This option specifies the maximum number of prefixes on an instruction to align branches. NUM should be between 0 and 5. The default NUM is 5. -mbranches-within-32B-boundaries This option aligns conditional jumps, fused conditional jumps and unconditional jumps within a 32 byte boundary with up to 5 segment prefixes on an instruction. It is equivalent to -malign-branch-boundary=32 -malign-branch=jcc+fused+jmp -malign-branch-prefix-size=5. The default doesn't align branches. -mlfence-after-load=no -mlfence-after-load=yes These options control whether the assembler should generate lfence after load instructions. -mlfence-after-load=yes will generate lfence. -mlfence-after-load=no will not generate lfence, which is the default. -mlfence-before-indirect-branch=none -mlfence-before-indirect-branch=all -mlfence-before-indirect-branch=register -mlfence-before-indirect-branch=memory These options control whether the assembler should generate lfence before indirect near branch instructions. -mlfence-before-indirect-branch=all will generate lfence before indirect near branches via a register and issue a warning before indirect near branches via memory. It also implicitly sets -mlfence-before-ret=shl when there's no explicit -mlfence-before-ret=. -mlfence-before-indirect-branch=register will generate lfence before indirect near branches via a register. -mlfence-before-indirect-branch=memory will issue a warning before indirect near branches via memory. -mlfence-before-indirect-branch=none will neither generate lfence nor issue a warning, which is the default. Note that lfence won't be generated before an indirect near branch via a register with -mlfence-after-load=yes, since lfence will be generated after loading the branch target register. -mlfence-before-ret=none -mlfence-before-ret=shl -mlfence-before-ret=or -mlfence-before-ret=yes -mlfence-before-ret=not These options control whether the assembler should generate lfence before ret. -mlfence-before-ret=or will generate an or instruction with lfence. -mlfence-before-ret=shl/yes will generate a shl instruction with lfence. -mlfence-before-ret=not will generate a not instruction with lfence. -mlfence-before-ret=none will not generate lfence, which is the default. -mx86-used-note=no -mx86-used-note=yes These options control whether the assembler should generate GNU_PROPERTY_X86_ISA_1_USED and GNU_PROPERTY_X86_FEATURE_2_USED GNU property notes. The default can be controlled by the --enable-x86-used-note configure option. -mevexrcig=rne -mevexrcig=rd -mevexrcig=ru -mevexrcig=rz These options control how the assembler should encode SAE-only EVEX instructions. -mevexrcig=rne will encode the RC bits of EVEX instructions with 00, which is the default. -mevexrcig=rd, -mevexrcig=ru and -mevexrcig=rz will encode SAE-only EVEX instructions with 01, 10 and 11 RC bits, respectively. -mamd64 -mintel64 This option specifies that the assembler should accept only AMD64 or Intel64 ISA in 64-bit mode. The default is to accept common, Intel64 only and AMD64 ISAs. 
-O0 | -O | -O1 | -O2 | -Os Optimize instruction encoding with smaller instruction size. -O and -O1 encode 64-bit register load instructions with 64-bit immediate as 32-bit register load instructions with 31-bit or 32-bit immediates, encode 64-bit register clearing instructions with 32-bit register clearing instructions, encode 256-bit/512-bit VEX/EVEX vector register clearing instructions with 128-bit VEX vector register clearing instructions, encode 128-bit/256-bit EVEX vector register load/store instructions with VEX vector register load/store instructions, and encode 128-bit/256-bit EVEX packed integer logical instructions with 128-bit/256-bit VEX packed integer logical instructions. -O2 includes -O1 optimization plus encodes 256-bit/512-bit EVEX vector register clearing instructions with 128-bit EVEX vector register clearing instructions. In 64-bit mode VEX encoded instructions with commutative source operands will also have their source operands swapped if this allows using the 2-byte VEX prefix form instead of the 3-byte one. Certain forms of AND as well as OR with the same (register) operand specified twice will also be changed to TEST. -Os includes -O2 optimization plus encodes 16-bit, 32-bit and 64-bit register tests with immediate as 8-bit register tests with immediate. -O0 turns off this optimization. The following options are available when as is configured for the Ubicom IP2K series. -mip2022ext Specifies that the extended IP2022 instructions are allowed. -mip2022 Restores the default behaviour, which restricts the permitted instructions to just the basic IP2022 ones. The following options are available when as is configured for the Renesas M32C and M16C processors. -m32c Assemble M32C instructions. -m16c Assemble M16C instructions (the default). -relax Enable support for link-time relaxations. -h-tick-hex Support H'00 style hex constants in addition to 0x00 style. The following options are available when as is configured for the Renesas M32R (formerly Mitsubishi M32R) series. --m32rx Specify which processor in the M32R family is the target. The default is normally the M32R, but this option changes it to the M32RX. --warn-explicit-parallel-conflicts or --Wp Produce warning messages when questionable parallel constructs are encountered. --no-warn-explicit-parallel-conflicts or --Wnp Do not produce warning messages when questionable parallel constructs are encountered. The following options are available when as is configured for the Motorola 68000 series. -l Shorten references to undefined symbols, to one word instead of two. -m68000 | -m68008 | -m68010 | -m68020 | -m68030 | -m68040 | -m68060 | -m68302 | -m68331 | -m68332 | -m68333 | -m68340 | -mcpu32 | -m5200 Specify what processor in the 68000 family is the target. The default is normally the 68020, but this can be changed at configuration time. -m68881 | -m68882 | -mno-68881 | -mno-68882 The target machine does (or does not) have a floating-point coprocessor. The default is to assume a coprocessor for 68020, 68030, and cpu32. Although the basic 68000 is not compatible with the 68881, a combination of the two can be specified, since it's possible to do emulation of the coprocessor instructions with the main processor. -m68851 | -mno-68851 The target machine does (or does not) have a memory-management unit coprocessor. The default is to assume an MMU for 68020 and up. The following options are available when as is configured for an Altera Nios II processor. 
-relax-section Replace identified out-of-range branches with PC-relative "jmp" sequences when possible. The generated code sequences are suitable for use in position-independent code, but there is a practical limit on the extended branch range because of the length of the sequences. This option is the default. -relax-all Replace branch instructions not determinable to be in range and all call instructions with "jmp" and "callr" sequences (respectively). This option generates absolute relocations against the target symbols and is not appropriate for position-independent code. -no-relax Do not replace any branches or calls. -EB Generate big-endian output. -EL Generate little-endian output. This is the default. -march=architecture This option specifies the target architecture. The assembler issues an error message if an attempt is made to assemble an instruction which will not execute on the target architecture. The following architecture names are recognized: "r1", "r2". The default is "r1". The following options are available when as is configured for a PRU processor. -mlink-relax Assume that LD would optimize LDI32 instructions by checking the upper 16 bits of the expression. If they are all zeros, then LD would shorten the LDI32 instruction to a single LDI. In that case "as" will output DIFF relocations for diff expressions. -mno-link-relax Assume that LD would not optimize LDI32 instructions. As a consequence, DIFF relocations will not be emitted. -mno-warn-regname-label Do not warn if a label name matches a register name. Usually assembler programmers will want this warning to be emitted. C compilers may want to turn this off. The following options are available when as is configured for a MIPS processor. -G num This option sets the largest size of an object that can be referenced implicitly with the "gp" register. It is only accepted for targets that use ECOFF format, such as a DECstation running Ultrix. The default value is 8. -EB Generate "big endian" format output. -EL Generate "little endian" format output. -mips1 -mips2 -mips3 -mips4 -mips5 -mips32 -mips32r2 -mips32r3 -mips32r5 -mips32r6 -mips64 -mips64r2 -mips64r3 -mips64r5 -mips64r6 Generate code for a particular MIPS Instruction Set Architecture level. -mips1 is an alias for -march=r3000, -mips2 is an alias for -march=r6000, -mips3 is an alias for -march=r4000 and -mips4 is an alias for -march=r8000. -mips5, -mips32, -mips32r2, -mips32r3, -mips32r5, -mips32r6, -mips64, -mips64r2, -mips64r3, -mips64r5, and -mips64r6 correspond to generic MIPS V, MIPS32, MIPS32 Release 2, MIPS32 Release 3, MIPS32 Release 5, MIPS32 Release 6, MIPS64, MIPS64 Release 2, MIPS64 Release 3, MIPS64 Release 5, and MIPS64 Release 6 ISA processors, respectively. -march=cpu Generate code for a particular MIPS CPU. -mtune=cpu Schedule and tune for a particular MIPS CPU. -mfix7000 -mno-fix7000 Cause nops to be inserted if the read of the destination register of an mfhi or mflo instruction occurs in the following two instructions. -mfix-rm7000 -mno-fix-rm7000 Cause nops to be inserted if a dmult or dmultu instruction is followed by a load instruction. -mfix-r5900 -mno-fix-r5900 Do not attempt to schedule the preceding instruction into the delay slot of a branch instruction placed at the end of a short loop of six instructions or fewer and always schedule a "nop" instruction there instead. The short loop bug under certain conditions causes loops to execute only once or twice, due to a hardware bug in the R5900 chip. 
-mdebug -no-mdebug Cause stabs-style debugging output to go into an ECOFF-style .mdebug section instead of the standard ELF .stabs sections. -mpdr -mno-pdr Control generation of ".pdr" sections. -mgp32 -mfp32 The register sizes are normally inferred from the ISA and ABI, but these flags force a certain group of registers to be treated as 32 bits wide at all times. -mgp32 controls the size of general-purpose registers and -mfp32 controls the size of floating-point registers. -mgp64 -mfp64 The register sizes are normally inferred from the ISA and ABI, but these flags force a certain group of registers to be treated as 64 bits wide at all times. -mgp64 controls the size of general-purpose registers and -mfp64 controls the size of floating-point registers. -mfpxx The register sizes are normally inferred from the ISA and ABI, but using this flag in combination with -mabi=32 enables an ABI variant which will operate correctly with floating-point registers which are 32 or 64 bits wide. -modd-spreg -mno-odd-spreg Enable use of floating-point operations on odd-numbered single-precision registers when supported by the ISA. -mfpxx implies -mno-odd-spreg, otherwise the default is -modd-spreg. -mips16 -no-mips16 Generate code for the MIPS 16 processor. This is equivalent to putting ".module mips16" at the start of the assembly file. -no-mips16 turns off this option. -mmips16e2 -mno-mips16e2 Enable the use of MIPS16e2 instructions in MIPS16 mode. This is equivalent to putting ".module mips16e2" at the start of the assembly file. -mno-mips16e2 turns off this option. -mmicromips -mno-micromips Generate code for the microMIPS processor. This is equivalent to putting ".module micromips" at the start of the assembly file. -mno-micromips turns off this option. This is equivalent to putting ".module nomicromips" at the start of the assembly file. -msmartmips -mno-smartmips Enables the SmartMIPS extension to the MIPS32 instruction set. This is equivalent to putting ".module smartmips" at the start of the assembly file. -mno-smartmips turns off this option. -mips3d -no-mips3d Generate code for the MIPS-3D Application Specific Extension. This tells the assembler to accept MIPS-3D instructions. -no-mips3d turns off this option. -mdmx -no-mdmx Generate code for the MDMX Application Specific Extension. This tells the assembler to accept MDMX instructions. -no-mdmx turns off this option. -mdsp -mno-dsp Generate code for the DSP Release 1 Application Specific Extension. This tells the assembler to accept DSP Release 1 instructions. -mno-dsp turns off this option. -mdspr2 -mno-dspr2 Generate code for the DSP Release 2 Application Specific Extension. This option implies -mdsp. This tells the assembler to accept DSP Release 2 instructions. -mno-dspr2 turns off this option. -mdspr3 -mno-dspr3 Generate code for the DSP Release 3 Application Specific Extension. This option implies -mdsp and -mdspr2. This tells the assembler to accept DSP Release 3 instructions. -mno-dspr3 turns off this option. -mmsa -mno-msa Generate code for the MIPS SIMD Architecture Extension. This tells the assembler to accept MSA instructions. -mno-msa turns off this option. -mxpa -mno-xpa Generate code for the MIPS eXtended Physical Address (XPA) Extension. This tells the assembler to accept XPA instructions. -mno-xpa turns off this option. -mmt -mno-mt Generate code for the MT Application Specific Extension. This tells the assembler to accept MT instructions. -mno-mt turns off this option. 
-mmcu -mno-mcu Generate code for the MCU Application Specific Extension. This tells the assembler to accept MCU instructions. -mno-mcu turns off this option. -mcrc -mno-crc Generate code for the MIPS cyclic redundancy check (CRC) Application Specific Extension. This tells the assembler to accept CRC instructions. -mno-crc turns off this option. -mginv -mno-ginv Generate code for the Global INValidate (GINV) Application Specific Extension. This tells the assembler to accept GINV instructions. -mno-ginv turns off this option. -mloongson-mmi -mno-loongson-mmi Generate code for the Loongson MultiMedia extensions Instructions (MMI) Application Specific Extension. This tells the assembler to accept MMI instructions. -mno-loongson-mmi turns off this option. -mloongson-cam -mno-loongson-cam Generate code for the Loongson Content Address Memory (CAM) instructions. This tells the assembler to accept Loongson CAM instructions. -mno-loongson-cam turns off this option. -mloongson-ext -mno-loongson-ext Generate code for the Loongson EXTensions (EXT) instructions. This tells the assembler to accept Loongson EXT instructions. -mno-loongson-ext turns off this option. -mloongson-ext2 -mno-loongson-ext2 Generate code for the Loongson EXTensions R2 (EXT2) instructions. This option implies -mloongson-ext. This tells the assembler to accept Loongson EXT2 instructions. -mno-loongson-ext2 turns off this option. -minsn32 -mno-insn32 Only use 32-bit instruction encodings when generating code for the microMIPS processor. This option inhibits the use of any 16-bit instructions. This is equivalent to putting ".set insn32" at the start of the assembly file. -mno-insn32 turns off this option. This is equivalent to putting ".set noinsn32" at the start of the assembly file. By default -mno-insn32 is selected, allowing all instructions to be used. --construct-floats --no-construct-floats The --no-construct-floats option disables the construction of double width floating point constants by loading the two halves of the value into the two single width floating point registers that make up the double width register. By default --construct-floats is selected, allowing construction of these floating point constants. --relax-branch --no-relax-branch The --relax-branch option enables the relaxation of out-of-range branches. By default --no-relax-branch is selected, causing any out-of-range branches to produce an error. -mignore-branch-isa -mno-ignore-branch-isa Ignore branch checks for invalid transitions between ISA modes. The semantics of branches do not provide for an ISA mode switch, so in most cases the ISA mode a branch has been encoded for has to be the same as the ISA mode of the branch's target label. Therefore GAS has checks implemented that verify in branch assembly that the two ISA modes match. -mignore-branch-isa disables these checks. By default -mno-ignore-branch-isa is selected, causing any invalid branch requiring a transition between ISA modes to produce an error. -mnan=encoding Select between the IEEE 754-2008 (-mnan=2008) or the legacy (-mnan=legacy) NaN encoding format. The latter is the default. --emulation=name This option was formerly used to switch between ELF and ECOFF output on targets like IRIX 5 that supported both. MIPS ECOFF support was removed in GAS 2.24, so the option now serves little purpose. It is retained for backwards compatibility. The available configuration names are: mipself, mipslelf and mipsbelf. Choosing mipself now has no effect, since the output is always ELF. 
mipslelf and mipsbelf select little- and big-endian output respectively, but -EL and -EB are now the preferred options instead. -nocpp as ignores this option. It is accepted for compatibility with the native tools. --trap --no-trap --break --no-break Control how to deal with multiplication overflow and division by zero. --trap or --no-break (which are synonyms) take a trap exception (and only work for Instruction Set Architecture level 2 and higher); --break or --no-trap (also synonyms, and the default) take a break exception. -n When this option is used, as will issue a warning every time it generates a nop instruction from a macro. The following options are available when as is configured for a LoongArch processor. -fpic -fPIC Generate position-independent code. -fno-pic Don't generate position-independent code (default). The following options are available when as is configured for a Meta processor. "-mcpu=metac11" Generate code for Meta 1.1. "-mcpu=metac12" Generate code for Meta 1.2. "-mcpu=metac21" Generate code for Meta 2.1. "-mfpu=metac21" Allow code to use FPU hardware of Meta 2.1. See the info pages for documentation of the MMIX-specific options. The following options are available when as is configured for a NDS32 processor. "-O1" Optimize for performance. "-Os" Optimize for space. "-EL" Produce little endian data output. "-EB" Produce big endian data output. "-mpic" Generate PIC. "-mno-fp-as-gp-relax" Suppress fp-as-gp relaxation for this file. "-mb2bb-relax" Back-to-back branch optimization. "-mno-all-relax" Suppress all relaxation for this file. "-march=<arch name>" Assemble for architecture <arch name>, which could be v3, v3j, v3m, v3f, v3s, v2, v2j, v2f, v2s. "-mbaseline=<baseline>" Assemble for baseline <baseline>, which could be v2, v3, v3m. "-mfpu-freg=FREG" Specify an FPU configuration. "0 8 SP / 4 DP registers" "1 16 SP / 8 DP registers" "2 32 SP / 16 DP registers" "3 32 SP / 32 DP registers" "-mabi=abi" Specify an ABI version; <abi> could be v1, v2, v2fp, v2fpp. "-m[no-]mac" Enable/Disable Multiply instructions support. "-m[no-]div" Enable/Disable Divide instructions support. "-m[no-]16bit-ext" Enable/Disable 16-bit extension "-m[no-]dx-regs" Enable/Disable d0/d1 registers "-m[no-]perf-ext" Enable/Disable Performance extension "-m[no-]perf2-ext" Enable/Disable Performance extension 2 "-m[no-]string-ext" Enable/Disable String extension "-m[no-]reduced-regs" Enable/Disable Reduced Register configuration (GPR16) option "-m[no-]audio-isa-ext" Enable/Disable AUDIO ISA extension "-m[no-]fpu-sp-ext" Enable/Disable FPU SP extension "-m[no-]fpu-dp-ext" Enable/Disable FPU DP extension "-m[no-]fpu-fma" Enable/Disable FPU fused-multiply-add instructions "-mall-ext" Turn on support for all extensions and instructions The following options are available when as is configured for a PowerPC processor. -a32 Generate ELF32 or XCOFF32. -a64 Generate ELF64 or XCOFF64. -K PIC Set EF_PPC_RELOCATABLE_LIB in ELF flags. -mpwrx | -mpwr2 Generate code for POWER/2 (RIOS2). -mpwr Generate code for POWER (RIOS1). -m601 Generate code for PowerPC 601. -mppc, -mppc32, -m603, -m604 Generate code for PowerPC 603/604. -m403, -m405 Generate code for PowerPC 403/405. -m440 Generate code for PowerPC 440. BookE and some 405 instructions. -m464 Generate code for PowerPC 464. -m476 Generate code for PowerPC 476. -m7400, -m7410, -m7450, -m7455 Generate code for PowerPC 7400/7410/7450/7455. -m750cl, -mgekko, -mbroadway Generate code for PowerPC 750CL/Gekko/Broadway. 
-m821, -m850, -m860 Generate code for PowerPC 821/850/860. -mppc64, -m620 Generate code for PowerPC 620/625/630. -me200z2, -me200z4 Generate code for e200 variants, e200z2 with LSP, e200z4 with SPE. -me300 Generate code for PowerPC e300 family. -me500, -me500x2 Generate code for Motorola e500 core complex. -me500mc Generate code for Freescale e500mc core complex. -me500mc64 Generate code for Freescale e500mc64 core complex. -me5500 Generate code for Freescale e5500 core complex. -me6500 Generate code for Freescale e6500 core complex. -mlsp Enable LSP instructions. (Disables SPE and SPE2.) -mspe Generate code for Motorola SPE instructions. (Disables LSP.) -mspe2 Generate code for Freescale SPE2 instructions. (Disables LSP.) -mtitan Generate code for AppliedMicro Titan core complex. -mppc64bridge Generate code for PowerPC 64, including bridge insns. -mbooke Generate code for 32-bit BookE. -ma2 Generate code for A2 architecture. -maltivec Generate code for processors with AltiVec instructions. -mvle Generate code for Freescale PowerPC VLE instructions. -mvsx Generate code for processors with Vector-Scalar (VSX) instructions. -mhtm Generate code for processors with Hardware Transactional Memory instructions. -mpower4, -mpwr4 Generate code for Power4 architecture. -mpower5, -mpwr5, -mpwr5x Generate code for Power5 architecture. -mpower6, -mpwr6 Generate code for Power6 architecture. -mpower7, -mpwr7 Generate code for Power7 architecture. -mpower8, -mpwr8 Generate code for Power8 architecture. -mpower9, -mpwr9 Generate code for Power9 architecture. -mpower10, -mpwr10 Generate code for Power10 architecture. -mfuture Generate code for 'future' architecture. -mcell Generate code for Cell Broadband Engine architecture. -mcom Generate code for Power/PowerPC common instructions. -many Generate code for any architecture (PWR/PWRX/PPC). -mregnames Allow symbolic names for registers. -mno-regnames Do not allow symbolic names for registers. -mrelocatable Support for GCC's -mrelocatable option. -mrelocatable-lib Support for GCC's -mrelocatable-lib option. -memb Set PPC_EMB bit in ELF flags. -mlittle, -mlittle-endian, -le Generate code for a little endian machine. -mbig, -mbig-endian, -be Generate code for a big endian machine. -msolaris Generate code for Solaris. -mno-solaris Do not generate code for Solaris. -nops=count If an alignment directive inserts more than count nops, put a branch at the beginning to skip execution of the nops. The following options are available when as is configured for a RISC-V processor. -fpic -fPIC Generate position-independent code. -fno-pic Don't generate position-independent code (default). -march=ISA Select the base ISA, as specified by ISA. For example -march=rv32ima. If this option and the architecture attributes aren't set, then the assembler will check the default configure setting --with-arch=ISA. -misa-spec=ISAspec Select the default ISA spec version. If the ISA version is not set by -march, then the assembler sets the version according to the chosen default spec. If this option isn't set, then the assembler will check the default configure setting --with-isa-spec=ISAspec. -mpriv-spec=PRIVspec Select the privileged spec version, which determines whether a CSR is valid. If this option and the privilege attributes aren't set, then the assembler will check the default configure setting --with-priv-spec=PRIVspec. 
-mabi=ABI Selects the ABI, which is either "ilp32" or "lp64", optionally followed by "f", "d", or "q" to indicate single-precision, double-precision, or quad-precision floating-point calling convention, or none to indicate the soft-float calling convention. Also, "ilp32" can optionally be followed by "e" to indicate the RVE ABI, which is always soft-float. -mrelax Take advantage of linker relaxations to reduce the number of instructions required to materialize symbol addresses. (default) -mno-relax Don't do linker relaxations. -march-attr Generate the default contents for the RISC-V ELF attribute section if the .attribute directives are not set. This section is used to record the information that a linker or runtime loader needs to check compatibility. This information includes the ISA string, stack alignment requirement, unaligned memory accesses, and the major, minor and revision version of the privileged specification. -mno-arch-attr Don't generate the default RISC-V ELF attribute section if the .attribute directives are not set. -mcsr-check Enable CSR checking for ISA-dependent CSRs and read-only CSRs. ISA-dependent CSRs are only valid when the specific ISA is set. Read-only CSRs cannot be written by the CSR instructions. -mno-csr-check Don't do CSR checking. -mlittle-endian Generate code for a little endian machine. -mbig-endian Generate code for a big endian machine. See the info pages for documentation of the RX-specific options. The following options are available when as is configured for the s390 processor family. -m31 -m64 Select the word size, either 31/32 bits or 64 bits. -mesa -mzarch Select the architecture mode, either the Enterprise System Architecture (esa) or the z/Architecture mode (zarch). -march=processor Specify which s390 processor variant is the target, g5 (or arch3), g6, z900 (or arch5), z990 (or arch6), z9-109, z9-ec (or arch7), z10 (or arch8), z196 (or arch9), zEC12 (or arch10), z13 (or arch11), z14 (or arch12), z15 (or arch13), or z16 (or arch14). -mregnames -mno-regnames Allow or disallow symbolic names for registers. -mwarn-areg-zero Warn whenever the operand for a base or index register has been specified but evaluates to zero. The following options are available when as is configured for a TMS320C6000 processor. -march=arch Enable (only) instructions from architecture arch. By default, all instructions are permitted. The following values of arch are accepted: "c62x", "c64x", "c64x+", "c67x", "c67x+", "c674x". -mdsbt -mno-dsbt The -mdsbt option causes the assembler to generate the "Tag_ABI_DSBT" attribute with a value of 1, indicating that the code is using DSBT addressing. The -mno-dsbt option, the default, causes the tag to have a value of 0, indicating that the code does not use DSBT addressing. The linker will emit a warning if objects of different type (DSBT and non-DSBT) are linked together. -mpid=no -mpid=near -mpid=far The -mpid= option causes the assembler to generate the "Tag_ABI_PID" attribute with a value indicating the form of data addressing used by the code. -mpid=no, the default, indicates position-dependent data addressing, -mpid=near indicates position-independent addressing with GOT accesses using near DP addressing, and -mpid=far indicates position-independent addressing with GOT accesses using far DP addressing. The linker will emit a warning if objects built with different settings of this option are linked together. 
-mpic -mno-pic The -mpic option causes the assembler to generate the "Tag_ABI_PIC" attribute with a value of 1, indicating that the code is using position-independent code addressing. The -mno-pic option, the default, causes the tag to have a value of 0, indicating position-dependent code addressing. The linker will emit a warning if objects of different type (position-dependent and position-independent) are linked together. -mbig-endian -mlittle-endian Generate code for the specified endianness. The default is little-endian. The following options are available when as is configured for a TILE-Gx processor. -m32 | -m64 Select the word size, either 32 bits or 64 bits. -EB | -EL Select the endianness, either big-endian (-EB) or little-endian (-EL). The following option is available when as is configured for a Visium processor. -mtune=arch This option specifies the target architecture. If an attempt is made to assemble an instruction that will not execute on the target architecture, the assembler will issue an error message. The following names are recognized: "mcm24" "mcm" "gr5" "gr6" The following options are available when as is configured for an Xtensa processor. --text-section-literals | --no-text-section-literals Control the treatment of literal pools. The default is --no-text-section-literals, which places literals in separate sections in the output file. This allows the literal pool to be placed in a data RAM/ROM. With --text-section-literals, the literals are interspersed in the text section in order to keep them as close as possible to their references. This may be necessary for large assembly files, where the literals would otherwise be out of range of the "L32R" instructions in the text section. Literals are grouped into pools following ".literal_position" directives or preceding "ENTRY" instructions. These options only affect literals referenced via PC-relative "L32R" instructions; literals for absolute mode "L32R" instructions are handled separately. --auto-litpools | --no-auto-litpools Control the treatment of literal pools. The default is --no-auto-litpools, which in the absence of --text-section-literals places literals in separate sections in the output file. This allows the literal pool to be placed in a data RAM/ROM. With --auto-litpools, the literals are interspersed in the text section in order to keep them as close as possible to their references; explicit ".literal_position" directives are not required. This may be necessary for very large functions, where a single literal pool at the beginning of the function may not be reachable by "L32R" instructions at the end. These options only affect literals referenced via PC-relative "L32R" instructions; literals for absolute mode "L32R" instructions are handled separately. When used together with --text-section-literals, --auto-litpools takes precedence. --absolute-literals | --no-absolute-literals Indicate to the assembler whether "L32R" instructions use absolute or PC-relative addressing. If the processor includes the absolute addressing option, the default is to use absolute "L32R" relocations. Otherwise, only the PC-relative "L32R" relocations can be used. --target-align | --no-target-align Enable or disable automatic alignment to reduce branch penalties at some expense in code size. This optimization is enabled by default. Note that the assembler will always align instructions like "LOOP" that have fixed alignment requirements. 
--longcalls | --no-longcalls Enable or disable transformation of call instructions to allow calls across a greater range of addresses. This option should be used when call targets can potentially be out of range. It may degrade both code size and performance, but the linker can generally optimize away the unnecessary overhead when a call ends up within range. The default is --no-longcalls. --transform | --no-transform Enable or disable all assembler transformations of Xtensa instructions, including both relaxation and optimization. The default is --transform; --no-transform should only be used in the rare cases when the instructions must be exactly as specified in the assembly source. Using --no-transform causes out-of-range instruction operands to be errors. --rename-section oldname=newname Rename the oldname section to newname. This option can be used multiple times to rename multiple sections. --trampolines | --no-trampolines Enable or disable transformation of jump instructions to allow jumps across a greater range of addresses. This option should be used when jump targets can potentially be out of range. In the absence of such jumps this option does not affect code size or performance. The default is --trampolines. --abi-windowed | --abi-call0 Choose the ABI tag written to the ".xtensa.info" section. The ABI tag indicates the ABI of the assembly code. The linker issues a warning on an attempt to link object files with inconsistent ABI tags. The default ABI is chosen by the Xtensa core configuration. The following options are available when as is configured for a Z80 processor. -march=CPU[-EXT...][+EXT...] This option specifies the target processor. The assembler will issue an error message if an attempt is made to assemble an instruction which will not execute on the target processor. The following processor names are recognized: "z80", "z180", "ez80", "gbz80", "z80n", "r800". In addition to the basic instruction set, the assembler can be told to accept some extension mnemonics. For example, "-march=z180+sli+infc" extends z180 with SLI instructions and IN F,(C). The following extensions are currently supported: "full" (all known instructions), "adl" (ADL CPU mode by default, eZ80 only), "sli" (instruction known as SLI, SLL or SL1), "xyhl" (instructions with halves of index registers: IXL, IXH, IYL, IYH), "xdcb" (instructions like RotOp (II+d),R and BitOp n,(II+d),R), "infc" (instruction IN F,(C) or IN (C)), "outc0" (instruction OUT (C),0). Note that rather than extending a basic instruction set, the extension mnemonics starting with "-" revoke the respective functionality: "-march=z80-full+xyhl" first removes all default extensions and adds support for index register halves only. If this option is not specified then "-march=z80+xyhl+infc" is assumed. -local-prefix=prefix Mark all labels with the specified prefix as local. Such a label can still be marked global explicitly in the code. This option does not change the default local label prefix ".L"; it just adds a new one. -colonless Accept colonless labels. All symbols at the beginning of a line are treated as labels. -sdcc Accept assembler code produced by SDCC. -fp-s=FORMAT Format of single-precision floating-point numbers. Default: ieee754 (32 bit). -fp-d=FORMAT Format of double-precision floating-point numbers. Default: ieee754 (64 bit).
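As a brief illustration of the target-selection options above (the file names are placeholders, not part of the option reference): a 64-bit x86 source file might be assembled with as --64 -march=corei7+avx -o foo.o foo.s, which rejects any instruction outside the corei7 ISA plus the AVX extension, while as -mcpu=bf532 -o foo.o foo.s targets a specific Blackfin processor. These invocations simply combine flags documented above.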
# as > Portable GNU assembler. Primarily intended to assemble output from `gcc` to > be used by `ld`. More information: https://www.unix.com/man-page/osx/1/as/. * Assemble a file, writing the output to `a.out`: `as {{path/to/file.s}}` * Assemble the output to a given file: `as {{path/to/file.s}} -o {{path/to/output_file.o}}` * Generate output faster by skipping whitespace and comment preprocessing (should only be used for trusted compilers): `as -f {{path/to/file.s}}` * Add a given path to the list of directories to search for files specified in `.include` directives: `as -I {{path/to/directory}} {{path/to/file.s}}`
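* Restrict assembly to a given CPU and extension set, erroring on unsupported instructions (the `-march` value is an illustrative x86 example from the option list above): `as -march={{corei7+avx}} {{path/to/file.s}} -o {{path/to/output_file.o}}`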
systemd-cat
systemd-cat may be used to connect the standard input and output of a process to the journal, or as a filter tool in a shell pipeline to pass the output that the previous pipeline element generates to the journal. If no parameter is passed, systemd-cat will write everything it reads from standard input (stdin) to the journal. If parameters are passed, they are executed as a command line with standard output (stdout) and standard error output (stderr) connected to the journal, so that all it writes is stored in the journal. The following options are understood: -h, --help Print a short help text and exit. --version Print a short version string and exit. -t, --identifier= Specify a short string that is used to identify the logging tool. If not specified, no identification string is written to the journal. -p, --priority= Specify the default priority level for the logged messages. Pass one of "emerg", "alert", "crit", "err", "warning", "notice", "info", "debug", or a value between 0 and 7 (corresponding to the same named levels). These priority values are the same as defined by syslog(3). Defaults to "info". Note that this simply controls the default; individual lines may be logged with different levels if they are prefixed accordingly. For details, see --level-prefix= below. --stderr-priority= Specifies the default priority level for messages from the process's standard error output (stderr). Usage of this option is the same as the --priority= option, above, and both can be used at once. When both are used, --priority= will specify the default priority for standard output (stdout). If --stderr-priority= is not specified, messages from stderr will still be logged, with the same default priority level as stdout. Also, note that when stdout and stderr use the same default priority, the messages will be strictly ordered, because one channel is used for both. When the default priority differs, two channels are used, and so stdout messages will not be strictly ordered with respect to stderr messages - though they will tend to be approximately ordered. --level-prefix= Controls whether lines read are parsed for syslog priority level prefixes. If enabled (the default), a line prefixed with a priority prefix such as "<5>" is logged at priority 5 ("notice"), and similarly for the other priority levels. Takes a boolean argument.
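For example (the identifier and message text are illustrative; the options are those described above): echo "deployment finished" | systemd-cat -t my-script -p warning logs one line to the journal at the "warning" level under the identifier "my-script", while systemd-cat -t my-script ls runs ls with its stdout and stderr connected to the journal.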
# systemd-cat > Connect a pipeline or program's output streams with the systemd journal. > More information: https://www.freedesktop.org/software/systemd/man/systemd-cat.html. * Write the output of the specified command to the journal (both output streams are captured): `systemd-cat {{command}}` * Write the output of a pipeline to the journal (`stderr` stays connected to the terminal): `{{command}} | systemd-cat`
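* Write a message to the journal with a custom identifier and priority (placeholder values are examples): `echo {{message}} | systemd-cat -t {{identifier}} -p {{warning}}`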
git-rev-parse
Many Git porcelainish commands take a mixture of flags (i.e. parameters that begin with a dash -) and parameters meant for the underlying git rev-list command they use internally and flags and parameters for the other commands they use downstream of git rev-list. This command is used to distinguish between them. Operation Modes Each of these options must appear first on the command line. --parseopt Use git rev-parse in option parsing mode (see PARSEOPT section below). --sq-quote Use git rev-parse in shell quoting mode (see SQ-QUOTE section below). In contrast to the --sq option below, this mode only does quoting. Nothing else is done to the command input. Options for --parseopt --keep-dashdash Only meaningful in --parseopt mode. Tells the option parser to echo out the first -- encountered instead of skipping it. --stop-at-non-option Only meaningful in --parseopt mode. Lets the option parser stop at the first non-option argument. This can be used to parse sub-commands that take options themselves. --stuck-long Only meaningful in --parseopt mode. Output the options in their long form if available, and with their arguments stuck. Options for Filtering --revs-only Do not output flags and parameters not meant for the git rev-list command. --no-revs Do not output flags and parameters meant for the git rev-list command. --flags Do not output non-flag parameters. --no-flags Do not output flag parameters. Options for Output --default <arg> If there is no parameter given by the user, use <arg> instead. --prefix <arg> Behave as if git rev-parse was invoked from the <arg> subdirectory of the working tree. Any relative filenames are resolved as if they are prefixed by <arg> and will be printed in that form. This can be used to convert arguments to a command run in a subdirectory so that they can still be used after moving to the top-level of the repository. For example: prefix=$(git rev-parse --show-prefix) cd "$(git rev-parse --show-toplevel)" # rev-parse provides the -- needed for 'set' eval "set $(git rev-parse --sq --prefix "$prefix" -- "$@")" --verify Verify that exactly one parameter is provided, and that it can be turned into a raw 20-byte SHA-1 that can be used to access the object database. If so, emit it to the standard output; otherwise, error out. If you want to make sure that the output actually names an object in your object database and/or can be used as a specific type of object you require, you can add the ^{type} peeling operator to the parameter. For example, git rev-parse "$VAR^{commit}" will make sure $VAR names an existing object that is a commit-ish (i.e. a commit, or an annotated tag that points at a commit). To make sure that $VAR names an existing object of any type, git rev-parse "$VAR^{object}" can be used. Note that if you are verifying a name from an untrusted source, it is wise to use --end-of-options so that the name argument is not mistaken for another option. -q, --quiet Only meaningful in --verify mode. Do not output an error message if the first argument is not a valid object name; instead exit with non-zero status silently. SHA-1s for valid object names are printed to stdout on success. --sq Usually the output is made one line per flag and parameter. This option makes the output a single line, properly quoted for consumption by the shell. Useful when you expect your parameter to contain whitespace and newlines (e.g. when using pickaxe -S with git diff-*). In contrast to the --sq-quote option, the command input is still interpreted as usual. 
--short[=length] Same as --verify but shortens the object name to a unique prefix with at least length characters. The minimum length is 4, the default is the effective value of the core.abbrev configuration variable (see git-config(1)). --not When showing object names, prefix them with ^ and strip the ^ prefix from the object names that already have one. --abbrev-ref[=(strict|loose)] A non-ambiguous short name of the object's name. The option core.warnAmbiguousRefs is used to select the strict abbreviation mode. --symbolic Usually the object names are output in SHA-1 form (with possible ^ prefix); this option makes them output in a form as close to the original input as possible. --symbolic-full-name This is similar to --symbolic, but it omits inputs that are not refs (i.e. branch or tag names; or more explicitly the disambiguating "heads/master" form, when you want to name the "master" branch when there is an unfortunately named tag "master"), and shows them as full refnames (e.g. "refs/heads/master"). Options for Objects --all Show all refs found in refs/. --branches[=pattern], --tags[=pattern], --remotes[=pattern] Show all branches, tags, or remote-tracking branches, respectively (i.e., refs found in refs/heads, refs/tags, or refs/remotes, respectively). If a pattern is given, only refs matching the given shell glob are shown. If the pattern does not contain a globbing character (?, *, or [), it is turned into a prefix match by appending /*. --glob=pattern Show all refs matching the shell glob pattern pattern. If the pattern does not start with refs/, this is automatically prepended. If the pattern does not contain a globbing character (?, *, or [), it is turned into a prefix match by appending /*. --exclude=<glob-pattern> Do not include refs matching <glob-pattern> that the next --all, --branches, --tags, --remotes, or --glob would otherwise consider. Repetitions of this option accumulate exclusion patterns up to the next --all, --branches, --tags, --remotes, or --glob option (other options or arguments do not clear accumulated patterns). The patterns given should not begin with refs/heads, refs/tags, or refs/remotes when applied to --branches, --tags, or --remotes, respectively, and they must begin with refs/ when applied to --glob or --all. If a trailing /* is intended, it must be given explicitly. --exclude-hidden=[fetch|receive|uploadpack] Do not include refs that would be hidden by git-fetch, git-receive-pack or git-upload-pack by consulting the appropriate fetch.hideRefs, receive.hideRefs or uploadpack.hideRefs configuration along with transfer.hideRefs (see git-config(1)). This option affects the next pseudo-ref option --all or --glob and is cleared after processing them. --disambiguate=<prefix> Show every object whose name begins with the given prefix. The <prefix> must be at least 4 hexadecimal digits long to avoid listing each and every object in the repository by mistake. Options for Files --local-env-vars List the GIT_* environment variables that are local to the repository (e.g. GIT_DIR or GIT_WORK_TREE, but not GIT_EDITOR). Only the names of the variables are listed, not their value, even if they are set. --path-format=(absolute|relative) Controls the behavior of certain other options. If specified as absolute, the paths printed by those options will be absolute and canonical. If specified as relative, the paths will be relative to the current working directory if that is possible. The default is option specific. 
This option may be specified multiple times and affects only the arguments that follow it on the command line, either to the end of the command line or the next instance of this option. The following options are modified by --path-format: --git-dir Show $GIT_DIR if defined. Otherwise show the path to the .git directory. The path shown, when relative, is relative to the current working directory. If $GIT_DIR is not defined and the current directory is not detected to lie in a Git repository or work tree, print a message to stderr and exit with nonzero status. --git-common-dir Show $GIT_COMMON_DIR if defined, else $GIT_DIR. --resolve-git-dir <path> Check if <path> is a valid repository or a gitfile that points at a valid repository, and print the location of the repository. If <path> is a gitfile then the resolved path to the real repository is printed. --git-path <path> Resolve "$GIT_DIR/<path>", taking other path relocation variables such as $GIT_OBJECT_DIRECTORY, $GIT_INDEX_FILE... into account. For example, if $GIT_OBJECT_DIRECTORY is set to /foo/bar then "git rev-parse --git-path objects/abc" returns /foo/bar/abc. --show-toplevel Show the (by default, absolute) path of the top-level directory of the working tree. If there is no working tree, report an error. --show-superproject-working-tree Show the absolute path of the root of the superproject's working tree (if it exists) that uses the current repository as its submodule. Outputs nothing if the current repository is not used as a submodule by any project. --shared-index-path Show the path to the shared index file in split-index mode, or empty if not in split-index mode. The following options are unaffected by --path-format: --absolute-git-dir Like --git-dir, but its output is always the canonicalized absolute path. --is-inside-git-dir When the current working directory is below the repository directory print "true", otherwise "false". --is-inside-work-tree When the current working directory is inside the work tree of the repository print "true", otherwise "false". --is-bare-repository When the repository is bare print "true", otherwise "false". --is-shallow-repository When the repository is shallow print "true", otherwise "false". --show-cdup When the command is invoked from a subdirectory, show the path of the top-level directory relative to the current directory (typically a sequence of "../", or an empty string). --show-prefix When the command is invoked from a subdirectory, show the path of the current directory relative to the top-level directory. --show-object-format[=(storage|input|output)] Show the object format (hash algorithm) used for the repository for storage inside the .git directory, input, or output. For input, multiple algorithms may be printed, space-separated. If not specified, the default is "storage". Other Options --since=datestring, --after=datestring Parse the date string, and output the corresponding --max-age= parameter for git rev-list. --until=datestring, --before=datestring Parse the date string, and output the corresponding --min-age= parameter for git rev-list. <args>... Flags and parameters to be parsed.
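To illustrate a few of the options above in a script (the branch name and abbreviation length are only examples): git rev-parse --verify --quiet "topic^{commit}" prints the commit's SHA-1 if topic names a commit-ish and otherwise exits silently with non-zero status; git rev-parse --short=8 HEAD prints an abbreviated, still unique object name; and cd "$(git rev-parse --show-toplevel)" moves to the top of the working tree.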
# git rev-parse

> Display metadata related to specific revisions. More information:
> https://git-scm.com/docs/git-rev-parse.

* Get the commit hash of a branch: `git rev-parse {{branch_name}}`
* Get the current branch name: `git rev-parse --abbrev-ref HEAD`
* Get the absolute path to the root directory: `git rev-parse --show-toplevel`
patch
The patch utility shall read a source (patch) file containing any of four forms of difference (diff) listings produced by the diff utility (normal, copied context, unified context, or in the style of ed) and apply those differences to a file. By default, patch shall read from the standard input.

The patch utility shall attempt to determine the type of the diff listing, unless overruled by a -c, -e, -n, or -u option. If the patch file contains more than one patch, patch shall attempt to apply each of them as if they came from separate patch files. (In this case, the application shall ensure that the name of the patch file is determinable for each diff listing.)

The patch utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines.

The following options shall be supported:

-b
    Save a copy of the original contents of each modified file, before the differences are applied, in a file of the same name with the suffix .orig appended to it. If the file already exists, it shall be overwritten; if multiple patches are applied to the same file, the .orig file shall be written only for the first patch. When the -o outfile option is also specified, file.orig shall not be created but, if outfile already exists, outfile.orig shall be created.

-c
    Interpret the patch file as a copied context difference (the output of the diff utility when the -c or -C option is specified).

-d dir
    Change the current directory to dir before processing as described in the EXTENDED DESCRIPTION section.

-D define
    Mark changes with one of the following C preprocessor constructs:

        #ifdef define
        ...
        #endif

        #ifndef define
        ...
        #endif

    optionally combined with the C preprocessor construct #else. If the patched file is processed with the C preprocessor, where the macro define is defined, the output shall contain the changes from the patch file; otherwise, the output shall not contain the patches specified in the patch file.

-e
    Interpret the patch file as an ed script, rather than a diff script.

-i patchfile
    Read the patch information from the file named by the pathname patchfile, rather than the standard input.

-l
    (The letter ell.) Cause any sequence of <blank> characters in the difference script to match any sequence of <blank> characters in the input file. Other characters shall be matched exactly.

-n
    Interpret the script as a normal difference.

-N
    Ignore patches where the differences have already been applied to the file; by default, already-applied patches shall be rejected.

-o outfile
    Instead of modifying the files (specified by the file operand or the difference listings) directly, write a copy of the file referenced by each patch, with the appropriate differences applied, to outfile. Multiple patches for a single file shall be applied to the intermediate versions of the file created by any previous patches, and shall result in multiple, concatenated versions of the file being written to outfile.

-p num
    For all pathnames in the patch file that indicate the names of files to be patched, delete num pathname components from the beginning of each pathname. If the pathname in the patch file is absolute, any leading <slash> characters shall be considered the first component (that is, -p 1 shall remove the leading <slash> characters). Specifying -p 0 shall cause the full pathname to be used. If -p is not specified, only the basename (the final pathname component) shall be used.
-R
    Reverse the sense of the patch script; that is, assume that the difference script was created from the new version to the old version. The -R option cannot be used with ed scripts. The patch utility shall attempt to reverse each portion of the script before applying it. Rejected differences shall be saved in swapped format. If this option is not specified, then, until a portion of the patch file is successfully applied, patch shall attempt to apply each portion in its reversed sense as well as in its normal sense. If the reversed attempt is successful, the user shall be prompted to determine whether the -R option should be set.

-r rejectfile
    Override the default reject filename. In the default case, the reject file shall have the same name as the output file, with the suffix .rej appended to it; see Patch Application.

-u
    Interpret the patch file as a unified context difference (the output of the diff utility when the -u or -U option is specified).
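As a short illustration of combining these options (fix.diff and local.rej are hypothetical file names; the diff is assumed to carry one leading pathname component):

    $ patch -p1 -b -i fix.diff              # apply, keeping .orig backups of modified files
    $ patch -p1 -R -i fix.diff              # back the same patch out again
    $ patch -p1 -r local.rej -i fix.diff    # write any rejected hunks to local.rej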
# patch

> Patch a file (or files) with a diff file. Note that diff files should be
> generated by the `diff` command. More information: https://manned.org/patch.

* Apply a patch using a diff file (filenames must be included in the diff file): `patch < {{patch.diff}}`
* Apply a patch to a specific file: `patch {{path/to/file}} < {{patch.diff}}`
* Patch a file, writing the result to a different file: `patch {{path/to/input_file}} -o {{path/to/output_file}} < {{patch.diff}}`
* Apply a patch to the current directory, stripping the leading path component from filenames in the diff: `patch -p1 < {{patch.diff}}`
* Apply the reverse of a patch: `patch -R < {{patch.diff}}`
size
The GNU size utility lists the section sizes and the total size for each of the binary files objfile on its argument list. By default, one line of output is generated for each file, or for each module if the file is an archive.

objfile... are the files to be examined. If none are specified, the file "a.out" will be used instead.

The command-line options have the following meanings:

-A
-B
-G
--format=compatibility
    Using one of these options, you can choose whether the output from GNU size resembles output from System V size (using -A, or --format=sysv) or Berkeley size (using -B, or --format=berkeley). The default is the one-line format similar to Berkeley's. Alternatively, you can choose the GNU format output (using -G, or --format=gnu); this is similar to Berkeley's output format, but sizes are counted differently.

    Here is an example of the Berkeley (default) format of output from size:

        $ size --format=Berkeley ranlib size
           text    data     bss     dec     hex filename
         294880   81920   11592  388392   5ed28 ranlib
         294880   81920   11888  388688   5ee50 size

    The Berkeley style output counts read-only data in the "text" column, not in the "data" column; the "dec" and "hex" columns both display the sum of the "text", "data", and "bss" columns, in decimal and hexadecimal respectively.

    The GNU format counts read-only data in the "data" column, not the "text" column, and only displays the sum of the "text", "data", and "bss" columns once, in the "total" column. The --radix option can be used to change the number base for all columns. Here is the same data displayed with GNU conventions:

        $ size --format=GNU ranlib size
              text       data        bss      total filename
            279880      96920      11592     388392 ranlib
            279880      96920      11888     388688 size

    This is the same data, but displayed closer to System V conventions:

        $ size --format=SysV ranlib size
        ranlib  :
        section         size         addr
        .text         294880         8192
        .data          81920       303104
        .bss           11592       385024
        Total         388392

        size  :
        section         size         addr
        .text         294880         8192
        .data          81920       303104
        .bss           11888       385024
        Total         388688

--help
-h
-H
-?
    Show a summary of acceptable arguments and options.

-d
-o
-x
--radix=number
    Using one of these options, you can control whether the size of each section is given in decimal (-d, or --radix=10), octal (-o, or --radix=8), or hexadecimal (-x, or --radix=16). In --radix=number, only the three values (8, 10, 16) are supported. The total size is always given in two radices: decimal and hexadecimal for -d or -x output, or octal and hexadecimal if you're using -o.

--common
    Print the total size of common symbols in each file. When using Berkeley or GNU format, these are included in the bss size.

-t
--totals
    Show totals of all objects listed (Berkeley or GNU format mode only).

--target=bfdname
    Specify that the object-code format for objfile is bfdname. This option may not be necessary; size can automatically recognize many formats.

-v
-V
--version
    Display the version number of size.

-f
    Ignored. This option is used by other versions of the size program, but it is not supported by the GNU Binutils version.

@file
    Read command-line options from file. The options read are inserted in place of the original @file option. If file does not exist, or cannot be read, then the option will be treated literally, and not removed. Options in file are separated by whitespace. A whitespace character may be included in an option by surrounding the entire option in either single or double quotes. Any character (including a backslash) may be included by prefixing the character to be included with a backslash. The file may itself contain additional @file options; any such options will be processed recursively.
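As a brief illustration of combining these options (a.out and the .o files are placeholder names):

    $ size -B -t *.o       # Berkeley format plus a final totals line
    $ size -o a.out        # section sizes in octal
    $ size -A a.out        # System V style, one section per line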
# size

> Display the sizes of sections inside binary files. More information:
> https://sourceware.org/binutils/docs/binutils/size.html.

* Display the size of sections in a given object or executable file: `size {{path/to/file}}`
* Display the size of sections in a given object or executable file in [o]ctal: `size {{-o|--radix=8}} {{path/to/file}}`
* Display the size of sections in a given object or executable file in [d]ecimal: `size {{-d|--radix=10}} {{path/to/file}}`
* Display the size of sections in a given object or executable file in he[x]adecimal: `size {{-x|--radix=16}} {{path/to/file}}`